Inverse compton light source: a compact design proposal
Energy Technology Data Exchange (ETDEWEB)
Deitrick, Kirsten Elizabeth [Old Dominion Univ., Norfolk, VA (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)
2017-05-01
In the last decade, there has been increasing demand for a compact Inverse Compton Light Source (ICLS) capable of producing high-quality X-rays by colliding an electron beam with a high-quality laser. Only in recent years have both SRF and laser technology advanced enough that compact sources can approach the quality found at large installations such as the Advanced Photon Source at Argonne National Laboratory. Previously, X-ray sources either delivered high flux and brilliance at a large facility or fell many orders of magnitude short when produced by a bremsstrahlung source. A recent compact source was constructed by Lyncean Technologies using a storage ring to produce the electron beam that scatters the incident laser beam. By instead using a linear accelerator system for the electron beam, a significant increase in X-ray beam quality is possible, though even subsequent designs also featuring a storage ring offer improvement. Preceding the linear accelerator with an SRF reentrant gun allows for an extremely small transverse emittance, increasing the brilliance of the resulting X-ray source. To achieve sufficiently small emittances, both the geometry of the gun and the initial electron bunch distribution produced off the cathode were optimized. Using double-spoke SRF cavities for the linear accelerator allows an electron beam of reasonable size to be focused at the interaction point while preserving the low emittance generated by the gun. An aggressive final focusing section following the electron beam's exit from the accelerator produces the small spot size at the interaction point which results in an X-ray beam of high flux and brilliance. Taking all of these advancements together, a world-class compact X-ray source has been designed. It is anticipated that this source would far outperform conventional bremsstrahlung sources and many other compact ICLSs, while coming closer to performing at the
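The basic energy relationship behind a linac-driven ICLS can be sketched numerically. In the Thomson (negligible-recoil) limit an on-axis scattered photon carries E_x ≈ 4γ²E_laser. The 25 MeV beam energy and 1.2 eV (≈1 μm) laser below are illustrative assumptions, not parameters of this design.

```python
# On-axis inverse Compton photon energy in the Thomson limit:
#   E_x ≈ 4 γ² E_laser / (1 + γ²θ²)
# The electron energy and laser photon energy below are illustrative
# assumptions, not parameters taken from this design.
M_E = 0.511e6  # electron rest energy, eV

def ics_photon_energy_eV(e_kin_MeV, laser_eV, theta_rad=0.0):
    gamma = 1.0 + e_kin_MeV * 1e6 / M_E
    return 4.0 * gamma**2 * laser_eV / (1.0 + (gamma * theta_rad)**2)

# A 25 MeV beam colliding head-on with a 1.2 eV (~1 um) laser:
print(ics_photon_energy_eV(25.0, 1.2) / 1e3)  # keV, roughly 12 here
```

For these assumed numbers the on-axis photon lands near 12 keV, i.e. hard X-rays from a machine a few tens of MeV in energy, which is what makes the compact-footprint approach attractive.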
Inverse comptonization vs. thermal synchrotron
International Nuclear Information System (INIS)
Fenimore, E.E.; Klebesadel, R.W.; Laros, J.G.
1983-01-01
There are currently two radiation mechanisms being considered for gamma-ray bursts: thermal synchrotron and inverse comptonization. They are mutually exclusive, since thermal synchrotron requires a magnetic field of approximately 10¹² G, whereas inverse comptonization cannot produce a monotonic spectrum if the field is larger than 10¹¹ G and is too inefficient relative to thermal synchrotron unless the field is less than 10⁹ G. Neither mechanism can completely explain the observed characteristics of gamma-ray bursts. However, we conclude that thermal synchrotron is more consistent with the observations if the sources are approximately 40 kpc away, whereas inverse comptonization is more consistent if they are approximately 300 pc away. Unfortunately, the source distance is still not known and, thus, the radiation mechanism remains uncertain.
Analytical description of photon beam phase spaces in inverse Compton scattering sources
Directory of Open Access Journals (Sweden)
C. Curatolo
2017-08-01
We revisit the description of inverse Compton scattering sources and the photon beams generated therein, emphasizing the behavior of their phase space density distributions and how they depend upon those of the two colliding beams of electrons and photons. The main objective is to provide practical formulas for bandwidth, spectral density, and brilliance which are valid in general for any value of the recoil factor, i.e. both in the Thomson regime of negligible electron recoil and in the deep Compton recoil-dominated region, which is of interest for gamma-gamma colliders and Compton sources for the production of multi-GeV photon beams. We adopt a description based on the center-of-mass reference system of the electron-photon collision, in order to underline the role of the electron recoil and how it controls the relativistic Doppler/boost effect in various regimes. Using the center-of-mass reference frame greatly simplifies the treatment, allowing us to derive simple formulas expressed in terms of the rms momenta of the two colliding beams (emittance, energy spread, etc.) and the collimation angle in the laboratory system. Comparisons with Monte Carlo simulations of inverse Compton scattering in various scenarios are presented, showing very good agreement with the analytical formulas. In particular, we find that the bandwidth dependence on the electron beam emittance, of paramount importance in the Thomson regime, where it limits the amount of focusing imparted to the electron beam, becomes much less sensitive in the deep Compton regime, allowing a stronger focusing of the electron beam to enhance luminosity without loss of mono-chromaticity. A similar effect occurs concerning the bandwidth dependence on the frequency spread of the incident photons: in the deep recoil regime the bandwidth turns out to be much less dependent on the frequency spread. The set of formulas derived here is very helpful in designing inverse Compton sources in diverse regimes, giving a
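The regimes contrasted in this abstract are separated by the recoil factor; a minimal sketch of where a given machine sits, with illustrative beam parameters not taken from the paper:

```python
# Recoil factor X = 4 γ E_ph / (m_e c²): X << 1 is the Thomson regime,
# X ≳ 1 the deep-recoil (Compton) regime. The maximum scattered energy
#   E_max = 4 γ² E_ph / (1 + X)
# saturates toward the electron energy as X grows. Beam parameters
# below are illustrative, not taken from the paper.
M_E = 0.511e6  # electron rest energy, eV

def recoil_factor(e_beam_GeV, photon_eV):
    gamma = e_beam_GeV * 1e9 / M_E
    return 4.0 * gamma * photon_eV / M_E

def max_scattered_eV(e_beam_GeV, photon_eV):
    gamma = e_beam_GeV * 1e9 / M_E
    return 4.0 * gamma**2 * photon_eV / (1.0 + recoil_factor(e_beam_GeV, photon_eV))

# Thomson-regime Compton source: 150 MeV electrons, 1.2 eV laser
print(recoil_factor(0.15, 1.2))   # << 1
# Multi-GeV gamma production: 80 GeV electrons, 1.2 eV laser
print(recoil_factor(80.0, 1.2))   # order unity: deep recoil
```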
Development and characterization of a tunable ultrafast X-ray source via inverse-Compton-scattering
International Nuclear Information System (INIS)
Jochmann, Axel
2014-01-01
Ultrashort, nearly monochromatic hard X-ray pulses enrich the understanding of the dynamics and function of matter, e.g., the motion of atomic structures associated with ultrafast phase transitions, structural dynamics, and (bio)chemical reactions. Inverse Compton backscattering of intense laser pulses from relativistic electrons not only allows for the generation of bright X-ray pulses which can be used in pump-probe experiments, but also for the investigation of the electron beam dynamics at the interaction point. The focus of this PhD work lies on the detailed understanding of the kinematics during the interaction of the relativistic electron bunch and the laser pulse, in order to quantify the influence of various experiment parameters on the emitted X-ray radiation. The experiment was conducted at the ELBE center for high power radiation sources using the ELBE superconducting linear accelerator and the DRACO Ti:sapphire laser system. The combination of these two state-of-the-art apparatuses guaranteed the control and stability of the interacting beam parameters throughout the measurement. The emitted X-ray spectra were detected with a pixelated detector of 1024 by 256 elements (each 26 μm by 26 μm) to achieve an unprecedented spatial and energy resolution for a full characterization of the emitted spectrum, revealing parameter influences and correlations of both interacting beams. In this work the influence of the electron beam energy, the electron beam emittance, the laser bandwidth, and the energy-angle correlation on the spectra of the backscattered X-rays is quantified. A rigorous statistical analysis comparing experimental data to ab initio 3D simulations enabled, e.g., the extraction of the angular distribution of electrons with 1.5% accuracy and, in total, provides predictive capability for the future high-brightness hard X-ray source PHOENIX (Photon electron collider for Narrow bandwidth Intense X-rays) and potential all-optical gamma-ray sources. The results
Compact FEL-driven inverse compton scattering gamma-ray source
Energy Technology Data Exchange (ETDEWEB)
Placidi, M. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Di Mitri, S., E-mail: simone.dimitri@elettra.eu [Elettra - Sincrotrone Trieste S.C.p.A., 34149 Basovizza, Trieste (Italy); Pellegrini, C. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); University of California, Los Angeles, CA 90095 (United States); Penn, G. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)
2017-05-21
Many research and application areas require photon sources capable of producing gamma-ray beams in the multi-MeV energy range with reasonably high fluxes and compact footprints. Besides industrial, nuclear physics and security applications, considerable interest comes from the possibility of assessing the state of conservation of cultural assets like statues, columns, etc., via visualization and analysis techniques using high-energy photon beams. Computed Tomography scans, widely adopted in medicine at lower photon energies, presently provide high-quality three-dimensional imaging in industry and museums. We explore the feasibility of a compact source of quasi-monochromatic, multi-MeV gamma rays based on Inverse Compton Scattering (ICS) from a high-intensity ultra-violet (UV) beam generated in a free-electron laser by the electron beam itself. This scheme introduces a stronger relationship between the energy of the scattered photons and that of the electron beam, resulting in a device much more compact than a classic ICS source for a given scattered energy. The same electron beam is used to produce gamma rays in the 10–20 MeV range and UV radiation in the 10–15 eV range, in a ~4×22 m² footprint system.
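The "stronger relationship" between scattered-photon and electron-beam energy in this FEL-driven scheme can be made concrete: the FEL resonance makes the UV photon energy scale as γ², so the backscattered photon scales as γ⁴ rather than the usual γ². The undulator period and K value below are illustrative assumptions, not parameters of the design described above.

```python
# The FEL resonance gives E_UV = hc/λ with λ = λu(1 + K²/2)/(2γ²), so
# E_UV ∝ γ²; backscattering that light gives E_γ ≈ 4γ²E_UV ∝ γ⁴.
# Undulator period and K below are illustrative assumptions, not the
# parameters of the design described above.
M_E = 0.511e6    # electron rest energy, eV
HC = 1.23984e-6  # hc, eV·m

def fel_photon_eV(e_MeV, lambda_u_m=0.02, K=1.0):
    gamma = e_MeV * 1e6 / M_E
    return HC * 2.0 * gamma**2 / (lambda_u_m * (1.0 + K**2 / 2.0))

def gamma_ray_MeV(e_MeV, lambda_u_m=0.02, K=1.0):
    gamma = e_MeV * 1e6 / M_E
    return 4.0 * gamma**2 * fel_photon_eV(e_MeV, lambda_u_m, K) / 1e6

# Doubling the electron energy raises the scattered energy ~16-fold:
print(gamma_ray_MeV(230.0))  # lands in the 10-20 MeV range here
```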
Energy Technology Data Exchange (ETDEWEB)
Mihalcea, D.; Murokh, A.; Piot, P.; Ruan, J.
2017-07-01
A high-brilliance (~10^{22} photon s^{-1} mm^{-2} mrad^{-2}/0.1%) gamma-ray source experiment is currently being planned at Fermilab (E_{γ}≃1.1 MeV). The source implements high-repetition-rate inverse Compton scattering by colliding electron bunches formed in a ~300-MeV superconducting linac with a high-intensity laser pulse. This paper describes the design rationale along with some of the technical challenges associated with producing high-repetition-rate collisions. The expected performance of the gamma-ray source is also presented.
Inverse Compton gamma-rays from pulsars
International Nuclear Information System (INIS)
Morini, M.
1983-01-01
A model is proposed for pulsar optical and gamma-ray emission in which relativistic electron beams: (i) scatter the blackbody photons from the polar cap surface, giving inverse Compton gamma-rays, and (ii) produce synchrotron optical photons in the light cylinder region which are then inverse Compton scattered, giving further gamma-rays. The model is applied to the Vela pulsar, explaining the first gamma-ray pulse by inverse Compton scattering of synchrotron photons near the light cylinder, and the second gamma-ray pulse partly by inverse Compton scattering of synchrotron photons and partly by inverse Compton scattering of the thermal blackbody photons near the star surface. (author)
Inverse Compton gamma-ray source for nuclear physics and related applications at the Duke FEL
International Nuclear Information System (INIS)
O'Shea, P.G.; Litvinenko, V.N.; Madey, J.M.J.
1995-01-01
In recent years the development of intense, short-wavelength FEL light sources has opened opportunities for the development of new applications of high-energy Compton-backscattered photons. These applications range from medical imaging with X-ray photons to high-energy physics with γγ colliders. In this paper we discuss the possibilities for nuclear physics studies using polarized Compton-backscattered γ-rays from the Duke storage-ring-driven UV-FEL. There are currently a number of projects that produce polarized γ-rays for nuclear physics studies. All of these facilities operate by scattering conventional laser light against electrons circulating in a storage ring. In our scheme, intra-cavity scattering of the UV-FEL light will produce a γ-flux enhancement of approximately 10³ over existing sources. The Duke ring can operate at energies up to 1.2 GeV and can produce FEL photons up to 12.5 eV. We plan to generate γ-rays up to 200 MeV in energy with an average flux in excess of 10⁷/s/MeV, using a modest scattering beam of 10 mA average stored current. The γ-ray energy may be tuned by varying the FEL wavelength or by adjusting the stored electron beam energy. Because of the intense flux, we can eliminate the need for photon energy tagging by collimating the γ-ray beam. We will discuss the characteristics of the device and its research opportunities.
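The quoted figures (1.2 GeV electrons, 12.5 eV FEL photons, γ-rays up to ~200 MeV) can be cross-checked with the standard backscatter formula including electron recoil. This is a consistency sketch, not the authors' own calculation:

```python
# Consistency sketch for the quoted numbers (not the authors' own
# calculation): 1.2 GeV electrons backscattering 12.5 eV FEL photons,
# with the electron-recoil correction
#   E_max = 4 γ² E_L / (1 + 4 γ E_L / m_e c²).
M_E = 0.511e6  # electron rest energy, eV

def max_gamma_MeV(e_beam_GeV, laser_eV):
    gamma = e_beam_GeV * 1e9 / M_E
    x = 4.0 * gamma * laser_eV / M_E
    return 4.0 * gamma**2 * laser_eV / (1.0 + x) / 1e6

print(max_gamma_MeV(1.2, 12.5))  # a bit above 200 MeV
```

The recoil-corrected maximum comes out slightly above 200 MeV, consistent with the abstract's stated reach.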
Signature of inverse Compton emission from blazars
Gaur, Haritma; Mohan, Prashanth; Wierzcholska, Alicja; Gu, Minfeng
2018-01-01
Blazars are classified into high-, intermediate- and low-energy-peaked sources based on the location of their synchrotron peak. This lies in the infra-red/optical to ultra-violet bands for low- and intermediate-peaked blazars. The transition from synchrotron to inverse Compton emission falls in the X-ray bands for such sources. We present the spectral and timing analysis of 14 low- and intermediate-energy-peaked blazars observed with XMM-Newton spanning 31 epochs. Parametric fits to X-ray spectra help constrain the possible location of the transition from the high-energy end of the synchrotron emission to the low-energy end of the inverse Compton emission. In seven sources in our sample, we infer such a transition and constrain the break energy in the range 0.6-10 keV. The Lomb-Scargle periodogram is used to estimate the power spectral density (PSD) shape. It is well described by a power law in a majority of light curves, the index being flatter compared to the general expectation for active galactic nuclei, ranging here between 0.01 and 1.12, possibly due to short observation durations resulting in an absence of long-term trends. A toy model involving synchrotron self-Compton and external Compton (EC; disc, broad-line region, torus) mechanisms is used to estimate magnetic field strengths of ≤0.03-0.88 G in sources displaying the energy break and to infer a prominent EC contribution. The time-scale for variability being shorter than synchrotron cooling implies steeper PSD slopes, which are inferred in these sources.
Energy Technology Data Exchange (ETDEWEB)
Chaleil, A.; Le Flanchec, V.; Binet, A.; Nègre, J.P.; Devaux, J.F.; Jacob, V.; Millerioux, M.; Bayle, A.; Balleyguier, P. [CEA DAM DIF, F-91297 Arpajon (France); Prazeres, R. [CLIO/LCP, Bâtiment 201, Université Paris-Sud, F-91450 Orsay (France)
2016-12-21
An inverse Compton scattering source is under development at the ELSA linac of CEA, Bruyères-le-Châtel. Ultra-short X-ray pulses are produced by inverse Compton scattering of 30 ps laser pulses by relativistic electron bunches. The source will be able to operate in single-shot mode as well as in recurrent mode with 72.2 MHz pulse trains. Within this framework, an optical multipass system that multiplies the number of emitted X-ray photons in both regimes was designed in 2014, then implemented and tested at the ELSA facility in the course of 2015. The device is described from both geometrical and timing viewpoints. It is based on the idea of folding the laser optical path to pile up laser pulses at the interaction point, thus increasing the interaction probability. The X-ray output gain measurements obtained with this system are presented and compared with calculated expectations.
Design of a 4.8-m ring for inverse Compton scattering x-ray source
Directory of Open Access Journals (Sweden)
H. S. Xu
2014-07-01
In this paper we present the design of a 50 MeV compact electron storage ring with a 4.8-meter circumference for the Tsinghua Thomson scattering x-ray source. The ring consists of four dipole magnets with properly adjusted bending radii and edge angles for both horizontal and vertical focusing, and a pair of quadrupole magnets used to adjust the horizontal damping partition number. We find that the dynamic aperture of compact storage rings depends essentially on the intrinsic nonlinearity of dipole magnets with small bending radius. Hamiltonian dynamics is found to agree well with results from numerical particle tracking. We develop a self-consistent method to estimate the equilibrium beam parameters in the presence of intrabeam scattering, synchrotron radiation damping, quantum excitation, and residual gas scattering. We also optimize the rf parameters for achieving a maximum x-ray flux.
Constraint on Parameters of Inverse Compton Scattering Model for ...
Indian Academy of Sciences (India)
B2319+60, two parameters of the inverse Compton scattering model, the initial Lorentz factor and the factor of energy loss of the relativistic particles, are constrained. Key words. Pulsar—inverse Compton scattering—emission mechanism. 1. Introduction. Among various kinds of models for pulsar radio emission, the inverse ...
Time-independent inverse compton spectrum for photons from a ...
African Journals Online (AJOL)
The general theoretical aspects of inverse Compton scattering were investigated, and an equation for the time-independent inverse Compton spectrum for photons from a plasma cloud of finite extent was derived. This was done by convolving the Kompaneets equation used for describing the evolution of the photon spectrum ...
Detection of inverse Compton scattering in plasma wakefield experiments
Energy Technology Data Exchange (ETDEWEB)
Bohlen, Simon
2016-12-15
Inverse Compton scattering (ICS) is the scattering of photons off electrons in which the photons gain part of the electrons' energy. In combination with plasma wakefield acceleration (PWA), ICS offers a compact MeV γ-ray source. A numerical study of ICS radiation produced in PWA experiments at FLASHForward was performed, using an ICS simulation code and the results from particle-in-cell modelling. The possibility of determining electron beam properties from measurements of the γ-ray source was explored for a wide range of experimental conditions. It was found that information about the electron divergence, the electron spectrum and longitudinal information can be obtained from measurements of the ICS beams in some cases. For the measurement of the ICS profile at FLASHForward, a CsI(Tl) scintillator array was chosen, similar to scintillators used in other ICS experiments. To find a suitable detector for spectrum measurements, an experimental test of a Compton spectrometer at RAL was conducted. This test showed that a similar spectrometer could also be used at FLASHForward. However, changes to the spectrometer could be needed in order to use the pair production effect. In addition, further studies using Geant4 could lead to a better reconstruction of the obtained data. The studies presented here show that ICS is a promising method to analyse electron parameters from PWA experiments in further detail.
Inverse compton emission of gamma rays near the pulsar surface
International Nuclear Information System (INIS)
Morini, M.
1981-01-01
The physical conditions near the pulsar surface that might give rise to gamma-ray emission from the Crab and Vela pulsars are not yet well understood. Here I suggest that, in the context of the vacuum discharge mechanism proposed by Ruderman and Sutherland (1975), gamma rays are produced by inverse Compton scattering of secondary electrons on the thermal radiation of the star surface, as well as by curvature and synchrotron radiation. It is found that inverse Compton scattering is relevant if the neutron star surface temperature is greater than 10⁶ K or if the polar cap temperature is of the order of 5×10⁶ K. Inverse Compton scattering in anisotropic photon fields and in the Klein-Nishina regime is carefully considered here. (orig.)
Pulsar high energy emission due to inverse Compton scattering
Energy Technology Data Exchange (ETDEWEB)
Lyutikov, Maxim
2013-06-15
We discuss growing evidence that pulsar high-energy emission is generated via the inverse Compton mechanism. We reproduce the broadband spectrum of the Crab pulsar, from UV to very-high-energy gamma-rays, nearly ten decades in energy, within the framework of the cyclotron-self-Compton model. Emission is produced by two counter-streaming beams within the outer gaps, at distances above ∼20 NS radii. The outward-moving beam produces UV-X-ray photons via Doppler-boosted cyclotron emission, and GeV photons by Compton scattering the cyclotron photons produced by the inward-going beam. The scattering occurs in the deep Klein-Nishina regime, whereby the IC component provides a direct measurement of the particle distribution within the magnetosphere. The required plasma multiplicity is high, ∼10⁶–10⁷, but is consistent with the average particle flux injected into the pulsar wind nebula.
High-Energy Compton Scattering Light Sources
Hartemann, Fred V; Barty, C; Crane, John; Gibson, David J; Hartouni, E P; Tremaine, Aaron M
2005-01-01
No monochromatic, high-brightness, tunable light sources currently exist above 100 keV. Important applications that would benefit from such new hard x-ray sources include: nuclear resonance fluorescence spectroscopy, time-resolved positron annihilation spectroscopy, and MeV flash radiography. The peak brightness of Compton scattering light sources is derived for head-on collisions and found to scale with the electron beam brightness and the drive laser pulse energy. This γ²
Advanced Source Deconvolution Methods for Compton Telescopes
Zoglauer, Andreas
The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have become possible, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes, one which retrieves all source parameters (location, spectrum, polarization, flux) and achieves the best possible resolution and sensitivity at the same time, has not yet been found. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist. First, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode); both together were not possible up to now. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a
Resonant Inverse Compton Scattering Spectra from Highly Magnetized Neutron Stars
Wadiasingh, Zorawar; Baring, Matthew G.; Gonthier, Peter L.; Harding, Alice K.
2018-02-01
Hard, nonthermal, persistent pulsed X-ray emission extending between 10 and ∼150 keV has been observed in nearly 10 magnetars. For inner-magnetospheric models of such emission, resonant inverse Compton scattering of soft thermal photons by ultrarelativistic charges is the most efficient production mechanism. We present angle-dependent upscattering spectra and pulsed intensity maps for uncooled, relativistic electrons injected in inner regions of magnetar magnetospheres, calculated using collisional integrals over field loops. Our computations employ a new formulation of the QED Compton scattering cross section in strong magnetic fields that is physically correct for treating important spin-dependent effects in the cyclotron resonance, thereby producing correct photon spectra. The spectral cutoff energies are sensitive to the choices of observer viewing geometry, electron Lorentz factor, and scattering kinematics. We find that electrons with energies ≲15 MeV will emit most of their radiation below 250 keV, consistent with inferred turnovers for magnetar hard X-ray tails. More energetic electrons still emit mostly below 1 MeV, except for viewing perspectives sampling field-line tangents. Pulse profiles may be singly or doubly peaked dependent on viewing geometry, emission locale, and observed energy band. Magnetic pair production and photon splitting will attenuate spectra to hard X-ray energies, suppressing signals in the Fermi-LAT band. The resonant Compton spectra are strongly polarized, suggesting that hard X-ray polarimetry instruments such as X-Calibur, or a future Compton telescope, can prove central to constraining model geometry and physics.
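The cyclotron resonance central to this mechanism can be put in rough numbers: the electron cyclotron energy is ħω_B ≈ 11.6 keV per 10¹² G, and an ultrarelativistic charge sees a surface thermal photon boosted by ≈ γ(1 − βcosθ). The field strength, photon energy, and geometry below are illustrative assumptions, not values from the paper.

```python
# Electron cyclotron energy ħω_B ≈ 11.577 keV per 10¹² G. For an
# ultrarelativistic charge (β ≈ 1) moving through a thermal photon
# field, a photon of energy ε keV meeting the electron at angle θ
# appears resonant when γ ε (1 - cosθ) ≈ ħω_B. All numbers below are
# illustrative assumptions, not values from the paper.
KEV_PER_1E12_G = 11.577

def cyclotron_keV(B_gauss):
    return KEV_PER_1E12_G * B_gauss / 1e12

def resonant_gamma(B_gauss, photon_keV, cos_theta=-1.0):
    # Lorentz factor at which the boosted photon hits the resonance
    return cyclotron_keV(B_gauss) / (photon_keV * (1.0 - cos_theta))

# Magnetar-strength field (10¹⁴ G), ~0.5 keV surface photon, head-on:
print(resonant_gamma(1e14, 0.5))  # γ of order a thousand
```

This is why resonant upscattering is so efficient for magnetars: even modest Lorentz factors place soft thermal photons directly on the cyclotron resonance in 10¹⁴ G fields.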
Beam dynamics in Compton ring gamma sources
Directory of Open Access Journals (Sweden)
Eugene Bulyak
2006-09-01
Electron storage rings of GeV energy with laser pulse stacking cavities are promising intense sources of polarized hard photons which, via pair production, can be used to generate polarized positron beams. In this paper, the dynamics of electron bunches circulating in a storage ring and interacting with high-power laser pulses is studied both analytically and by simulation. Both the common features and the differences in the behavior of bunches interacting with an extremely high-power laser pulse and with a moderate pulse are discussed. Considerations on particular lattice designs for Compton gamma rings are also presented.
BOW TIES IN THE SKY. I. THE ANGULAR STRUCTURE OF INVERSE COMPTON GAMMA-RAY HALOS IN THE FERMI SKY
Energy Technology Data Exchange (ETDEWEB)
Broderick, Avery E.; Shalaby, Mohamad [Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada); Tiede, Paul [Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5 (Canada); Pfrommer, Christoph [Heidelberg Institute for Theoretical Studies, Schloss-Wolfsbrunnenweg 35, D-69118 Heidelberg (Germany); Puchwein, Ewald [Institute of Astronomy and Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 0HA (United Kingdom); Chang, Philip [Department of Physics, University of Wisconsin-Milwaukee, 1900 E. Kenwood Boulevard, Milwaukee, WI 53211 (United States); Lamberts, Astrid [Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States)
2016-12-01
Extended inverse Compton halos are generally anticipated around extragalactic sources of gamma rays with energies above 100 GeV. These result from the inverse Compton scattering of cosmic microwave background photons by a population of high-energy electron/positron pairs produced by the annihilation of the high-energy gamma rays on the infrared background. Despite the observed attenuation of the high-energy gamma rays, the halo emission has yet to be directly detected. Here, we demonstrate that in most cases these halos are expected to be highly anisotropic, distributing the upscattered gamma rays along axes defined either by the radio jets of the sources or oriented perpendicular to a global magnetic field. We present a pedagogical derivation of the angular structure of the inverse Compton halo and provide an analytic formalism that facilitates the generation of mock images. We discuss exploiting this fact for the purpose of detecting gamma-ray halos in a set of companion papers.
Compton backscattered collimated X-ray source
Ruth, Ronald D.; Huang, Zhirong
2000-01-01
A high-intensity, inexpensive and collimated x-ray source for applications such as x-ray lithography is disclosed. An intense pulse from a high power laser, stored in a high-finesse resonator, repetitively collides nearly head-on with and Compton backscatters off a bunched electron beam, having relatively low energy and circulating in a compact storage ring. Both the laser and the electron beams are tightly focused and matched at the interaction region inside the optical resonator. The laser-electron interaction not only gives rise to x-rays at the desired wavelength, but also cools and stabilizes the electrons against intrabeam scattering and Coulomb repulsion with each other in the storage ring. This cooling provides a compact, intense bunch of electrons suitable for many applications. In particular, a sufficient amount of x-rays can be generated by this device to make it an excellent and flexible Compton backscattered x-ray (CBX) source for high throughput x-ray lithography and many other applications.
Compton backscattered collimated x-ray source
Ruth, R.D.; Huang, Z.
1998-10-20
A high-intensity, inexpensive and collimated x-ray source for applications such as x-ray lithography is disclosed. An intense pulse from a high power laser, stored in a high-finesse resonator, repetitively collides nearly head-on with and Compton backscatters off a bunched electron beam, having relatively low energy and circulating in a compact storage ring. Both the laser and the electron beams are tightly focused and matched at the interaction region inside the optical resonator. The laser-electron interaction not only gives rise to x-rays at the desired wavelength, but also cools and stabilizes the electrons against intrabeam scattering and Coulomb repulsion with each other in the storage ring. This cooling provides a compact, intense bunch of electrons suitable for many applications. In particular, a sufficient amount of x-rays can be generated by this device to make it an excellent and flexible Compton backscattered x-ray (CBX) source for high throughput x-ray lithography and many other applications. 4 figs.
Nuclear photon science with inverse compton photon beam
International Nuclear Information System (INIS)
Fujiwara, Mamoru
2007-01-01
Recent developments of synchrotron radiation facilities and intense lasers are now guiding us to a new research frontier whose probes are a high-energy GeV photon beam and an intense, short-pulse MeV γ-ray beam. New directions for science development with photo-nuclear reactions are discussed. The inverse Compton γ-ray beam has two key advantages for exploring the microscopic quantum world: 1) good emittance and 2) high linear and circular polarization. With these advantages, photon beams in the energy range from MeV to GeV are used for studying hadron structure, nuclear structure, astrophysics, and materials science, as well as for applications in medical science. (author)
Production of X-rays by inverse Compton effect
International Nuclear Information System (INIS)
Mainardi, R.T.
2005-01-01
X-rays and gamma rays of high energy can be produced by the scattering of low-energy photons off high-energy electrons, a process governed by Compton scattering. If a laser beam is used, the x-ray beam inherits the intensity, monochromaticity and collimation of the laser. In this work we analyze the generation of intense x-ray beams with energies between 10 and 100 keV to be used in a wide range of applications where high intensity and high degrees of monochromaticity and polarization are important properties to improve imaging, reduce doses and improve radiation treatments. To this purpose we evaluated, using relativistic kinematics, the scattered beam properties in terms of the scattering angle. This arrangement is being considered in several laboratories worldwide as an alternative to synchrotron radiation and is referred to as 'table-top synchrotron radiation', since its installation cost is orders of magnitude smaller than that of a synchrotron radiation source. The radiation beam might exhibit non-linear properties in its interaction with matter, in a similar way as a laser beam, and we will investigate how to calibrate and evaluate TLD dosimeter properties, both in low- and high-intensity fields, either mono- or polyenergetic, over wide spectral energy ranges. (Author)
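The angle dependence evaluated from relativistic kinematics also determines how a collimator selects bandwidth. A hedged sketch in the Thomson limit, with illustrative beam values rather than parameters from this work:

```python
# Off-axis photons are red-shifted, E(θ) = E_0 / (1 + γ²θ²), so a
# collimator of half-angle θc selects a relative bandwidth
#   ΔE/E ≈ γ²θc² / (1 + γ²θc²)     (Thomson limit, ideal beams).
# Electron and laser energies are illustrative assumptions.
M_E = 0.511e6  # electron rest energy, eV

def energy_keV(e_MeV, laser_eV, theta_rad):
    gamma = e_MeV * 1e6 / M_E
    return 4.0 * gamma**2 * laser_eV / (1.0 + (gamma * theta_rad)**2) / 1e3

def collimated_bandwidth(e_MeV, theta_c_rad):
    gamma = e_MeV * 1e6 / M_E
    u = (gamma * theta_c_rad)**2
    return u / (1.0 + u)

# 50 MeV electrons on a 1.2 eV laser: ~46 keV on axis, and a 1 mrad
# collimator keeps roughly a 1% relative bandwidth.
print(energy_keV(50.0, 1.2, 0.0), collimated_bandwidth(50.0, 1e-3))
```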
High-repetition intra-cavity source of Compton radiation
International Nuclear Information System (INIS)
Pogorelsky, I; Polyanskiy, M; Agustsson, R; Campese, T; Murokh, A; Ovodenko, A; Shaftan, T
2014-01-01
We report our progress in developing a high-power Compton source for a diversity of applications ranging from university-scale compact x-ray light sources and metrology tools for EUV lithography, to high-brilliance gamma-sources for nuclear analysis. Our conceptual approach lies in multiplying the source’s repetition rate and increasing its average brightness by placing the Compton interaction point inside the optical cavity of an active laser. We discuss considerations in its design, our simulations, and tests of the laser’s cavity that confirm the feasibility of the proposed concept. (paper)
Beam dynamics simulation in the X-ray Compton source
Energy Technology Data Exchange (ETDEWEB)
Gladkikh, P.; Karnaukhov, I.; Telegin, Yu.; Shcherbakov, A. E-mail: shcherbakov@kipt.kharkov.ua; Zelinsky, A
2002-05-01
At the National Science Center 'Kharkov Institute of Physics and Technology', an X-ray source based on Compton scattering has been developed. A Monte Carlo computer code for simulating electron beam dynamics, taking the Compton scattering effect into account, is described in this report. The first results of computer simulations of beam dynamics with electron-photon interaction, and the parameters of the electron and photon beams, are presented. Calculations were carried out with the lattice of the SRS-800 synchrotron light source of the Ukrainian Synchrotron Center.
Directional Unfolded Source Term (DUST) for Compton Cameras.
Energy Technology Data Exchange (ETDEWEB)
Mitchell, Dean J.; Horne, Steven M.; O'Brien, Sean; Thoreson, Gregory G.
2018-03-01
A Directional Unfolded Source Term (DUST) algorithm was developed to enable improved spectral analysis capabilities using data collected by Compton cameras. Achieving this objective required modification of the detector response function in the Gamma Detector Response and Analysis Software (GADRAS). Experimental data that were collected in support of this work include measurements of calibration sources at a range of separation distances and cylindrical depleted uranium castings.
Simulation of inverse Compton scattering and its implications on the scattered linewidth
Ranjan, N.; Terzić, B.; Krafft, G. A.; Petrillo, V.; Drebot, I.; Serafini, L.
2018-03-01
Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. the tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this paper, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016), 10.1103/PhysRevAccelBeams.19.121302], which can perform the same simulations as those in CAIN and gives accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.
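The abstract does not reproduce the derived scaling law. As rough orientation only, a common back-of-envelope model adds the main relative-linewidth contributions in quadrature; the coefficients and the quadrature assumption below are simplifications (a hedged sketch, not the paper's result), and all numerical parameters are invented for illustration:

```python
import math

def ics_linewidth(sigma_gamma_rel, eps_n, sigma_x, gamma, theta_col, laser_bw):
    """Approximate RMS relative linewidth of an inverse Compton source,
    adding the usual contributions in quadrature:
      2*sigma_gamma/gamma        -- electron energy spread
      (gamma*sigma_x')^2         -- angular spread from emittance, sigma_x' = eps_n/(gamma*sigma_x)
      g^2 th^2 / (1 + g^2 th^2)  -- collection (collimation) half-angle theta_col
      laser_bw                   -- relative laser bandwidth
    """
    sigma_xp = eps_n / (gamma * sigma_x)
    terms = (
        2.0 * sigma_gamma_rel,
        (gamma * sigma_xp) ** 2,
        gamma**2 * theta_col**2 / (1.0 + gamma**2 * theta_col**2),
        laser_bw,
    )
    return math.sqrt(sum(t * t for t in terms))

# Invented example: 0.1% energy spread, 0.1 mm-mrad emittance, 10 um spot,
# gamma = 50, collimation half-angle 1/(2*gamma) = 10 mrad, 0.1% laser bandwidth
bw = ics_linewidth(1e-3, 0.1e-6, 10e-6, 50.0, 0.01, 1e-3)
print(f"~{100 * bw:.1f}% relative linewidth (collimation-dominated here)")
```

In this toy setting the collimation term dominates, which is consistent with the general point that tight collimation (or very low emittance) is needed before the energy-spread and emittance terms become visible.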
Sources of the X-rays Based on Compton Scattering
International Nuclear Information System (INIS)
Androsov, V.; Bulyak, E.; Gladkikh, P.; Karnaukhov, I.; Mytsykov, A.; Telegin, Yu.; Shcherbakov, A.; Zelinsky, A.
2007-01-01
The principles of intense X-ray generation by laser beam scattering off a relativistic electron beam are described, and facilities designed to produce X-rays based on Compton scattering are reviewed. The capabilities of various types of such facilities are estimated and discussed. An X-ray source based on a storage ring with low beam energy is described in detail, and the advantages of sources of this type are discussed. The results of calculations and numerical simulations carried out for the laser-electron storage ring NESTOR, under development at NSC KIPT, show the wide prospects of this type of accelerator facility.
Development of Compton gamma-ray sources at LLNL
Energy Technology Data Exchange (ETDEWEB)
Albert, F.; Anderson, S. G.; Ebbers, C. A.; Gibson, D. J.; Hartemann, F. V.; Marsh, R. A.; Messerly, M. J.; Prantil, M. A.; Wu, S.; Barty, C. P. J. [Lawrence Livermore National Laboratory, NIF and Photon Science, 7000 East avenue, Livermore, CA 94550 (United States)
2012-12-21
Compact Compton scattering gamma-ray sources offer the potential of studying nuclear photonics with new tools. The optimization of such sources depends on the final application, but generally requires maximizing the spectral density (photons/eV) of the gamma-ray beam while simultaneously reducing the overall bandwidth on target to minimize noise. We have developed an advanced design for one such system, comprising the RF drive, photoinjector, accelerator, and electron-generating and electron-scattering laser systems. This system uses a 120 Hz, 250 pC, 2 ps, 0.35 mm mrad electron beam with 250 MeV maximum energy in an X-band accelerator scattering off a 150 mJ, 10 ps, 532 nm laser to generate 5 × 10^10 photons/eV/s/sr at 0.5 MeV with an overall bandwidth of less than 1%. The source will be able to produce photons up to energies of 2.5 MeV. We also discuss Compton scattering gamma-ray source predictions given by numerical codes.
HIGH-ENERGY EMISSION OF GRB 130427A: EVIDENCE FOR INVERSE COMPTON RADIATION
International Nuclear Information System (INIS)
Fan, Yi-Zhong; Zhang, Fu-Wen; He, Hao-Ning; Zhou, Bei; Yang, Rui-Zhi; Jin, Zhi-Ping; Wei, Da-Ming; Tam, P. H. T.; Liang, Yun-Feng
2013-01-01
A nearby superluminous burst, GRB 130427A, was simultaneously detected by six γ-ray space telescopes (Swift, the Fermi Gamma-ray Burst Monitor (GBM) and Large Area Telescope, Konus-Wind, SPI-ACS/INTEGRAL, AGILE, and RHESSI) and by three RAPTOR full-sky persistent monitors. The isotropic γ-ray energy release is ∼10^54 erg, rendering it the most powerful explosion among gamma-ray bursts (GRBs) with a redshift z ≤ 0.5. The emission above 100 MeV lasted about one day, and four photons are at energies greater than 40 GeV. We show that the count rate of the 100 MeV-100 GeV emission may be mainly accounted for by forward shock synchrotron radiation, while inverse Compton radiation likely dominates at GeV-TeV energies. In particular, an inverse Compton radiation origin is favored for the ∼(95.3, 47.3, 41.4, 38.5, 32) GeV photons arriving at t ∼ (243, 256.3, 610.6, 3409.8, 34366.2) s after the trigger of Fermi-GBM. Interestingly, the external inverse Compton scattering of the prompt emission (the second episode, i.e., t ∼ 120-260 s) by the forward-shock-accelerated electrons is expected to produce a few γ-rays at energies above 10 GeV, while five were detected in the same time interval. A possible unified model for the prompt soft γ-ray, optical, and GeV emission of GRB 130427A, GRB 080319B, and GRB 090902B is outlined. Implications of the null detection of >1 TeV neutrinos from GRB 130427A by IceCube are discussed.
INVERSE COMPTON X-RAY EMISSION FROM SUPERNOVAE WITH COMPACT PROGENITORS: APPLICATION TO SN2011fe
International Nuclear Information System (INIS)
Margutti, R.; Soderberg, A. M.; Chomiuk, L.; Milisavljevic, D.; Foley, R. J.; Slane, P.; Moe, M.; Chevalier, R.; Hurley, K.; Hughes, J. P.; Fransson, C.; Barthelmy, S.; Cummings, J.; Boynton, W.; Enos, H.; Fellows, C.; Briggs, M.; Connaughton, V.; Costa, E.; Del Monte, E.
2012-01-01
We present a generalized analytic formalism for the inverse Compton X-ray emission from hydrogen-poor supernovae and apply this framework to SN 2011fe using Swift X-Ray Telescope (XRT), UVOT, and Chandra observations. We characterize the optical properties of SN 2011fe in the Swift bands and find them to be broadly consistent with a 'normal' SN Ia; however, no X-ray source is detected by either XRT or Chandra. We constrain the progenitor system mass-loss rate to Ṁ ≲ 10^-9 M_⊙ yr^-1 (3σ c.l.) for a wind velocity v_w = 100 km s^-1. Our result rules out symbiotic binary progenitors for SN 2011fe and argues against Roche-lobe-overflowing subgiants and main-sequence secondary stars if ≳1% of the transferred mass is lost at the Lagrangian points. Regardless of the density profile, the X-ray non-detections are suggestive of a clean environment (low CSM density n_CSM) out to radii of ∼2 × 10^15-10^16 cm around the progenitor site. This is either consistent with the bulk of material being confined within the binary system or with a significant delay between mass loss and supernova explosion. We furthermore combine X-ray and radio limits from Chomiuk et al. to constrain the post-shock energy density in magnetic fields. Finally, we searched for the shock breakout pulse using gamma-ray observations from the Interplanetary Network and find no compelling evidence for a supernova-associated burst. Based on the compact radius of the progenitor star, we estimate that the shock breakout pulse was likely not detectable by current satellites.
Testing earthquake source inversion methodologies
Page, Morgan T.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Inverse Compton gamma-rays from galactic dark matter annihilation. Anisotropy signatures
International Nuclear Information System (INIS)
Zhang, Le; Sigl, Guenter; Miniati, Francesco
2010-08-01
High energy electrons and positrons from annihilating dark matter can imprint unique angular anisotropies on the diffuse gamma-ray flux by inverse Compton scattering off the interstellar radiation field. We develop a numerical tool to compute gamma-ray emission from such electrons and positrons diffusing in the smooth host halo and in substructure halos with masses down to 10^-6 M_sun. We show that, unlike the total gamma-ray angular power spectrum observed by Fermi-LAT, the angular power spectrum from inverse Compton scattering is exponentially suppressed below an angular scale determined by the diffusion length of electrons and positrons. For TeV-scale dark matter with a canonical thermal freeze-out cross section of 3 × 10^-26 cm^3/s, this feature may be detectable by Fermi-LAT in the energy range 100-300 GeV after more sophisticated foreground subtraction. We also find that the total flux and the shape of the angular power spectrum depend sensitively on the spatial distribution of subhalos in the Milky Way. Finally, the contribution from the smooth host halo component to the gamma-ray mean intensity is negligibly small compared to subhalos. (orig.)
International Nuclear Information System (INIS)
Masakazu Washio; Kazuyuki Sakaue; Yoshimasa Hama; Yoshio Kamiya; Tomoko Gowa; Akihiko Masuda; Aki Murata; Ryo Moriyama; Shigeru Kashiwagi; Junji Urakawa
2007-01-01
The high quality beam generation project based on the High-Tech Research Center Project, approved by the Ministry of Education, Culture, Sports, Science and Technology in 1999, has been conducted by the Advanced Research Institute for Science and Engineering, Waseda University. In the project, a laser photocathode RF gun was selected as the high quality electron beam source. RF cavities with low dark current, made by a diamond turning technique, have been successfully manufactured. A low-emittance electron beam was realized by choosing a modified laser injection technique. The obtained normalized emittance was about 3 mm·mrad at an electron charge of 100 pC. Soft X-rays with an energy of 370 eV, in the energy region of the so-called water window, have been generated by inverse Compton scattering in collisions between an IR laser and the low-emittance electron beams. (Author)
International Nuclear Information System (INIS)
Bottacini, E.; Schady, P.; Rau, A.; Zhang, X.-L.; Greiner, J.; Boettcher, M.; Ajello, M.; Fendt, C.
2010-01-01
1ES 1959+650 is one of the most remarkable high-frequency-peaked BL Lacertae objects (HBL). In 2002, it exhibited a TeV γ-ray flare without a similar brightening of the synchrotron component at lower energies. This orphan TeV flare remained a mystery. We present the results of a multifrequency campaign triggered by the INTEGRAL IBIS detection of 1ES 1959+650. Our data range from optical to hard X-ray energies, thus covering the synchrotron and inverse Compton components simultaneously. We observed the source with INTEGRAL, the Swift X-Ray Telescope and UV-Optical Telescope, and nearly simultaneously with a ground-based optical telescope. The steep spectral component at X-ray energies is most likely due to synchrotron emission, while at soft γ-ray energies the hard spectral index may be interpreted as the onset of the high-energy component of the blazar spectral energy distribution (SED). This is the first clear measurement of a concave X-ray-soft γ-ray spectrum for an HBL. The SED can be well modeled with a leptonic synchrotron self-Compton model, although the fit requires a very hard electron spectral index of q ∼ 1.85, possibly indicating the relevance of second-order Fermi acceleration.
X-band RF Photoinjector for Laser Compton X-ray and Gamma-ray Sources
Energy Technology Data Exchange (ETDEWEB)
Marsh, R. A. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Anderson, G. G. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Anderson, S. G. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Gibson, D. J. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Barty, C. J. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
2015-05-06
Extremely bright, narrow-bandwidth gamma-ray sources are expanding the application of accelerator technology and light sources in new directions. An X-band test station has been commissioned at LLNL to develop multi-bunch electron beams. This multi-bunch mode will have stringent requirements for the electron bunch properties, including low emittance and energy spread, maintained across multiple bunches. The test station is a unique facility featuring a 200 MV/m, 5.59-cell X-band photogun powered by a SLAC XL4 klystron driven by a ScandiNova solid-state modulator. This paper focuses on its current status, including the generation and initial characterization of the first electron beam. The design and installation of the inverse Compton scattering interaction region and upgrade paths will be discussed along with future applications.
Brilliant GeV gamma-ray flash from inverse Compton scattering in the QED regime
Gong, Z.; Hu, R. H.; Lu, H. Y.; Yu, J. Q.; Wang, D. H.; Fu, E. G.; Chen, C. E.; He, X. T.; Yan, X. Q.
2018-04-01
An all-optical scheme is proposed for studying laser-plasma-based incoherent photon emission from inverse Compton scattering in the quantum electrodynamic regime. A theoretical model is presented to explain the coupling effects among radiation reaction trapping, the self-generated magnetic field and the spiral attractor in phase space, which guarantees the transfer of energy and angular momentum from electromagnetic fields to particles. Taking advantage of a prospective ∼10^23 W cm^-2 laser facility, 3D particle-in-cell simulations show a gamma-ray flash with unprecedented multi-petawatt power and a brightness of 1.7 × 10^23 photons s^-1 mm^-2 mrad^-2 per 0.1% bandwidth (at 1 GeV). These results bode well for new research directions in particle physics and laboratory astrophysics exploring laser-plasma interactions.
Scaling laws in high-energy inverse compton scattering. II. Effect of bulk motions
International Nuclear Information System (INIS)
Nozawa, Satoshi; Kohyama, Yasuharu; Itoh, Naoki
2010-01-01
We study the inverse Compton scattering of CMB photons off high-energy nonthermal electrons. We extend the formalism obtained in the previous paper to the case where the electrons have nonzero bulk motions with respect to the CMB frame. Assuming a power-law electron distribution, we find the same scaling law for the probability distribution function P_{1,K}(s) as for P_1(s), which corresponds to zero bulk motion, where the peak height and peak position depend only on the power-index parameter. We solved the rate equation analytically. It is found that the spectral intensity function also obeys the same scaling law. The effect of the bulk motions on the spectral intensity function is found to be small. The present study will be applicable to the analysis of X-ray and gamma-ray emission models of various astrophysical objects with nonzero bulk motions, such as radio galaxies and astrophysical jets.
Inverse source problems in elastodynamics
Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao
2018-04-01
We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
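The Landweber scheme mentioned in the abstract is the generic gradient iteration x_{k+1} = x_k + ω Aᵀ(b − A x_k), convergent for 0 < ω < 2/‖A‖². The sketch below uses a small random matrix as a stand-in for the actual elastodynamic forward map, so it illustrates only the iteration itself, not the paper's inversion scheme:

```python
import numpy as np

def landweber(A, b, n_iter=2000, omega=None):
    """Landweber iteration x_{k+1} = x_k + omega * A.T @ (b - A @ x_k).

    Converges to a least-squares solution for 0 < omega < 2 / ||A||_2^2."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2  # safe default step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * A.T @ (b - A @ x)
    return x

# Toy example: recover a "spatial source" from noiseless synthetic data.
# A is a random stand-in for the forward operator, not a physical model.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ x_true
x_rec = landweber(A, b)
print(np.round(x_rec, 4))
```

With noisy data one would stop the iteration early (the iteration count acts as a regularization parameter), which is the usual practical caveat for Landweber-type schemes.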
A compact X-ray source based on Compton scattering
Energy Technology Data Exchange (ETDEWEB)
Bulyak, E.; Gladkikh, P.; Grigor'ev, Yu.; Guk, I.; Karnaukhov, I.; Khodyachikh, A.; Kononenko, S.; Mocheshnikov, N.; Mytsykov, A.; Shcherbakov, A. E-mail: shcherbakov@kipt.kharkov.ua; Tarasenko, A.; Telegin, Yu.; Zelinsky, A
2001-07-21
The main parameters of the Kharkov electron storage ring N-100, with a beam energy range from 70 to 150 MeV, are presented. The main results obtained in experimental studies are briefly described. Plans for upgrading N-100 into an X-ray generator based on Compton back-scattering are presented. The electron beam energy range will be extended up to 250 MeV and the circumference of the storage ring will be 13.72 m. The lattice, the parameters of the electron beam and the Compton back-scattered photon flux are described.
Point-source inversion techniques
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
Inverse free electron laser accelerator for advanced light sources
Directory of Open Access Journals (Sweden)
J. P. Duris
2012-06-01
We discuss the inverse free electron laser (IFEL) scheme as a compact high-gradient accelerator solution for driving advanced light sources such as a soft X-ray free electron laser amplifier or an inverse Compton scattering based gamma-ray source. In particular, we present a series of new developments aimed at improving the design of future IFEL accelerators. These include a new procedure to optimize the choice of the undulator tapering, a new concept for prebunching which greatly improves the fraction of trapped particles and the final energy spread, and a self-consistent study of beam loading effects which leads to an energy-efficient, high laser-to-beam power conversion.
A simple algorithm for estimation of source-to-detector distance in Compton imaging
International Nuclear Information System (INIS)
Rawool-Sullivan, Mohini W.; Sullivan, John P.; Tornga, Shawn R.; Brumby, Steven P.
2008-01-01
Compton imaging is used to predict the location of gamma-emitting radiation sources. The X and Y coordinates of the source can be obtained using a back-projected image and a two-dimensional peak-finding algorithm. The emphasis of this work is to estimate the source-to-detector distance (Z). The algorithm presented uses the solid angle subtended by the reconstructed image at various source-to-detector distances. This algorithm was validated using both measured data from the prototype Compton imager (PCI) constructed at Los Alamos National Laboratory and simulated data for the same imager. Results show this method can be applied successfully to estimate Z, and it provides a way of determining Z without prior knowledge of the source location. This method is faster than methods that employ maximum likelihood because it is based on simple back projections of Compton scatter data.
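The paper's exact solid-angle criterion is not given in the abstract. As a hedged sketch, assume the reconstructed image occupies a circular patch of known physical radius r, so the solid angle it subtends fixes Z; the on-axis disk formula Ω(Z) = 2π(1 − Z/√(Z² + r²)) can then be inverted for Z by bisection (all names and parameters here are illustrative, not from the paper):

```python
import math

def disk_solid_angle(z, r):
    """Solid angle (sr) subtended by a disk of radius r viewed on-axis from distance z."""
    return 2.0 * math.pi * (1.0 - z / math.hypot(z, r))

def estimate_z(omega, r, z_lo=1e-3, z_hi=1e3, tol=1e-9):
    """Invert disk_solid_angle for z by bisection (omega decreases monotonically with z)."""
    while z_hi - z_lo > tol:
        mid = 0.5 * (z_lo + z_hi)
        if disk_solid_angle(mid, r) > omega:
            z_lo = mid  # too close: solid angle too large, move farther out
        else:
            z_hi = mid
    return 0.5 * (z_lo + z_hi)

# Round trip: a 10 cm radius image patch viewed from 50 cm
omega = disk_solid_angle(0.5, 0.1)
print(round(estimate_z(omega, 0.1), 6))  # recovers ~0.5 m
```

Because the solid angle is monotonic in Z, any root-finder works; bisection is shown only for self-containedness.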
Synchrotron self-inverse Compton radiation from reverse shock on GRB 120326A
Energy Technology Data Exchange (ETDEWEB)
Urata, Yuji [Institute of Astronomy, National Central University, Chung-Li 32054, Taiwan (China); Huang, Kuiyun; Takahashi, Satoko [Academia Sinica Institute of Astronomy and Astrophysics, Taipei 106, Taiwan (China); Im, Myungshin; Kim, Jae-Woo; Jang, Minsung [Center for the Exploration of the Origin of the Universe, Department of Physics and Astronomy, FPRD, Seoul National University, Shillim-dong, San 56-1, Kwanak-gu, Seoul (Korea, Republic of); Yamaoka, Kazutaka [Solar-Terrestrial Environment Laboratory, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8601 (Japan); Tashiro, Makoto [Department of Physics, Saitama University, Shimo-Okubo, Saitama 338-8570 (Japan); Pak, Soojong, E-mail: urata@astro.ncu.edu.tw [School of Space Research, Kyung Hee University, Yongin, Gyeonggi 446-701 (Korea, Republic of)
2014-07-10
We present multi-wavelength observations of a typical long-duration burst, GRB 120326A at z = 1.798, including rapid observations using the Submillimeter Array (SMA) and comprehensive monitoring in the X-ray and optical. The SMA observation provided the fastest detection to date among seven submillimeter afterglows at 230 GHz. The prompt spectral analysis, using Swift and Suzaku, yielded a spectral peak energy of E_peak^src = 107.8 ± 15.3 keV and an equivalent isotropic energy of E_iso = 3.18^{+0.40}_{-0.32} × 10^52 erg. The temporal evolution and spectral properties in the optical were consistent with standard forward shock synchrotron emission with jet collimation (6.69° ± 0.16°). The forward shock modeling, using a two-dimensional relativistic hydrodynamic jet simulation, also yielded reasonable burst explosion and synchrotron radiation parameters for the optical afterglow. The X-ray light curve showed no apparent jet break, and the temporal decay index relation between the X-ray and optical (α_O − α_X = −1.45 ± 0.10) indicated different radiation processes in each of them. Introducing synchrotron self-inverse Compton radiation from the reverse shock is a possible solution, and the detection and slow decay of the afterglow in the submillimeter support this idea. The observed temporal evolution and spectral properties, as well as the forward shock modeling parameters, enabled us to determine reasonable functions to describe the afterglow properties. Because half of all events share similar X-ray and optical properties with the current event, GRB 120326A will be a benchmark for further rapid follow-ups using submillimeter instruments such as the SMA and the Atacama Large Millimeter/submillimeter Array.
Gamma ray burst source locations with the Ulysses/Compton/PVO Network
International Nuclear Information System (INIS)
Cline, T.L.; Hurley, K.C.; Boer, M.; Sommer, M.; Niel, M.; Fishman, G.J.; Kouveliotou, C.; Meegan, C.A.; Paciesas, W.S.; Wilson, R.B.; Laros, J.G.; Klebesadel, R.W.
1991-01-01
The new interplanetary gamma-ray burst network will determine source fields with unprecedented accuracy. The baseline of the Ulysses mission and the locations of the Pioneer-Venus Orbiter and of Mars Observer will ensure precision to a few tens of arc seconds. Combined with the event phenomenologies of the Burst and Transient Source Experiment on the Compton Observatory, the source locations to be achieved with this network may provide a basic new understanding of the puzzle of gamma-ray bursts.
International Nuclear Information System (INIS)
Washio, M.; Sakaue, K.; Hama, Y.; Kamiya, Y.; Moriyama, R.; Hezume, K.; Saito, T.; Kuroda, R.; Kashiwagi, S.; Ushida, K.; Hayano, H.; Urakawa, J.
2006-01-01
A high-quality beam generation project, based on the High-Tech Research Center Project approved by the Ministry of Education, Culture, Sports, Science and Technology in 1999, has been conducted by the Advanced Research Institute for Science and Engineering, Waseda University. In this project, a laser photo-cathode RF gun was selected as the high-quality electron beam source. RF cavities with low dark current, manufactured by a diamond-turning technique, have been successfully produced. A low-emittance electron beam was realized by choosing a modified laser-injection technique; the obtained normalized emittance was about 3 mm·mrad at an electron charge of 100 pC. Soft X-ray beams with an energy of 370 eV, in the energy region of the so-called 'water window', were generated by inverse Compton scattering in collisions between an IR laser and the low-emittance electron beams. (authors)
International Nuclear Information System (INIS)
Moon, Sunghwan
2017-01-01
A Compton camera has been introduced in single photon emission computed tomography to improve on the low efficiency of the conventional gamma camera. In general, Compton camera data give rise to the conical Radon transform. Here we consider a conical Radon transform whose vertices lie on a rotationally symmetric set with respect to a coordinate axis. We show that this conical Radon transform can be decomposed into two transforms: the spherical sectional transform and the weighted fan beam transform. After finding inversion formulas for these two transforms, we provide an inversion formula for the conical Radon transform. (paper)
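The conical geometry underlying this decomposition can be made explicit. A standard form of the conical Radon transform arising from Compton cameras is sketched below; the notation is generic, not necessarily the paper's:

```latex
% Cone with vertex u \in \mathbb{R}^3, axis direction \beta \in S^2,
% and half-opening angle \psi \in (0, \pi/2):
Cf(u,\beta,\psi) = \int_{\{x \,:\, (x-u)\cdot\beta = |x-u|\cos\psi\}} f(x)\,\mathrm{d}S(x),
% where dS is the surface measure on the cone. A Compton camera fixes
% \psi through the measured energy loss via the Compton relation
\cos\psi = 1 - m_e c^2\left(\frac{1}{E'} - \frac{1}{E}\right),
% with E and E' the photon energies before and after scattering.
```

Each detected event thus constrains the source to one such cone, and reconstructing f from these cone integrals is the inversion problem the abstract addresses.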
Polarized γ source based on Compton backscattering in a laser cavity
Directory of Open Access Journals (Sweden)
V. Yakimenko
2006-09-01
We propose a novel gamma source suitable for generating a polarized positron beam for the next generation of electron-positron colliders, such as the International Linear Collider (ILC) and the Compact Linear Collider (CLIC). This 30-MeV polarized gamma source is based on Compton scattering of electron bunches produced by a 4-GeV linac inside a picosecond CO₂ laser cavity. We identified and experimentally verified the optimum conditions for obtaining at least one gamma photon per electron. After multiplication at several consecutive interaction points, the circularly polarized gamma rays are stopped on a target, thereby creating copious numbers of polarized positrons. We address the practicality of having an intracavity Compton polarized-positron source as the injector for these new colliders.
A compact Compton backscatter X-ray source for mammography and coronary angiography
International Nuclear Information System (INIS)
Nguyen, D.C.; Kinross-Wright, J.M.; Weber, M.E.; Volz, S.K.; Gierman, S.M.; Hayes, K.; Vernon, W.; Goldstein, D.J.
1998-01-01
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The project objective is to generate a large flux of tunable, monochromatic x-rays for use in mammography and coronary angiography. The approach is based on Compton backscattering of an ultraviolet solid-state laser beam against the high-brightness 20-MeV electron beams from a compact linear accelerator. The direct Compton backscatter approach failed to produce a large flux of x-rays due to the low photon flux of the scattering solid-state laser. The authors have modified the design of the compact x-ray source to a new Compton backscattering geometry using a regenerative amplifier free-electron laser. They have successfully demonstrated the production of a large flux of infrared photons and a high-brightness electron beam focused in both dimensions for performing Compton backscattering in a regenerative amplifier geometry.
Full traveltime inversion in source domain
Liu, Lu
2017-06-01
This paper presents a new method of source-domain full traveltime inversion (FTI). The objective of this study is to automatically build near-surface velocity models using the early arrivals of seismic data. The method generates an inverted velocity model that kinematically best matches the reconstructed plane-wave source of the early arrivals with the true source in the source domain. It does not require picking first arrivals for tomography, which is one of the most challenging aspects of ray-based tomographic inversion. In addition, the method does not need to estimate the source wavelet, which is a necessity for receiver-domain wave-equation velocity inversion. We applied the method to a synthetic dataset; the results show that it generates a reasonable background velocity even when shingling first arrivals exist, and can provide a good initial velocity model for conventional full waveform inversion (FWI).
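The kinematic (traveltime, rather than amplitude) matching at the heart of such methods can be illustrated with a cross-correlation traveltime measurement. The sketch below is illustrative only; the wavelet, names, and numbers are ours, not from the paper:

```python
import numpy as np

def traveltime_shift(trace_obs, trace_syn, dt):
    """Estimate the traveltime delay of trace_obs relative to trace_syn
    as the lag maximizing their cross-correlation -- a kinematic, not
    amplitude, comparison, i.e. the kind of mismatch an FTI-style update
    tries to drive to zero by correcting the velocity model."""
    xcorr = np.correlate(trace_obs, trace_syn, mode="full")
    lag = np.argmax(xcorr) - (len(trace_syn) - 1)
    return lag * dt

# Synthetic check: a Ricker wavelet delayed by 0.12 s.
dt = 0.004
t = np.arange(0.0, 2.0, dt)

def ricker(t, t0, f=10.0):
    a = (np.pi * f * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

syn = ricker(t, 0.50)   # "modeled" early arrival
obs = ricker(t, 0.62)   # "observed" early arrival, 0.12 s later
print(traveltime_shift(obs, syn, dt))
```

The measured delay (here 0.12 s) is the quantity a traveltime inversion would map back into a velocity update along the arrival's path.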
Inverse source problems for eddy current equations
International Nuclear Information System (INIS)
Rodríguez, Ana Alonso; Valli, Alberto; Camaño, Jessika
2012-01-01
We study the inverse source problem for the eddy current approximation of Maxwell equations. As for the full system of Maxwell equations, we show that a volume current source cannot be uniquely identified by knowledge of the tangential components of the electromagnetic fields on the boundary, and we characterize the space of non-radiating sources. On the other hand, we prove that the inverse source problem has a unique solution if the source is supported on the boundary of a subdomain or if it is the sum of a finite number of dipoles. We address the applicability of this result for the localization of brain activity from electroencephalography and magnetoencephalography measurements. (paper)
Advanced Laser-Compton Gamma-Ray Sources for Nuclear Materials Detection, Assay and Imaging
Barty, C. P. J.
2015-10-01
Highly-collimated, polarized, mono-energetic beams of tunable gamma-rays may be created via the optimized Compton scattering of pulsed lasers off of ultra-bright, relativistic electron beams. Above 2 MeV, the peak brilliance of such sources can exceed that of the world's largest synchrotrons by more than 15 orders of magnitude and can enable for the first time the efficient pursuit of nuclear science and applications with photon beams, i.e. Nuclear Photonics. Potential applications are numerous and include isotope-specific nuclear materials management, element-specific medical radiography and radiology, non-destructive, isotope-specific, material assay and imaging, precision spectroscopy of nuclear resonances and photon-induced fission. This review covers activities at the Lawrence Livermore National Laboratory related to the design and optimization of mono-energetic, laser-Compton gamma-ray systems and introduces isotope-specific nuclear materials detection and assay applications enabled by them.
Compact X-ray source based on Compton backscattering
Bulyak, E V; Zelinsky, A; Karnaukhov, I; Kononenko, S; Lapshin, V G; Mytsykov, A; Telegin, Yu P; Khodyachikh, A; Shcherbakov, A; Molodkin, V; Nemoshkalenko, V; Shpak, A
2002-01-01
The feasibility study of an intense X-ray source based on the interaction between the electron beam in a compact storage ring and a laser pulse accumulated in an optical resonator is carried out. We propose to reconstruct the 160 MeV electron storage ring N-100, which was shut down several years ago. A new magnetic lattice will provide a transverse electron beam size of ≈35 µm at the electron beam-laser beam interaction point. The proposed facility is to generate X-ray beams with an intensity of ≈2.6×10^14 s^−1 and a spectral brightness of ≈10^12 phot/0.1%bw/s/mm²/mrad² in the energy range from 10 keV up to 0.5 MeV. These X-ray beam parameters meet the requirements of most technological and scientific applications. Besides, we plan to use the new facility for studying the laser cooling effect.
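The quoted endpoint of the energy range can be checked against standard Compton-backscattering kinematics. A minimal sketch, assuming for illustration a head-on collision with a 1064 nm (≈1 µm) laser; the function name and numbers are ours, not from the paper:

```python
# On-axis energy of a Compton-backscattered photon (head-on collision):
#   E_x = 4 * gamma^2 * E_L / (1 + 4*gamma*E_L / (m_e c^2)),
# where gamma is the electron Lorentz factor and E_L the laser photon energy.
M_E_C2_EV = 510_998.95  # electron rest energy, eV

def backscatter_emax_ev(e_beam_mev: float, laser_wavelength_nm: float) -> float:
    """Maximum (on-axis) Compton-backscattered photon energy in eV."""
    gamma = 1.0 + e_beam_mev * 1e6 / M_E_C2_EV
    e_laser = 1239.841984 / laser_wavelength_nm      # photon energy, eV
    recoil = 4.0 * gamma * e_laser / M_E_C2_EV       # small recoil correction
    return 4.0 * gamma**2 * e_laser / (1.0 + recoil)

# 160 MeV ring (N-100) with a 1064 nm laser: roughly 0.46 MeV, consistent
# with the "up to 0.5 MeV" endpoint quoted in the abstract.
print(backscatter_emax_ev(160.0, 1064.0))
```

The 4γ² upshift is why a modest 160 MeV ring reaches the hard X-ray / sub-MeV range from an optical-wavelength laser.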
X-Band Linac Beam-Line for Medical Compton Scattering X-Ray Source
Dobashi, Katsuhiro; Ebina, Futaro; Fukasawa, Atsushi; Hayano, Hitoshi; Higo, Toshiyasu; Kaneyasu, Tatsuo; Ogino, Haruyuki; Sakamoto, Fumito; Uesaka, Mitsuru; Urakawa, Junji; Yamamoto, Tomohiko
2005-01-01
A Compton-scattering hard X-ray source for 10-80 keV is under construction using an X-band (11.424 GHz) electron linear accelerator and a YAG laser at the Nuclear Engineering Research Laboratory, University of Tokyo. This work is part of the national project on the development of advanced compact medical accelerators in Japan; the National Institute for Radiological Science is the host institute, and U. Tokyo and KEK are working on the X-ray source. The main advantage is the production of tunable monochromatic hard (10-80 keV) X-rays.
Development of a sub-MeV X-ray source via Compton backscattering
International Nuclear Information System (INIS)
Kawase, K.; Kando, M.; Hayakawa, T.; Daito, I.; Kondo, S.; Homma, T.; Kameshima, T.; Kotaki, H.; Chen, L.-M.; Fukuda, Y.; Faenov, A.; Shizuma, T.; Shimomura, T.; Yoshida, H.; Hajima, R.; Fujiwara, M.; Bulanov, S.V.; Kimura, T.; Tajima, T.
2011-01-01
At the Kansai Photon Science Institute of the Japan Atomic Energy Agency, we have developed a Compton-backscattered X-ray source in the energy region of a few hundred keV. The X-ray source consists of a 150-MeV electron beam with a pulse duration of 10 ps (rms), accelerated by a microtron accelerator, and an Nd:YAG laser with a pulse duration of 10 ns (FWHM). In the first trial experiment, the X-ray flux was estimated to be (2.2±1.0)×10² photons/pulse. For practical applications of an X-ray source, it is important to increase the generated X-ray flux as much as possible. Thus, for the purpose of increasing the X-ray flux, we have developed a pulse-compression system for the Nd:YAG laser via stimulated Brillouin scattering (SBS). SBS pulse compression has the great advantages of high conversion efficiency and a simple structure. In this article, we review the present status of the Compton-backscattered X-ray source and describe the SBS pulse-compression system.
Source of X-ray radiation based on back compton scattering
Bulyak, E V; Karnaukhov, I M; Kononenko, S G; Lapshin, V G; Mytsykov, A O; Telegin, Yu P; Shcherbakov, A A; Zelinsky, Andrey Yurij
2000-01-01
We study the applicability of, and give preliminary estimates for, the generation of powerful X-ray beams by backward Compton scattering of a laser photon beam on a cooled electron beam. A few-MeV electron beam circulating in a compact storage ring can be cooled by interaction with powerful laser radiation of micrometer wavelength to achieve a normalized emittance of 10^−7 m. A tunable X-ray source with photon energies ranging from a few keV up to a hundred keV could result from the interaction of the laser beam with such a dense electron beam.
Mai, P. M.; Schorlemmer, D.; Page, M.
2012-04-01
Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; and (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained by solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need for a long-standing and rigorous testing platform to examine the current state of the art in earthquake source inversion and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault embedded in a layered crustal velocity-density structure.
THE γ-RAY SPECTRUM OF GEMINGA AND THE INVERSE COMPTON MODEL OF PULSAR HIGH-ENERGY EMISSION
International Nuclear Information System (INIS)
Lyutikov, Maxim
2012-01-01
We reanalyze the Fermi spectra of the Geminga and Vela pulsars. We find that the spectrum of Geminga above the break is well approximated by a simple power law without an exponential cutoff, making Geminga's spectrum similar to that of the Crab. Vela's broadband γ-ray spectrum is equally well fit by both the exponential-cutoff and the double power-law shapes. In the broadband double power-law fits, for a typical Fermi spectrum of a bright γ-ray pulsar, most of the errors accumulate due to the arbitrary parameterization of the spectral roll-off. In addition, a power law with an exponential cutoff gives an acceptable fit to an underlying double power-law spectrum over a very broad range of parameters, making such fitting procedures insensitive to the underlying Fermi photon spectrum. Our results have important implications for the mechanism of pulsar high-energy emission. A number of observed properties of γ-ray pulsars all point to an inverse Compton origin of the high-energy emission from the majority of pulsars: broken power-law spectra without exponential cutoffs, stretching in the case of the Crab beyond the maximal curvature limit; spectral breaks close to or exceeding the maximal breaks due to curvature emission; patterns of the relative intensities of the leading and trailing pulses in the Crab repeated in the X-ray and γ-ray regions; and the presence of profile peaks at lower energies aligned with the γ-ray peaks.
Compact tunable Compton x-ray source from laser-plasma accelerator and plasma mirror
International Nuclear Information System (INIS)
Tsai, Hai-En; Wang, Xiaoming; Shaw, Joseph M.; Li, Zhengyan; Zgadzaj, Rafal; Henderson, Watson; Downer, M. C.; Arefiev, Alexey V.; Zhang, Xi; Khudik, V.; Shvets, G.
2015-01-01
We present an in-depth experimental-computational study of the parameters necessary to optimize a tunable, quasi-monoenergetic, efficient, low-background Compton backscattering (CBS) x-ray source based on the self-aligned combination of a laser-plasma accelerator (LPA) and a plasma mirror (PM). The main findings are: (1) an LPA driven in the blowout regime by 30 TW, 30 fs laser pulses produces not only a high-quality, tunable, quasi-monoenergetic electron beam, but also a high-quality, relativistically intense (a₀ ∼ 1) spent drive pulse that remains stable in profile and intensity over the LPA tuning range; (2) a thin plastic film near the gas jet exit retro-reflects the spent drive pulse efficiently into the oncoming electrons to produce CBS x-rays without detectable bremsstrahlung background. Meanwhile, anomalous far-field divergence of the retro-reflected light demonstrates relativistic "denting" of the PM. Exploiting these optimized LPA and PM conditions, we demonstrate quasi-monoenergetic (50% FWHM energy spread), tunable (75-200 keV) CBS x-rays, characteristics previously achieved only on more powerful laser systems by CBS of a split-off, counter-propagating pulse. Moreover, the laser-to-x-ray photon conversion efficiency (∼6×10^−12) exceeds that of any previous LPA-based quasi-monoenergetic Compton source. Particle-in-cell simulations agree well with the measurements.
Mesoscale inversion of carbon sources and sinks
International Nuclear Information System (INIS)
Lauvaux, T.
2008-01-01
Inverse methods at large scales are used to infer the spatial variability of carbon sources and sinks over the continents, but their uncertainties remain large. Atmospheric concentrations integrate the surface flux variability, but atmospheric transport models at low resolution are not able to simulate properly the local atmospheric dynamics at the measurement sites. However, the inverse estimates are more representative of the large spatial heterogeneity of the ecosystems compared to direct flux measurements. Top-down and bottom-up methods that aim at quantifying the carbon exchanges between the surface and the atmosphere correspond to different scales and are not easily comparable. During this PhD, a mesoscale inverse system was developed to correct carbon fluxes at 8 km resolution. The high-resolution transport model MesoNH was used to simulate accurately the variability of the atmospheric concentrations, which allowed us to reduce the uncertainty of the retrieved fluxes. All the measurements used here were observed during the intensive regional campaign CERES of May and June 2005, during which several instrumented towers measured CO₂ concentrations and fluxes in the southwest of France. Airborne measurements allowed us to observe concentrations at high altitude and also CO₂ surface fluxes over large parts of the domain. First, the capacity of the inverse system to correct the CO₂ fluxes was estimated using pseudo-data experiments. The largest fraction of the concentration variability was attributed to regional surface fluxes over an area of about 300 km around the site locations, depending on the meteorological conditions. Second, an ensemble of simulations allowed us to define the spatial and temporal structures of the transport errors. Finally, the inverse fluxes at 8 km resolution were compared to direct flux measurements. The inverse system has been validated in space and time and showed an improvement of the first-guess fluxes from a vegetation model.
Grubsky, Victor; Romanoov, Volodymyr; Shoemaker, Keith; Patton, Edward Matthew; Jannson, Tomasz
2016-02-02
A Compton tomography system comprises an x-ray source configured to produce a planar x-ray beam. The beam irradiates a slice of an object to be imaged, producing Compton-scattered x-rays. The Compton-scattered x-rays are imaged by an x-ray camera. Translation of the object with respect to the source and camera or vice versa allows three-dimensional object imaging.
Source Estimation by Full Wave Form Inversion
Energy Technology Data Exchange (ETDEWEB)
Sjögreen, Björn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Petersson, N. Anders [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing
2013-08-07
Given time-dependent ground motion recordings at a number of receiver stations, we solve the inverse problem of estimating the parameters of the seismic source. The source is modeled as a point moment tensor source, characterized by its location, moment tensor components, and the start time and frequency parameter (rise time) of its source time function. In total, there are 11 unknown parameters. We use a non-linear conjugate gradient algorithm to minimize the full waveform misfit between observed and computed ground motions at the receiver stations. An important underlying assumption of the minimization problem is that the wave propagation is accurately described by the elastic wave equation in a heterogeneous isotropic material. We use a fourth-order accurate finite difference method, developed in [12], to evolve the waves forward in time. The adjoint wave equation corresponding to the discretized elastic wave equation is used to compute the gradient of the misfit, which is needed by the non-linear conjugate gradient minimization algorithm. A new point moment tensor source discretization is derived that guarantees that the Hessian of the misfit is a continuous function of the source location. An efficient approach for calculating the Hessian is also presented. We show how the Hessian can be used to scale the problem to improve the convergence of the non-linear conjugate gradient algorithm. Numerical experiments are presented for estimating the source parameters from synthetic data in a layer over half-space problem (LOH.1), illustrating rapid convergence of the proposed approach.
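The waveform-misfit minimization can be illustrated on a toy analogue: recovering the start time t0 and rise time sigma of a Gaussian source time function from a "recorded" trace by minimizing the L2 misfit. The report uses nonlinear conjugate gradients with adjoint-computed gradients over 11 parameters; this sketch substitutes a damped Gauss-Newton iteration over the two source-time parameters only, and all names and numbers are illustrative:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 501)

def stf(t0, sigma):
    """Gaussian source time function with start time t0 and width sigma."""
    return np.exp(-(t - t0) ** 2 / (2.0 * sigma**2))

observed = stf(4.0, 0.8)        # synthetic data from the true parameters
t0, sigma = 3.5, 1.0            # initial guess

for _ in range(100):
    r = stf(t0, sigma) - observed                        # residual
    g = stf(t0, sigma)
    # Analytic Jacobian of the residual w.r.t. (t0, sigma):
    J = np.column_stack([g * (t - t0) / sigma**2,        # d r / d t0
                         g * (t - t0) ** 2 / sigma**3])  # d r / d sigma
    # Damped Gauss-Newton step on the L2 misfit 0.5*||r||^2.
    step = np.linalg.solve(J.T @ J + 1e-9 * np.eye(2), -J.T @ r)
    t0, sigma = t0 + step[0], sigma + step[1]

print(t0, sigma)
```

The iteration drives (t0, sigma) toward the true values (4.0, 0.8); in the full problem the Jacobian-vector products are supplied by adjoint wave equation solves rather than analytic formulas.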
Radiation Source Mapping with Bayesian Inverse Methods
Hykes, Joshua Michael
We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and clean-up operations. Most methods for analyzing these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma-peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP), since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source-map prediction. A set of adjoint flux, discrete ordinates solutions, obtained in this work with the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model mapping the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four-meter floor plan. Most of the predicted source intensities were within a factor of ten of their true values. The chi-square value of the predicted source was within a factor of five of the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution.
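The Bayesian linear inversion described above can be sketched numerically. Detector responses are modeled as d = A s + noise, where row i of A holds the sensitivity of detector i to each source voxel (in the work above these come from adjoint Denovo flux solutions; here A is a random stand-in). With Gaussian noise and a Gaussian prior on the source map s, the posterior is Gaussian, yielding both a MAP source map and a covariance-based confidence estimate. All dimensions and values below are synthetic, not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
n_det, n_vox = 40, 20
A = rng.uniform(0.0, 1.0, size=(n_det, n_vox))   # stand-in sensitivity matrix

s_true = np.zeros(n_vox)
s_true[[4, 13]] = [5.0, 3.0]                     # two point-like sources

sigma2, prior_var = 1e-4, 10.0                   # noise and prior variances
d = A @ s_true + rng.normal(0.0, np.sqrt(sigma2), n_det)

# Gaussian posterior:
#   s_map  = (A^T A / sigma2 + I / prior_var)^-1 (A^T d / sigma2)
#   C_post = (A^T A / sigma2 + I / prior_var)^-1  (confidence estimate)
H = A.T @ A / sigma2 + np.eye(n_vox) / prior_var
s_map = np.linalg.solve(H, A.T @ d / sigma2)
C_post = np.linalg.inv(H)

print(np.argsort(s_map)[-2:])   # indices of the two strongest voxels
```

The prior term regularizes the otherwise ill-posed system, and the diagonal of C_post plays the role of the per-voxel confidence the abstract mentions.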
Development of a compact x-ray source via laser compton scattering at KEK-LUCX
International Nuclear Information System (INIS)
Sakaue, Kazuyuki; Washio, Masakazu; Aryshev, Alexander; Araki, Sakae; Urakawa, Junji; Terunuma, Nobuhiro; Fukuda, Masafumi; Miyoshi, Toshinobu; Takeda, Ayaki
2013-01-01
A compact X-ray source based on laser-Compton scattering (LCS) has been developed at the LUCX (Laser Undulator Compact X-ray source) facility at KEK. A multi-bunch, high-quality electron beam, produced by a standing-wave 3.6-cell RF gun and accelerated by the following S-band normal-conducting 12-cell standing-wave 'Booster' linear accelerator, is scattered off the laser beam stored in an optical cavity. A 4-mirror planar optical cavity with finesse 335 is used. An MCP (Micro-Channel Plate) detector as well as an SOI (Silicon-On-Insulator) pixel sensor was used for scattered X-ray detection. The SOI pixel sensor was used for LCS X-ray detection for the first time and demonstrated high spatial resolution and high-SN-ratio X-ray detection, which in turn led to the clearest X-ray images yet achieved with LCS X-rays. We have also achieved generation of 6.38×10^6 photons/sec, more than 30 times the LCS X-ray flux of our previous results. The complete details of the LUCX LCS X-ray source, the specifications of both the electron and laser beams, and the results of the LCS X-ray generation experiments are reported in this paper. (author)
X-band RF gun and linac for medical Compton scattering X-ray source
Dobashi, Katsuhito; Uesaka, Mitsuru; Fukasawa, Atsushi; Sakamoto, Fumito; Ebina, Futaro; Ogino, Haruyuki; Urakawa, Junji; Higo, Toshiyasu; Akemoto, Mitsuo; Hayano, Hitoshi; Nakagawa, Keiichi
2004-12-01
A Compton-scattering hard X-ray source for 10-80 keV is under construction using an X-band (11.424 GHz) electron linear accelerator and a YAG laser at the Nuclear Engineering Research Laboratory, University of Tokyo. This work is part of the national project on the development of advanced compact medical accelerators in Japan; the National Institute for Radiological Science is the host institute, and U. Tokyo and KEK are working on the X-ray source. The main advantage is the production of tunable monochromatic hard (10-80 keV) X-rays with intensities of 10^8-10^10 photons/s (at several stages) in a table-top size. A second important aspect is the reduction of noise radiation at the beam dump by decelerating the electrons after the Compton scattering. This realizes one beamline of a 3rd-generation SR source at small facilities without heavy shielding. The final goal is to install the linac and laser on a moving gantry. We have designed the X-band (11.424 GHz) traveling-wave-type linac for this purpose. Numerical consideration with the CAIN code and luminosity calculations were performed to estimate the X-ray yield. An X-band thermionic-cathode RF gun and an RDS (Round Detuned Structure)-type X-band accelerating structure are applied to generate a 50 MeV electron beam with 20 pC microbunches (10^4) for a 1 microsecond RF macro-pulse. The X-ray yield from this electron beam and a Q-switched Nd:YAG laser of 2 J/10 ns is 10^7 photons/RF-pulse (10^8 photons/s at 10 pps). We plan to adopt a laser-circulation technique to increase the X-ray yield up to 10^9 photons/pulse (10^10 photons/s). A 50 MW X-band klystron and a compact modulator have been constructed and are now under tuning. The construction of the whole system has started. X-ray generation and medical application will be performed early next year.
X-band RF gun and linac for medical Compton scattering X-ray source
International Nuclear Information System (INIS)
Dobashi, Katsuhito; Uesaka, Mitsuru; Fukasawa, Atsushi; Sakamoto, Fumito; Ebina, Futaro; Ogino, Haruyuki; Urakawa, Junji; Higo, Toshiyasu; Akemoto, Mitsuo; Hayano, Hitoshi; Nakagawa, Keiichi
2004-01-01
A Compton scattering hard X-ray source for 10-80 keV is under construction using the X-band (11.424 GHz) electron linear accelerator and YAG laser at the Nuclear Engineering Research Laboratory, University of Tokyo. This work is part of the national project on the development of advanced compact medical accelerators in Japan. The National Institute for Radiological Science is the host institute, and U. Tokyo and KEK are working on the X-ray source. The main advantage is the production of tunable monochromatic hard (10-80 keV) X-rays with intensities of 10^8-10^10 photons/s (at several stages) in a table-top size. A second important aspect is the reduction of noise radiation at the beam dump by decelerating the electrons after the Compton scattering. This realizes one beamline of a 3rd-generation SR source at small facilities without heavy shielding. The final goal is to install the linac and laser on a moving gantry. We have designed the X-band (11.424 GHz) traveling-wave-type linac for this purpose. Numerical consideration with the CAIN code and a luminosity calculation are performed to estimate the X-ray yield. An X-band thermionic-cathode RF gun and an RDS (Round Detuned Structure)-type X-band accelerating structure are applied to generate a 50 MeV electron beam with 20 pC microbunches (10^4 of them) in a 1 microsecond RF macro-pulse. The X-ray yield from this electron beam and a Q-switched Nd:YAG laser of 2 J/10 ns is 10^7 photons/RF-pulse (10^8 photons/s at 10 pps). We plan to adopt a laser circulation technique to increase the X-ray yield up to 10^9 photons/pulse (10^10 photons/s). A 50 MW X-band klystron and a compact modulator have been constructed and are now under tuning. Construction of the whole system has started. X-ray generation and medical application will be performed early next year.
Source-independent elastic waveform inversion using a logarithmic wavefield
Choi, Yun Seok; Min, Dong Joon
2012-01-01
The logarithmic waveform inversion has been widely developed and applied to some synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating
Source inversion in the full-wave tomography; Full wave tomography ni okeru source inversion
Energy Technology Data Exchange (ETDEWEB)
Tsuchiya, T [DIA Consultants Co. Ltd., Tokyo (Japan)
1997-10-22
In order to consider the effects of vibration source characteristics in full-wave tomography (FWT), a study was performed on a method to invert vibration source parameters together with the Vp/Vs distribution. The study expanded an analysis method based on the gradient method of Tarantola and the subspace method of Sambridge, and conducted numerical experiments. Experiment No. 1 performed inversion of only the vibration source parameters, and experiment No. 2 executed simultaneous inversion of the Vp/Vs distribution and the vibration source parameters. The discussions revealed that an effective analytical procedure would be as follows: in order to predict maximum stress, the average vibration source parameters and the property parameters are first inverted simultaneously; in order to estimate each vibration source parameter with high accuracy, the property parameters are fixed and each vibration source parameter is inverted individually; then the derived vibration source parameters are fixed and the property parameters are again inverted from the initial values. 5 figs., 2 tabs.
Source-independent elastic waveform inversion using a logarithmic wavefield
Choi, Yun Seok
2012-01-01
The logarithmic waveform inversion has been widely developed and applied to some synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating the source wavelet in the logarithmic waveform inversion, we developed a source-independent logarithmic waveform inversion algorithm. In this inversion algorithm, we first normalize the wavefields with the reference wavefield to remove the source wavelet, and then take the logarithm of the normalized wavefields. Based on the properties of the logarithm, we define three types of misfit functions using the following methods: combination of amplitude and phase, amplitude-only, and phase-only. In the inversion, the gradient is computed using the back-propagation formula without directly calculating the Jacobian matrix. We apply our algorithm to noise-free and noise-added synthetic data generated for the modified version of elastic Marmousi2 model, and compare the results with those of the source-estimation logarithmic waveform inversion. For the noise-free data, the source-independent algorithms yield velocity models close to true velocity models. For random-noise data, the source-estimation logarithmic waveform inversion yields better results than the source-independent method, whereas for coherent-noise data, the results are reversed. Numerical results show that the source-independent and source-estimation logarithmic waveform inversion methods have their own merits for random- and coherent-noise data. © 2011.
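The normalization step that removes the source wavelet can be seen in a frequency-domain toy model: each observed spectrum is the (unknown) source spectrum times a Green's function, so dividing by a reference trace cancels the source before the logarithm is taken. A minimal numpy sketch, assuming noise-free data and nonzero spectra (all names are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
nfreq = 64
# Frequency-domain Green's functions: reference receiver and receiver j
g_ref = rng.normal(size=nfreq) + 1j * rng.normal(size=nfreq)
g_j = rng.normal(size=nfreq) + 1j * rng.normal(size=nfreq)

def log_normalized_wavefield(wavelet):
    """Divide receiver j's spectrum by the reference spectrum, then take
    the logarithm; the source wavelet cancels in the ratio."""
    d_ref = wavelet * g_ref   # observed spectrum at the reference receiver
    d_j = wavelet * g_j       # observed spectrum at receiver j
    return np.log(d_j / d_ref)

# Two completely different source wavelets ...
w1 = rng.normal(size=nfreq) + 1j * rng.normal(size=nfreq)
w2 = rng.normal(size=nfreq) + 1j * rng.normal(size=nfreq)
# ... yield the same source-independent data, log(g_j / g_ref)
print(np.allclose(log_normalized_wavefield(w1),
                  log_normalized_wavefield(w2)))   # True
```

The amplitude-only and phase-only misfits mentioned in the abstract correspond to the real and imaginary parts of this logarithm, respectively.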
Smail, Ian; Blundell, Katherine M.; Lehmer, B. D.; Alexander, D. M.
2012-01-01
We report the detection of extended X-ray emission around two powerful radio galaxies at z ≈ 3.6 (4C 03.24 and 4C 19.71) and use these to investigate the origin of extended, inverse Compton (IC) powered X-ray halos at high redshifts. The halos have X-ray luminosities of L_X ≈ 3 × 10^44 erg/s and sizes of ≈60 kpc. Their morphologies are broadly similar to the ≈60 kpc long radio lobes around these galaxies, suggesting they are formed from IC scattering by relativistic electrons in the radio lobes, of either cosmic microwave background (CMB) photons or far-infrared photons from the dust-obscured starbursts in these galaxies. These observations double the number of z > 3 radio galaxies with X-ray-detected IC halos. We compare the IC X-ray-to-radio luminosity ratios for the two new detections to the two previously detected z ≈ 3.8 radio galaxies. Given the similar redshifts, we would expect comparable X-ray IC luminosities if millimeter photons from the CMB are the dominant seed field for the IC emission (assuming all four galaxies have similar ages and jet powers). Instead we see that the two z ≈ 3.6 radio galaxies, which are ≈4× fainter in the far-infrared than those at z ≈ 3.8, also have ≈4× fainter X-ray IC emission. Including data for a further six z ≳ 2 radio sources with detected IC X-ray halos from the literature, we suggest that in the more compact majority of radio sources, those with lobe sizes ≲100-200 kpc, the bulk of the IC emission may be driven by scattering of locally produced far-infrared photons from luminous, dust-obscured starbursts within these galaxies, rather than millimeter photons from the CMB. The resulting X-ray emission appears sufficient to ionize the gas on ≈100-200 kpc scales around these systems and thus helps form the extended, kinematically quiescent Lyα emission line halos found around some of these systems. The starburst and active galactic nucleus
International Nuclear Information System (INIS)
Carvalho Campos, J.S. de.
1984-01-01
The design and construction of a Compton current detector with cylindrical geometry, using teflon as the dielectric material, for electromagnetic radiation in the energy range between 10 keV and 2 MeV are described. The measurements of the Compton current in teflon were obtained using an electrometer. The Compton current was produced by a photon flux from X-ray sources (MG 150 Muller device) and gamma rays of 60Co. The theory elaborated to explain the experimental results is presented. Calibration curves for the accumulated charge and the detector current as a function of exposure rate were obtained. (M.C.K.) [pt]
Arthur H. Compton and Compton Scattering
Synopsis: Arthur H. Compton and Compton Scattering. Resources with additional information: Compton honored; Compton scattering. Photograph of Arthur H. Compton courtesy of Lawrence Berkeley, 1923. Establishing Site X: letter, Arthur H. Compton to Enrico Fermi, September 14, 1942, DOE Technical
Energy Technology Data Exchange (ETDEWEB)
Smail, Ian [Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE (United Kingdom); Blundell, Katherine M. [Department of Astrophysics, University of Oxford, Keble Road, Oxford OX1 3RH (United Kingdom); Lehmer, B. D. [Department of Physics and Astronomy, The Johns Hopkins University, Homewood Campus, Baltimore, MD 21218 (United States); Alexander, D. M. [Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom)
2012-12-01
We report the detection of extended X-ray emission around two powerful radio galaxies at z ≈ 3.6 (4C 03.24 and 4C 19.71) and use these to investigate the origin of extended, inverse Compton (IC) powered X-ray halos at high redshifts. The halos have X-ray luminosities of L_X ≈ 3 × 10^44 erg s^-1 and sizes of ≈60 kpc. Their morphologies are broadly similar to the ≈60 kpc long radio lobes around these galaxies, suggesting they are formed from IC scattering by relativistic electrons in the radio lobes, of either cosmic microwave background (CMB) photons or far-infrared photons from the dust-obscured starbursts in these galaxies. These observations double the number of z > 3 radio galaxies with X-ray-detected IC halos. We compare the IC X-ray-to-radio luminosity ratios for the two new detections to the two previously detected z ≈ 3.8 radio galaxies. Given the similar redshifts, we would expect comparable X-ray IC luminosities if millimeter photons from the CMB are the dominant seed field for the IC emission (assuming all four galaxies have similar ages and jet powers). Instead we see that the two z ≈ 3.6 radio galaxies, which are ≈4× fainter in the far-infrared than those at z ≈ 3.8, also have ≈4× fainter X-ray IC emission. Including data for a further six z ≳ 2 radio sources with detected IC X-ray halos from the literature, we suggest that in the more compact majority of radio sources, those with lobe sizes ≲100-200 kpc, the bulk of the IC emission may be driven by scattering of locally produced far-infrared photons from luminous, dust-obscured starbursts within these galaxies, rather than millimeter photons from the CMB. The resulting X-ray emission appears sufficient to ionize the gas on ≈100-200 kpc scales around these systems and thus helps form the extended, kinematically quiescent Lyα emission line
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, Paul Martin; Schorlemmer, Danijel; Page, Morgan; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran Kumar; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish Chandra; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf
2016-01-01
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward-modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source-model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake-source imaging problem.
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, Paul Martin
2016-04-27
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward-modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source-model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake-source imaging problem.
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf
2016-01-01
Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.
The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook
Mai, P. M.
2017-12-01
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
Liu, Lu; Fei, Tong; Luo, Yi; Guo, Bowen
2017-01-01
This paper presents a workflow for near-surface velocity automatic estimation using the early arrivals of seismic data. This workflow comprises two methods, source-domain full traveltime inversion (FTI) and early-arrival waveform inversion. Source
Point sources and multipoles in inverse scattering theory
Potthast, Roland
2001-01-01
Over the last twenty years, the growing availability of computing power has had an enormous impact on the classical fields of direct and inverse scattering. The study of inverse scattering, in particular, has developed rapidly with the ability to perform computational simulations of scattering processes and led to remarkable advances in a range of applications, from medical imaging and radar to remote sensing and seismic exploration. Point Sources and Multipoles in Inverse Scattering Theory provides a survey of recent developments in inverse acoustic and electromagnetic scattering theory. Focusing on methods developed over the last six years by Colton, Kirsch, and the author, this treatment uses point sources combined with several far-reaching techniques to obtain qualitative reconstruction methods. The author addresses questions of uniqueness, stability, and reconstructions for both two- and three-dimensional problems. With interest in extracting information about an object through scattered waves at an all-ti...
Fully probabilistic seismic source inversion – Part 1: Efficient parameterisation
Directory of Open Access Journals (Sweden)
S. C. Stähler
2014-11-01
Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters themselves but also estimates of their uncertainties are of great practical importance. Probabilistic source inversion (Bayesian inference) is well suited to this challenge, provided that the parameter space can be chosen small enough to make Bayesian sampling computationally feasible. We propose a framework for PRobabilistic Inference of Seismic source Mechanisms (PRISM) that parameterises and samples earthquake depth, moment tensor, and source time function efficiently by using information from previous non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible.
Regularized inversion of controlled source and earthquake data
International Nuclear Information System (INIS)
Ramachandran, Kumar
2012-01-01
Estimation of the seismic velocity structure of the Earth's crust and upper mantle from travel-time data has advanced greatly in recent years. Forward modelling trial-and-error methods have been superseded by tomographic methods which allow more objective analysis of large two-dimensional and three-dimensional refraction and/or reflection data sets. The fundamental purpose of travel-time tomography is to determine the velocity structure of a medium by analysing the time it takes for a wave generated at a source point within the medium to arrive at a distribution of receiver points. Tomographic inversion of first-arrival travel-time data is a nonlinear problem since both the velocity of the medium and the ray paths in the medium are unknown. The solution for such a problem is typically obtained by repeated application of linearized inversion. Regularization of the nonlinear problem reduces the ill-posedness inherent in the tomographic inversion due to the under-determined nature of the problem and the inconsistencies in the observed data. This paper discusses the theory of regularized inversion for joint inversion of controlled source and earthquake data, and results from synthetic data testing and application to real data. The results obtained from tomographic inversion of synthetic data and real data from the northern Cascadia subduction zone show that the velocity model and hypocentral parameters can be efficiently estimated using this approach. (paper)
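Each linearized, regularized step in such an inversion amounts to a damped least-squares solve. The sketch below shows the generic Tikhonov form with a roughening operator (a standard textbook scheme, not necessarily the paper's exact regularization; all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
nobs, npar = 60, 40
G = rng.normal(size=(nobs, npar))          # linearized travel-time kernel
m_true = rng.normal(size=npar)             # slowness perturbation to recover
d = G @ m_true + 0.01 * rng.normal(size=nobs)   # travel-time residuals + noise

# First-difference roughening operator L favours smooth model updates
L = np.eye(npar) - np.eye(npar, k=1)
lam = 0.1                                  # regularization weight (tunable)

# Normal equations of min ||G m - d||^2 + lam^2 ||L m||^2
m_est = np.linalg.solve(G.T @ G + lam**2 * (L.T @ L), G.T @ d)
rel_err = np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)
print(rel_err)   # small: the damped solve recovers the perturbation
```

In the nonlinear problem this solve is repeated, with G and d re-linearized around the updated model at each iteration.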
Choi, Yun Seok; Alkhalifah, Tariq Ali
2011-01-01
Full waveform inversion requires a good estimation of the source wavelet to improve our chances of a successful inversion. This is especially true for an encoded multisource time-domain implementation, which, conventionally, requires separate
Strategies for source space limitation in tomographic inverse procedures
International Nuclear Information System (INIS)
George, J.S.; Lewis, P.S.; Schlitt, H.A.; Kaplan, L.; Gorodnitsky, I.; Wood, C.C.
1994-01-01
The use of magnetic recordings for localization of neural activity requires the solution of an ill-posed inverse problem: i.e. the determination of the spatial configuration, orientation, and time course of the currents that give rise to a particular observed field distribution. In its general form, this inverse problem has no unique solution; due to superposition and the existence of silent source configurations, a particular magnetic field distribution at the head surface could be produced by any number of possible source configurations. However, by making assumptions concerning the number and properties of neural sources, it is possible to use numerical minimization techniques to determine the source model parameters that best account for the experimental observations while satisfying numerical or physical criteria. In this paper the authors describe progress on the development and validation of inverse procedures that produce distributed estimates of neuronal currents. The goal is to produce a temporal sequence of 3-D tomographic reconstructions of the spatial patterns of neural activation. Such approaches have a number of advantages, in principle. Because they do not require estimates of model order and parameter values (beyond specification of the source space), they minimize the influence of investigator decisions and are suitable for automated analyses. These techniques also allow localization of sources that are not point-like; experimental studies of cognitive processes and of spontaneous brain activity are likely to require distributed source models.
International Nuclear Information System (INIS)
Christillin, P.
1986-01-01
The theory of nuclear Compton scattering is reformulated with explicit consideration of both virtual and real pionic degrees of freedom. The effects due to low-lying nuclear states, to seagull terms, to pion condensation and to the Δ dynamics in the nucleus, and their interplay in the different energy regions, are examined. It is shown that all corrections to the one-body terms, whose diffractive behaviour is determined by the nuclear form factor, have an effective two-body character. The possibility of using Compton scattering as a complementary source of information about nuclear dynamics is re-emphasized. (author)
Solution to the inversely stated transient source-receptor problem
International Nuclear Information System (INIS)
Sajo, E.; Sheff, J.R.
1995-01-01
Transient source-receptor problems are traditionally handled via the Boltzmann equation or through one of its variants. In the atmospheric transport of pollutants, meteorological uncertainties in the planetary boundary layer render only a few approximations to the Boltzmann equation useful. Often, due to the high number of unknowns, the atmospheric source-receptor problem is ill-posed. Moreover, models to estimate downwind concentration invariably assume that the source term is known. In this paper, an inverse methodology is developed, based on downwind measurements of concentration and of meteorological parameters, to estimate the source term.
Can earthquake source inversion benefit from rotational ground motion observations?
Igel, H.; Donner, S.; Reinwald, M.; Bernauer, M.; Wassermann, J. M.; Fichtner, A.
2015-12-01
With the prospect of instruments to observe rotational ground motions over a wide frequency and amplitude range in the near future, we engage with the question of how this type of ground motion observation can be used to solve seismic inverse problems. Here, we focus on the question of whether point or finite source inversions can benefit from additional observations of rotational motions. In an attempt to be fair, we compare observations from a surface seismic network with N 3-component translational sensors (classic seismometers) with those obtained with N/2 6-component sensors (with additional colocated 3-component rotational motions). Thus we keep the overall number of traces constant. Synthetic seismograms are calculated for known point- or finite-source properties. The corresponding inverse problem is posed in a probabilistic way, using the Shannon information content as a measure of how well the observations constrain the seismic source properties. The results show that with the 6-C subnetworks the source properties are not only equally well recovered (which alone would be beneficial, given the substantially reduced logistics of installing N/2 sensors), but some source properties are almost always better resolved, with statistical significance. We attribute this to the fact that the (in particular vertical) gradient information is contained in the additional rotational motion components. We compare these effects for strike-slip and normal-faulting type sources. Thus the answer to the question raised is a definite "yes". The challenge now is to demonstrate these effects on real data.
Point source reconstruction principle of linear inverse problems
International Nuclear Information System (INIS)
Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu
2010-01-01
Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the l_p-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of all the observations of source blocks, or in other words, the block-wisely extended leadfield-weighted l_1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.
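The flavour of the block-wise cost can be shown in a few lines: for equal total energy, a sum of per-block l2 norms (a mixed l2,1-type norm) is smaller when one block carries all the energy than when the energy is spread across blocks, which is why minimizing it favours point-source-like solutions. A simplified sketch that omits the leadfield weighting used in the paper:

```python
import numpy as np

def blockwise_norm(x, block):
    """Sum over blocks of the within-block l2 norm (a mixed l2,1 norm).
    The paper's cost additionally weights blocks by the leadfield,
    which is omitted in this simplified sketch."""
    return np.linalg.norm(np.reshape(x, (-1, block)), axis=1).sum()

# One active 3-component block (a single current dipole), total l2 energy 5
point = np.array([0.0, 0.0, 0.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0])
# Same total l2 energy spread evenly over three blocks
spread = np.full(9, 5.0 / 3.0)

print(blockwise_norm(point, 3))    # 5.0
print(blockwise_norm(spread, 3))   # ~8.66 (larger: spreading is penalized)
```

Contrast this with the plain l2 norm, which is 5.0 for both vectors and therefore cannot distinguish concentrated from spread-out sources.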
A finite-difference contrast source inversion method
International Nuclear Information System (INIS)
Abubakar, A; Hu, W; Habashy, T M; Van den Berg, P M
2008-01-01
We present a contrast source inversion (CSI) algorithm using a finite-difference (FD) approach as its backbone for reconstructing the unknown material properties of inhomogeneous objects embedded in a known inhomogeneous background medium. Unlike the CSI method using the integral equation (IE) approach, the FD-CSI method can readily employ an arbitrary inhomogeneous medium as its background. The ability to use an inhomogeneous background medium has made this algorithm very suitable to be used in through-wall imaging and time-lapse inversion applications. Similar to the IE-CSI algorithm the unknown contrast sources and contrast function are updated alternately to reconstruct the unknown objects without requiring the solution of the full forward problem at each iteration step in the optimization process. The FD solver is formulated in the frequency domain and it is equipped with a perfectly matched layer (PML) absorbing boundary condition. The FD operator used in the FD-CSI method is only dependent on the background medium and the frequency of operation, thus it does not change throughout the inversion process. Therefore, at least for the two-dimensional (2D) configurations, where the size of the stiffness matrix is manageable, the FD stiffness matrix can be inverted using a non-iterative inversion matrix approach such as a Gauss elimination method for the sparse matrix. In this case, an LU decomposition needs to be done only once and can then be reused for multiple source positions and in successive iterations of the inversion. Numerical experiments show that this FD-CSI algorithm has an excellent performance for inverting inhomogeneous objects embedded in an inhomogeneous background medium
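The "factor once, reuse for every source position and iteration" point is the key computational saving of the frequency-domain FD operator being fixed. It can be sketched with a sparse LU factorization in scipy, where a small 2-D Laplacian stands in for the FD stiffness matrix (the real operator would include the background medium and PML; names and sizes are illustrative):

```python
import numpy as np
from scipy.sparse import csc_matrix, diags, identity, kron
from scipy.sparse.linalg import splu

n = 30                                       # n x n interior grid points
d2 = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = csc_matrix(kron(identity(n), d2) + kron(d2, identity(n)))  # 2-D Laplacian

lu = splu(A)                                 # LU decomposition: done once

rng = np.random.default_rng(3)
residuals = []
for _ in range(5):                           # many source positions / iterations
    b = rng.normal(size=n * n)               # a new right-hand side (source)
    x = lu.solve(b)                          # only cheap triangular solves
    residuals.append(np.linalg.norm(A @ x - b))
print(max(residuals))                        # tiny: one factorization serves all
```

For 2-D problem sizes where the stiffness matrix fits in memory, this amortizes the expensive factorization over all sources and all inversion iterations, exactly as the abstract describes.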
Retrieving global aerosol sources from satellites using inverse modeling
Directory of Open Access Journals (Sweden)
O. Dubovik
2008-01-01
Understanding aerosol effects on global climate requires knowing the global distribution of tropospheric aerosols. By accounting for aerosol sources, transports, and removal processes, chemical transport models simulate the global aerosol distribution using archived meteorological fields. We develop an algorithm for retrieving global aerosol sources from satellite observations of aerosol distribution by inverting the GOCART aerosol transport model.
The inversion is based on a generalized, multi-term least-squares-type fitting, allowing flexible selection and refinement of a priori algorithm constraints. For example, limitations can be placed on retrieved quantity partial derivatives, to constrain global aerosol emission space and time variability in the results. Similarities and differences between commonly used inverse modeling and remote sensing techniques are analyzed. To retain the high space and time resolution of long-period, global observational records, the algorithm is expressed using adjoint operators.
Successful global aerosol emission retrievals at 2°×2.5° resolution were obtained by inverting GOCART aerosol transport model output, assuming constant emissions over the diurnal cycle, and neglecting aerosol compositional differences. In addition, fine and coarse mode aerosol emission sources were inverted separately from MODIS fine and coarse mode aerosol optical thickness data, respectively. These assumptions are justified, based on observational coverage and accuracy limitations, producing valuable aerosol source locations and emission strengths. From two weeks of daily MODIS observations during August 2000, the global placement of fine mode aerosol sources agreed with available independent knowledge, even though the inverse method did not use any a priori information about aerosol sources, and was initialized with a "zero aerosol emission" assumption. Retrieving coarse mode aerosol emissions was less successful
Fast Bayesian Optimal Experimental Design for Seismic Source Inversion
Long, Quan; Motamed, Mohammad; Tempone, Raul
2016-01-01
We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.
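The Laplace-approximation step described in this abstract can be illustrated with a toy calculation: the posterior is approximated by a Gaussian whose covariance is the inverse Hessian, a Jacobi-style scaling matrix controls the condition number, and the information gain over a Gaussian prior has a closed form. All numbers below are illustrative, not from the paper:

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """Closed-form KL(N(mu0, S0) || N(mu1, S1))."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Toy Hessian of the misfit at the MAP point; in the paper it grows linearly
# with measurement time and the number of receivers.
H = np.array([[4.0, 1.0], [1.0, 9.0]])

# Scaling matrix to control the condition number when parameters span
# several orders of magnitude (Jacobi scaling; values are illustrative).
D = np.diag(1.0 / np.sqrt(np.diag(H)))
H_scaled = D @ H @ D
cond_ratio = np.linalg.cond(H) / np.linalg.cond(H_scaled)

# Laplace posterior: Gaussian centred at the MAP point with covariance H^{-1}.
mu_post = np.zeros(2)
cov_post = np.linalg.inv(H)

# Information gain relative to a broad Gaussian prior.
mu_prior, cov_prior = np.zeros(2), 25.0 * np.eye(2)
info_gain = gaussian_kl(mu_post, cov_post, mu_prior, cov_prior)
```

In the paper the expected information gain is additionally averaged over data realizations (via sparse quadrature or Monte Carlo); the single KL evaluation above is the inner step that the Laplace approximation makes cheap.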
Fast Bayesian optimal experimental design for seismic source inversion
Long, Quan
2015-07-01
We develop a fast method for optimally designing experiments in the context of statistical seismic source inversion. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by elastodynamic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the "true" parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem. © 2015 Elsevier B.V.
The NuSTAR Extragalactic Surveys: Source Catalog and the Compton-thick Fraction in the UDS Field
Masini, A.; Civano, F.; Comastri, A.; Fornasini, F.; Ballantyne, D. R.; Lansbury, G. B.; Treister, E.; Alexander, D. M.; Boorman, P. G.; Brandt, W. N.; Farrah, D.; Gandhi, P.; Harrison, F. A.; Hickox, R. C.; Kocevski, D. D.; Lanz, L.; Marchesi, S.; Puccetti, S.; Ricci, C.; Saez, C.; Stern, D.; Zappacosta, L.
2018-03-01
We present the results and the source catalog of the NuSTAR survey in the UKIDSS Ultra Deep Survey (UDS) field, bridging the gap in depth and area between NuSTAR’s ECDFS and COSMOS surveys. The survey covers a ∼0.6 deg² area of the field for a total observing time of ∼1.75 Ms, to a half-area depth of ∼155 ks corrected for vignetting at 3–24 keV, and reaching sensitivity limits at half-area in the full (3–24 keV), soft (3–8 keV), and hard (8–24 keV) bands of 2.2 × 10⁻¹⁴ erg cm⁻² s⁻¹, 1.0 × 10⁻¹⁴ erg cm⁻² s⁻¹, and 2.7 × 10⁻¹⁴ erg cm⁻² s⁻¹, respectively. A total of 67 sources are detected in at least one of the three bands, 56 of which have a robust optical redshift, with a median of ∼1.1. Through a broadband (0.5–24 keV) spectral analysis of the whole sample combined with the NuSTAR hardness ratios, we compute the observed Compton-thick (CT; N_H > 10²⁴ cm⁻²) fraction. Taking into account the uncertainties on each N_H measurement, the final number of CT sources is 6.8 ± 1.2. This corresponds to an observed CT fraction of 11.5% ± 2.0%, providing a robust lower limit to the intrinsic fraction of CT active galactic nuclei and placing constraints on cosmic X-ray background synthesis models.
Review on solving the inverse problem in EEG source analysis
Directory of Open Access Journals (Sweden)
Fabri Simon G
2008-11-01
Full Text Available Abstract In this primer, we review the inverse problem of EEG source localization. It is intended to give researchers new to the field insight into the state-of-the-art techniques used to find approximate solutions for the brain sources giving rise to a scalp potential recording. Furthermore, a review of the performance results of the different techniques is provided to compare these different inverse solutions. The authors also include the results of a Monte Carlo analysis which they performed to compare four non-parametric algorithms, and hence contribute to what is presently recorded in the literature. An extensive list of references to the work of other researchers is also provided. The paper starts off with a mathematical description of the inverse problem and proceeds to discuss the two main categories of methods developed to solve the EEG inverse problem: non-parametric and parametric methods. The main difference between the two is whether a fixed number of dipoles is assumed a priori. Various techniques falling within these categories are described, including minimum norm estimates and their generalizations, LORETA, sLORETA, VARETA, S-MAP, ST-MAP, Backus-Gilbert, LAURA, Shrinking LORETA FOCUSS (SLF), SSLOFO and ALF for non-parametric methods, and beamforming techniques, BESA, subspace techniques such as MUSIC and methods derived from it, FINES, simulated annealing and computational intelligence algorithms for parametric methods. From a review of the performance of these techniques as documented in the literature, one may conclude that in most cases the LORETA solution gives satisfactory results. In situations involving clusters of dipoles, however, higher-resolution algorithms such as MUSIC or FINES are preferred. Imposing reliable biophysical and psychological constraints, as done by LAURA, has given superior results. The Monte Carlo analysis performed, comparing WMN, LORETA, sLORETA and SLF
International Nuclear Information System (INIS)
Hillert, Wolfgang; Aurand, Bastian; Wittschen, Juergen
2009-01-01
Part of the future polarization program performed at the Bonn accelerator facility ELSA will rely on precision Compton polarimetry of the stored transversely polarized electron beam. Precise and fast polarimetry poses high demands on the light source and the detector which were studied in detail performing numerical simulations of the Compton scattering process. In order to experimentally verify these calculations, first measurements were carried out using an argon ion laser as light source and a prototype version of a counting silicon microstrip detector. Calculated and measured intensity profiles of backscattered photons are presented and compared, showing excellent agreement. Background originating from beam gas radiation turned out to be the major limitation of the polarimeter performance. In order to improve the situation, a new polarimeter was constructed and is currently being set up. Design and expected performance of this polarimeter upgrade are presented.
International Nuclear Information System (INIS)
Botto, D.J.; Pratt, R.H.
1979-05-01
The current status of Compton scattering, both experimental observations and theoretical predictions, is examined. Classes of experiments are distinguished and the results obtained are summarized. The validity of the incoherent scattering function approximation and the impulse approximation is discussed. These simple theoretical approaches are compared with predictions of the nonrelativistic dipole formula of Gavrila and with the relativistic results of Whittingham. It is noted that the A⁻²-based approximations fail to predict resonances and an infrared divergence, both of which have been observed. It appears that at present the various available theoretical approaches differ significantly in their predictions and that further and more systematic work is required.
Liu, Lu
2017-08-17
This paper presents a workflow for automatic near-surface velocity estimation using the early arrivals of seismic data. The workflow comprises two methods: source-domain full traveltime inversion (FTI) and early-arrival waveform inversion. Source-domain FTI is capable of automatically generating a background velocity that can kinematically match the reconstructed plane-wave sources of early arrivals with true plane-wave sources. This method does not require picking first arrivals for inversion, which is one of the most challenging aspects of ray-based first-arrival tomographic inversion. Moreover, compared with conventional Born-based methods, source-domain FTI can distinguish between slower and faster initial model errors by providing the correct sign of the model gradient. In addition, this method does not need an estimate of the source wavelet, which is a requirement for receiver-domain wave-equation velocity inversion. The model derived from source-domain FTI is then used as input to early-arrival waveform inversion to obtain the short-wavelength velocity components. We have tested the workflow on synthetic and field seismic data sets. The results show that source-domain FTI can generate reasonable background velocities for early-arrival waveform inversion even when subsurface velocity reversals are present, and that the workflow can produce a high-resolution near-surface velocity model.
Inverse kinetics for subcritical systems with external neutron source
International Nuclear Information System (INIS)
Carvalho Gonçalves, Wemerson de; Martinez, Aquilino Senra; Carvalho da Silva, Fernando
2017-01-01
Highlights: • A formalism for reactivity calculation was developed. • The importance function is related to the system subcriticality. • The importance function is also related to the value of the external source. • The equations were analyzed for seven different levels of subcriticality. • The results are physically consistent with other formalisms discussed in the paper. - Abstract: Nuclear reactor reactivity is one of the most important properties, since it is directly related to reactor control during power operation. This reactivity is influenced by the neutron behavior in the reactor core. The time-dependent behavior of the neutron population in response to any change in material composition is important for reactor operation safety. Transient changes may occur during reactor startup or shutdown, and due to accidental disturbances of reactor operation. Therefore, it is very important to predict the time-dependent behavior of the neutron population induced by changes in neutron multiplication. The reactivity of subcritical systems driven by an external neutron source can be obtained through the solution of the inverse kinetics equation for subcritical nuclear reactors. The main purpose of this paper is to derive the inverse kinetics equations for subcritical systems, based on a previous paper published by the authors (Gonçalves et al., 2015) and on Gandini and Salvatores (2002) and Dulla et al. (2006). The solutions of those equations were also obtained. The formulations presented in this paper were tested for seven different values of k_eff, with an external neutron source constant in time and with a power ratio varying exponentially over time.
Inversion of Atmospheric Tracer Measurements, Localization of Sources
Issartel, J.-P.; Cabrit, B.; Hourdin, F.; Idelkadi, A.
When abnormal concentrations of a pollutant are observed in the atmosphere, the question of its origin arises immediately. The radioactivity from Chernobyl was detected in Sweden before the accident was announced, a situation that emphasizes the psychological, political and medical stakes of rapid source identification. In technical terms, most industrial sources can be modeled as a fixed point at ground level with undetermined duration. The classical method of identification involves the calculation of a backtrajectory departing from the detector, with an upstream integration of the wind field. We were first involved in such questions when we evaluated the efficiency of the international monitoring network planned in the frame of the Comprehensive Test Ban Treaty. We propose a new approach to backtracking based upon the use of retroplumes associated with the available measurements. Firstly, the retroplume is related to inverse transport processes, describing quantitatively how the air in a sample originates from regions that are all the more extended and diffuse the further we go back in the past. Secondly, it clarifies the sensitivity of the measurement with respect to all potential sources; it is therefore calculated by adjoint equations, including of course diffusive processes. Thirdly, the statistical interpretation, valid as far as single particles are concerned, should not be used to investigate the position and date of a macroscopic source. In that case, the retroplume rather induces a straightforward constraint between the intensity of the source and its position. When more than one measurement is available, including zero-valued measurements, the source satisfies the same number of linear relations, tightly related to the retroplumes. This system of linear relations can be handled with the simplex algorithm in order to make the above intensity-position correlation more restrictive. This method makes it possible to manage in a quantitative manner the
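The intensity-position constraint described above can be illustrated with a toy model: each measurement μ_i of a point source of intensity q at position x must satisfy μ_i = q·a_i(x), where a_i is the retroplume (adjoint) sensitivity of detector i. Only positions where all measurements imply the same intensity are admissible. The sensitivities below are synthetic stand-ins for adjoint-model output:

```python
import numpy as np

# Synthetic retroplume sensitivities a_i(x) for 3 detectors over a 1D grid of
# candidate source positions (stand-ins for adjoint transport-model output).
n_pos = 50
positions = np.linspace(0.0, 10.0, n_pos)
sens = np.array([np.exp(-0.5 * (positions - c) ** 2) + 1e-3
                 for c in (2.0, 5.0, 8.0)])          # shape (3, n_pos)

# True source: grid index 25, intensity 4.0 (used only to make measurements).
true_idx, true_q = 25, 4.0
measurements = true_q * sens[:, true_idx]

# For each candidate position, every measurement implies an intensity
# q_i = mu_i / a_i(x); the source can only sit where these agree.
implied_q = measurements[:, None] / sens              # shape (3, n_pos)
spread = implied_q.max(axis=0) - implied_q.min(axis=0)
best_idx = int(np.argmin(spread))
recovered_q = implied_q[:, best_idx].mean()
```

With real retroplumes the relations are noisy and include zero-valued measurements as inequality-like constraints, which is where the simplex treatment mentioned in the abstract comes in.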
Source localization in electromyography using the inverse potential problem
van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.
2011-02-01
We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.
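The regularization idea summarized above (select, among the many source distributions consistent with the surface potentials, one that is physically reasonable) can be sketched with a generic Tikhonov functional; the forward matrix below is a random stand-in for the finite element volume-conduction model, not the one used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in forward operator: maps 40 candidate current sources to 12 surface
# electrodes (severely underdetermined, like the real volume-conduction model).
G = rng.normal(size=(12, 40))
m_true = np.zeros(40)
m_true[18:22] = 1.0          # a compact patch of active muscle
d = G @ m_true + 0.01 * rng.normal(size=12)

# Tikhonov functional ||G m - d||^2 + lam * ||L m||^2 with L = identity
# (one of several regularization functionals one might compare).
lam = 0.1
L = np.eye(40)
A = np.vstack([G, np.sqrt(lam) * L])
b = np.concatenate([d, np.zeros(40)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

# Relative data fit of the regularized solution.
fit = np.linalg.norm(G @ m_est - d) / np.linalg.norm(d)
```

Swapping L for a gradient or focality operator changes only the construction of the penalty block; comparing such choices is exactly the kind of study the abstract describes on its 2D test model.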
Energy Technology Data Exchange (ETDEWEB)
Antoniassi, M.; Conceicao, A.L.C. [Departamento de Fisica-Faculdade de Filosofia Ciencias e Letras de Ribeirao Preto, Universidade de Sao Paulo, Ribeirao Preto, 14040-901 Sao Paulo (Brazil); Poletti, M.E., E-mail: poletti@ffclrp.usp.br [Departamento de Fisica-Faculdade de Filosofia Ciencias e Letras de Ribeirao Preto, Universidade de Sao Paulo, Ribeirao Preto, 14040-901 Sao Paulo (Brazil)
2012-07-15
In this work we measured X-ray scatter spectra from normal and neoplastic breast tissues using a photon energy of 17.44 keV and a scattering angle of 90°, in order to study the shape (FWHM) of the Compton peaks. The obtained results for FWHM are discussed in terms of the composition and histological characteristics of each tissue type. The statistical analysis shows that the distribution of FWHM of normal adipose breast tissue clearly differs from all other investigated tissues. Comparison between experimental values of FWHM and effective atomic number revealed a strong correlation between them, showing that FWHM values can be used to provide information about the elemental composition of the tissues. - Highlights: • X-ray scatter spectra from normal and neoplastic breast tissues were measured. • The shape (FWHM) of the Compton peak was related to the elemental composition and characteristics of each tissue type. • A statistical hypothesis test showed clear differences between normal and neoplastic breast tissues. • There is a strong correlation between experimental values of FWHM and effective atomic number. • The shape (FWHM) of the Compton peak can be used to provide information about the elemental composition of the tissues.
Tornga, Shawn R.
The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity, while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5×5×2 in³ each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24×2.5×3 in³, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the coded aperture, Compton and hybrid imaging algorithms developed will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) measurements, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Algorithms were also developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4); the simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
International Nuclear Information System (INIS)
Okuyama, Shinichi; Sera, Koichiro; Fukuda, Hiroshi; Shishido, Fumio; Mishina, Hitoshi.
1977-01-01
Compton radiography, a tomographic technique using Compton-scattered rays from a monochromatic gamma-ray beam, proved capable of imaging a chest phantom. The result suggests that the technique could be extended to imaging of the lung and the surrounding structures of the chest wall, mediastinum and liver in Compton tomographic mode. (auth.)
Direct and inverse source problems for a space fractional advection dispersion equation
Aldoghaither, Abeer
2016-05-15
In this paper, direct and inverse problems for a space fractional advection dispersion equation on a finite domain are studied. The inverse problem consists in determining the source term from final observations. We first derive the analytic solution to the direct problem, which we use to prove the uniqueness and the instability of the inverse source problem using final measurements. Finally, we illustrate the results with a numerical example.
A new optimization approach for source-encoding full-waveform inversion
Moghaddam, P.P.; Keers, H.; Herrmann, F.J.; Mulder, W.A.
2013-01-01
Waveform inversion is the method of choice for determining a highly heterogeneous subsurface structure. However, conventional waveform inversion requires that the wavefield for each source is computed separately. This makes it very expensive for realistic 3D seismic surveys. Source-encoding waveform
Multi-source waveform inversion of marine streamer data using the normalized wavefield
Choi, Yun Seok; Alkhalifah, Tariq Ali
2012-01-01
Even though the encoded multi-source approach dramatically reduces the computational cost of waveform inversion, it is generally not applicable to marine streamer data. This is because the simultaneous-sources modeled data cannot be muted to comply
Compton radiography, 4. Magnification compton radiography
Energy Technology Data Exchange (ETDEWEB)
Okuyama, S; Sera, K; Shishido, F; Fukuda, H [Tohoku Univ., Sendai (Japan). Research Inst. for Tuberculosis and Cancer; Mishina, H
1978-03-01
Compton radiography permits the acquisition of directly magnified Compton radiograms through the use of a pinhole collimator, making it feasible to overcome the resolution limit of the scintillation camera employed. Resolution was improved from 7 mm to 1 mm separation. Clinical applications include guiding puncture and biopsy of deep structures and detecting various foreign bodies driven in by blasts, under the ''magnification Compton fluoroscopy'' that could be developed from this principle in the near future.
Choi, Yun Seok
2011-09-01
Full waveform inversion requires a good estimate of the source wavelet to improve our chances of a successful inversion. This is especially true for an encoded multisource time-domain implementation, which, conventionally, requires separate-source modeling, as well as the Fourier transform of wavefields. As an alternative, we have developed a source-independent time-domain waveform inversion using convolved wavefields. Specifically, the misfit function consists of the convolution of the observed wavefields with a reference trace from the modeled wavefield, plus the convolution of the modeled wavefields with a reference trace from the observed wavefield. In this case, the source wavelets of the observed and the modeled wavefields are equally convolved with both terms in the misfit function, and thus the effects of the source wavelets are eliminated. Furthermore, because the modeled wavefields play the role of low-pass filtering the observed wavefields in the misfit function, the frequency-selection strategy from low to high can be easily adopted just by setting the maximum frequency of the source wavelet of the modeled wavefields; thus, no filtering is required. The gradient of the misfit function is computed by back-propagating the new residual seismograms and applying the imaging condition, similar to reverse-time migration. In the synthetic data evaluations, our waveform inversion yields inverted models that are close to the true model, but demonstrates, as predicted, some limitations when random noise is added to the synthetic data. We also found that an average of traces is a better choice for the reference trace than a single trace. © 2011 Society of Exploration Geophysicists.
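The wavelet cancellation underlying this misfit can be checked numerically: since convolution commutes, convolving the observed data with a reference trace of the modeled data equals convolving the modeled data with a reference trace of the observed data whenever the wavelet-free earth responses agree, whatever the two wavelets are. A toy 1D check with synthetic traces:

```python
import numpy as np

rng = np.random.default_rng(2)

# Earth responses (Green's functions) for 4 receivers, identical for the
# "observed" and "modeled" experiments, i.e. the velocity model is correct.
greens = [rng.normal(size=30) for _ in range(4)]

# Two different, unknown source wavelets.
w_obs = rng.normal(size=15)
w_mod = rng.normal(size=15)

d_obs = [np.convolve(w_obs, g) for g in greens]
d_mod = [np.convolve(w_mod, g) for g in greens]

# Reference traces (here: the first receiver of each data set).
ref_obs, ref_mod = d_obs[0], d_mod[0]

# Residual of the convolved-wavefield misfit: conv(d_obs_i, ref_mod) minus
# conv(d_mod_i, ref_obs). Convolution is commutative and associative, so both
# terms equal w_obs * w_mod * g_i * g_0 and the wavelets drop out.
residual = max(
    np.linalg.norm(np.convolve(do, ref_mod) - np.convolve(dm, ref_obs))
    for do, dm in zip(d_obs, d_mod)
)
```

With an incorrect velocity model the earth responses differ, the residual becomes nonzero, and it is this residual that drives the inversion.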
Rapid kinematic finite source inversion for Tsunami Early Warning using high-rate GNSS data
Chen, K.; Liu, Z.; Song, Y. T.
2017-12-01
Recently, Global Navigation Satellite System (GNSS) data have been used for rapid earthquake source inversion towards tsunami early warning. In practice, two approaches are often employed: static finite source inversion based on permanent co-seismic offsets, and kinematic finite source inversion using high-rate (>= 1 Hz) co-seismic displacement waveforms. The static inversion is relatively easy to implement and does not require additional constraints on rupture velocity, duration, and temporal variation. However, since most GNSS receivers are deployed onshore, on one side of the subduction fault, static finite source inversion with GNSS has very limited resolution of near-trench fault slip. On the other hand, the high-rate GNSS displacement waveforms, which contain the timing information of earthquake rupture explicitly and the static offsets implicitly, have the potential to improve near-trench resolution by reconciling with the depth-dependent megathrust rupture behaviors. In this contribution, we assess the performance of rapid kinematic finite source inversion using high-rate GNSS for three selected historical tsunamigenic cases: the 2010 Mentawai, 2011 Tohoku and 2015 Illapel events. The 2010 Mentawai case is a typical tsunami earthquake, with most slip concentrated near the trench. The static inversion has little resolution there and incorrectly puts slip at greater depth (>10 km). In contrast, although the recorded GNSS displacement waveforms are deficient in high-frequency energy, the kinematic source inversion recovers a shallow slip patch (depth less than 6 km) and tsunami runups are predicted quite reasonably. For the other two events, slip from the kinematic and static inversions shows similar characteristics and comparable tsunami scenarios, which may be related to the dense GNSS networks and the behavior of the rupture. Acknowledging the complexity of kinematic source inversion in real-time, we adopt the back
Application of the unwrapped phase inversion to land data without source estimation
Choi, Yun Seok
2015-08-19
Unwrapped phase inversion with strong damping was developed to solve the phase-wrapping problem in frequency-domain waveform inversion. In this study, we apply unwrapped phase inversion to band-limited real land data, for which the available minimum frequency is quite high. An important issue in these data is a strong ambiguity of the source-ignition time (or source shift) seen in the seismograms. A source-estimation approach does not fully address the issue of source shift, since the velocity model and the source wavelet are updated simultaneously and interact with each other. We suggest a source-independent unwrapped phase inversion approach instead of relying on source estimation from these land data. In the source-independent approach, the phase of the modeled data converges not to the exact phase value of the observed data, but to the relative phase value (or the trend of phases); thus it has the potential to resolve the ambiguity of source-ignition time in a seismogram and to work better than the source-estimation approach. Numerical examples demonstrate the validity of the source-independent unwrapped phase inversion, especially for land field data having an ambiguity in the source-ignition time.
Kitaki, Takaaki; Mineshige, Shin; Ohsuga, Ken; Kawashima, Tomohisa
2017-12-01
X-ray continuum spectra of super-Eddington accretion flows are studied by means of Monte Carlo radiative transfer simulations based on radiation hydrodynamic simulation data, in which both thermal and bulk Compton scattering are taken into account. We compare the calculated spectra of accretion flows around black holes with masses of M_BH = 10, 10^2, 10^3, and 10^4 M⊙ for a fixed mass injection rate (from the computational boundary at 10^3 r_s) of 10^3 L_Edd/c^2 (with r_s, L_Edd, and c being the Schwarzschild radius, the Eddington luminosity, and the speed of light, respectively). The soft X-ray spectra exhibit a mass dependence in accordance with the standard-disk relation; the maximum surface temperature scales as T ∝ M_BH^(-1/4). The spectra in the hard X-ray band, in contrast to the soft X-ray band, appear quite similar across the different models if we normalize the radiation luminosity by M_BH. This reflects the fact that the hard component is created by thermal and bulk Compton scattering of soft photons originating from the accretion flow in the overheated and/or funnel regions, the temperatures of which have no mass dependence. The hard X-ray spectra can be reproduced by a Wien spectrum with a temperature of T ˜ 3 keV accompanied by a hard excess at photon energies above several keV. The excess spectrum can be fitted well with a power law with a photon index of Γ ˜ 3. This feature is in good agreement with that of recent NuSTAR observations of ULXs (ultra-luminous X-ray sources).
SU-G-IeP3-10: Molecular Imaging with Clinical X-Ray Sources and Compton Cameras
International Nuclear Information System (INIS)
Vernekohl, D; Ahmad, M; Chinn, G; Xing, L
2016-01-01
Purpose: The application of Compton cameras (CC) is a novel approach for translating XFCT into a practical modality realized with clinical CT systems, without the restriction of pencil beams. The dual-modality design offers additional information without extra patient dose. The purpose of this work is to investigate the feasibility and efficacy of using CCs for volumetric x-ray fluorescence (XF) imaging by Monte Carlo (MC) simulations and statistical image reconstruction. Methods: The feasibility of a CC for imaging x-ray fluorescence emitted from targeted lesions is examined by MC simulations. Water spheres of 3 mm diameter with various gold concentrations and detector distances are placed inside the lung of an adult human phantom (MIRD) and are irradiated with both fan- and cone-beam geometries. A sandwich-design CC composed of silicon and CdTe is used to image the gold nanoparticle distribution. The detection system comprises four 16×26 cm² detector panels placed on the chest of the MIRD phantom. Constraints of energy and spatial resolution, clinical geometries, and Doppler broadening are taken into account. Image reconstruction is performed with a list-mode MLEM algorithm with a cone projector on a GPU. Results: The comparison of reconstructions from cone- and fan-beam excitation shows that the spatial resolution is improved by 23% for fan beams, with significantly decreased processing time. Cone-beam excitation increases the scatter content, disturbing quantification of lesions near the body surface. The spatial resolution and detectability limit in the center of the lung are 8.7 mm and 20 fM for 50 nm diameter gold nanoparticles at 20 mGy. Conclusion: The implementation of XFCT with a CC is a feasible method for molecular imaging with high-atomic-number probes. Given the constraints of detector resolution, Doppler broadening, and limited exposure dose, spatial resolutions comparable with PET and molecular sensitivities in the fM range are realizable with current detector technology.
SU-G-IeP3-10: Molecular Imaging with Clinical X-Ray Sources and Compton Cameras
Energy Technology Data Exchange (ETDEWEB)
Vernekohl, D; Ahmad, M; Chinn, G; Xing, L [Stanford University, Stanford, CA (United States)
2016-06-15
Purpose: The application of Compton cameras (CC) is a novel approach for translating XFCT into a practical modality realized with clinical CT systems, without the restriction of pencil beams. The dual-modality design offers additional information without extra patient dose. The purpose of this work is to investigate the feasibility and efficacy of using CCs for volumetric x-ray fluorescence (XF) imaging by Monte Carlo (MC) simulations and statistical image reconstruction. Methods: The feasibility of a CC for imaging x-ray fluorescence emitted from targeted lesions is examined by MC simulations. Water spheres of 3 mm diameter with various gold concentrations and detector distances are placed inside the lung of an adult human phantom (MIRD) and are irradiated with both fan- and cone-beam geometries. A sandwich-design CC composed of silicon and CdTe is used to image the gold nanoparticle distribution. The detection system comprises four 16×26 cm² detector panels placed on the chest of the MIRD phantom. Constraints of energy and spatial resolution, clinical geometries, and Doppler broadening are taken into account. Image reconstruction is performed with a list-mode MLEM algorithm with a cone projector on a GPU. Results: The comparison of reconstructions from cone- and fan-beam excitation shows that the spatial resolution is improved by 23% for fan beams, with significantly decreased processing time. Cone-beam excitation increases the scatter content, disturbing quantification of lesions near the body surface. The spatial resolution and detectability limit in the center of the lung are 8.7 mm and 20 fM for 50 nm diameter gold nanoparticles at 20 mGy. Conclusion: The implementation of XFCT with a CC is a feasible method for molecular imaging with high-atomic-number probes. Given the constraints of detector resolution, Doppler broadening, and limited exposure dose, spatial resolutions comparable with PET and molecular sensitivities in the fM range are realizable with current detector technology.
Sound source reconstruction using inverse boundary element calculations
DEFF Research Database (Denmark)
Schuhmacher, Andreas; Hald, Jørgen; Rasmussen, Karsten Bo
2003-01-01
Whereas standard boundary element calculations focus on the forward problem of computing the radiated acoustic field from a vibrating structure, the aim in this work is to reverse the process, i.e., to determine the vibration from acoustic field data. This inverse problem is brought into a form suited … it is demonstrated that the L-curve criterion is robust with respect to the errors in a real measurement situation. In particular, it is shown that the L-curve criterion is superior to the more conventional generalized cross-validation (GCV) approach for the present tire noise studies.
Choi, Yun Seok; Alkhalifah, Tariq Ali
2012-01-01
Conventional multi-source waveform inversion using an objective function based on the least-square misfit cannot be applied to marine streamer acquisition data because of inconsistent acquisition geometries between observed and modelled data
Micro-seismic imaging using a source function independent full waveform inversion method
Wang, Hanchen; Alkhalifah, Tariq Ali
2018-01-01
hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI
Frequency-domain elastic full waveform inversion using encoded simultaneous sources
Jeong, W.; Son, W.; Pyun, S.; Min, D.
2011-12-01
Numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational cost because of the number of sources in a survey. To alleviate this problem, the phase encoding technique for prestack migration was proposed by Romero (2000), and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. Ben-Hadj-Ali et al. (2011), in turn, demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembly. Although several studies on simultaneous-source inversion have tried to estimate the P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate, and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and the diagonal entries of the approximate Hessian matrix is suppressed with iteration, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is built on the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets from the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated in the simultaneous-source technique. Comparing the inverted results using the pseudo-Hessian matrix with previous inversion results
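The crosstalk-suppression idea behind random phase encoding can be illustrated with a linearized toy problem: the gradient computed from one encoded "super-shot" equals the sum of the per-source gradients plus zero-mean crosstalk, which averages out over random codes. The operator `G` and residuals `r` below are random stand-ins, not an elastic modeling kernel:

```python
import numpy as np

# Random polarity encoding: sources summed into one super-shot with
# random +/-1 codes; crosstalk cancels in expectation.
rng = np.random.default_rng(1)
n_src, n_rec, n_par = 8, 16, 10
G = rng.normal(size=(n_src, n_rec, n_par))   # hypothetical linearized forward ops
r = rng.normal(size=(n_src, n_rec))          # per-source data residuals

g_true = np.einsum('srp,sr->p', G, r)        # sum of individual-source gradients

def encoded_gradient(n_real):
    """Average the gradient over n_real random encodings."""
    g = np.zeros(n_par)
    for _ in range(n_real):
        c = rng.choice([-1.0, 1.0], size=n_src)   # random polarity code
        G_enc = np.einsum('s,srp->rp', c, G)      # encoded operator
        r_enc = np.einsum('s,sr->r', c, r)        # encoded residual
        g += G_enc.T @ r_enc                      # gradient of one super-shot
    return g / n_real

g_est = encoded_gradient(5000)               # crosstalk suppressed by averaging
```

One encoded super-shot costs a single forward/adjoint pair regardless of the number of sources, which is the source of the computational saving the abstract describes.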
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever-evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single-source inversion followed by a double point source inversion, with centroid locations fixed at the single-source solution location, can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed, with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model, and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
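The AIC-based choice between the single- and double-source models can be sketched in a few lines. For a least-squares fit with Gaussian errors, AIC reduces (up to a constant) to n·ln(RSS/n) + 2k, so the extra parameters of the second point source must buy enough misfit reduction to be selected. The residual vectors and parameter counts below are hypothetical stand-ins, not values from the study:

```python
import numpy as np

def aic(residuals, n_params):
    """Akaike information criterion for a least-squares fit
    (Gaussian errors, constant terms dropped)."""
    residuals = np.asarray(residuals, float)
    n = len(residuals)
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + 2 * n_params

# Hypothetical waveform misfits: the double-source model fits slightly
# better but spends twice the parameters; AIC arbitrates the trade-off.
rng = np.random.default_rng(0)
res_single = rng.normal(0.0, 1.00, 500)       # single point source
res_double = rng.normal(0.0, 0.95, 500)       # double point source
aic_single = aic(res_single, n_params=6)      # one deviatoric moment tensor
aic_double = aic(res_double, n_params=12)     # two moment tensors
best = "double" if aic_double < aic_single else "single"
```

The model with the lower AIC is preferred; with identical misfits the simpler single-source model always wins because of the 2k penalty.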
Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.
2014-12-01
One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, these inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO, developed at ICES (UT Austin). The approach has advantages over deterministic inversion approaches in that it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to judge, qualitatively and quantitatively, how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only tele-seismically recorded body waves, but future developments may lead us towards joint inversion schemes. After describing the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate tele-seismic data, add, for example, different levels of noise and/or change the fault-plane parameterization, and then apply our inversion scheme in an attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real tele-seismic data of a recent large earthquake and comparing the results with deterministically derived kinematic source models provided by other research groups.
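The DRAM sampler named above extends random-walk Metropolis with delayed rejection and proposal adaptation. The bare Metropolis core, which already yields posterior samples and hence the uncertainty bounds the abstract emphasizes, can be sketched as follows; the 2-D Gaussian target is only a placeholder for the kinematic-source posterior:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(m):
    """Placeholder log-posterior: independent Gaussians, std 1 and 2.
    In the real problem this would score slip-model predictions
    against tele-seismic body waves."""
    return -0.5 * (m[0] ** 2 + (m[1] / 2.0) ** 2)

def metropolis(n_samples, step=0.8):
    """Random-walk Metropolis (no delayed rejection, no adaptation)."""
    m = np.zeros(2)
    lp = log_post(m)
    chain = np.empty((n_samples, 2))
    for i in range(n_samples):
        prop = m + step * rng.normal(size=2)     # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            m, lp = prop, lp_prop
        chain[i] = m                              # rejected -> repeat sample
    return chain

chain = metropolis(20000)
# The sample spread per parameter is the uncertainty bound the
# deterministic inversions cannot provide.
```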
A New Wave Equation Based Source Location Method with Full-waveform Inversion
Wu, Zedong
2017-05-26
Locating the source of a passively recorded seismic event is still a challenging problem, especially when the velocity is unknown. Many imaging approaches that focus the source image do not address the velocity issue and result in images plagued by illumination artifacts. We develop a waveform inversion approach with an additional penalty term in the objective function to reward the focusing of the source image. This penalty term is relaxed early on to allow for data fitting, and to avoid cycle skipping, using an extended source. At the later stages the focusing of the image dominates the inversion, allowing for high-resolution source and velocity inversion. We also compute the source location explicitly, and numerical tests show that we obtain good estimates of the source locations with this approach.
Testing special relativity theory using Compton scattering
International Nuclear Information System (INIS)
Contreras S, H.; Hernandez A, L.; Baltazar R, A.; Escareno J, E.; Mares E, C. A.; Hernandez V, C.; Vega C, H. R.
2010-10-01
The validity of the special relativity theory has been tested using Compton scattering. Since 1905, several experiments have been carried out to show that time, mass, and length change with velocity; in this work, Compton scattering has been utilized as a simple way to demonstrate the validity of relativity. The work was carried out through Monte Carlo calculations and experiments with different gamma-ray sources and a gamma-ray spectrometer with a 3 x 3 NaI(Tl) detector. The pulse-height spectra were collected and the Compton edge was observed. This information was utilized to determine the relationship between the electron's mass and energy using the position of the Compton 'knee'; the obtained results were contrasted with two models of the collision between photon and electron, one built using classical physics and the other using the special relativity theory. It was found that the calculated and experimental results fit the collision model based on special relativity. (Author)
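The relativistic collision model fixes the Compton edge (the "knee") at T_max = E / (1 + m_e c² / 2E), the maximum electron recoil energy, reached for 180° backscatter. A minimal numerical check, assuming standard calibration-source energies (Cs-137 at 662 keV, Co-60 at 1332 keV) rather than the specific sources used in the experiment:

```python
# Compton edge (maximum electron recoil energy) from relativistic
# kinematics, as used to locate the "knee" in a pulse-height spectrum.
M_E_C2 = 511.0  # electron rest energy, keV

def compton_edge(e_gamma_kev):
    """Maximum kinetic energy (keV) transferred to the electron,
    i.e. Compton kinematics at a scattering angle of 180 degrees."""
    return e_gamma_kev / (1.0 + M_E_C2 / (2.0 * e_gamma_kev))

edge_cs137 = compton_edge(662.0)    # ~478 keV
edge_co60 = compton_edge(1332.0)    # ~1118 keV
```

A classical (non-relativistic) collision model predicts a different knee position, which is what lets the measured edge discriminate between the two models.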
An inverse source location algorithm for radiation portal monitor applications
International Nuclear Information System (INIS)
Miller, Karen A.; Charlton, William S.
2010-01-01
Radiation portal monitors are being deployed at border crossings throughout the world to prevent the smuggling of nuclear and radiological materials; however, a tension exists between security and the free-flow of commerce. Delays at ports-of-entry have major economic implications, so it is imperative to minimize portal monitor screening time. We have developed an algorithm to locate a radioactive source using a distributed array of detectors, specifically for use at border crossings. To locate the source, we formulated an optimization problem where the objective function describes the least-squares difference between the actual and predicted detector measurements. The predicted measurements are calculated by solving the 3-D deterministic neutron transport equation given an estimated source position. The source position is updated using the steepest descent method, where the gradient of the objective function with respect to the source position is calculated using adjoint transport calculations. If the objective function is smaller than the convergence criterion, then the source position has been identified. This paper presents the derivation of the underlying equations in the algorithm as well as several computational test cases used to characterize its accuracy.
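The optimization loop described above can be illustrated with a toy stand-in: here a 1/r² point kernel and a finite-difference gradient replace the paper's 3-D deterministic transport solve and adjoint calculation, and the source strength is assumed known:

```python
import numpy as np

# Detectors at known positions around a crossing lane (arbitrary units).
detectors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src_true = np.array([3.0, 7.0])   # position to be recovered
strength = 100.0                  # assumed known here

def predict(pos):
    """Predicted detector responses: bare 1/r^2 point kernel, standing
    in for the deterministic transport solution."""
    r2 = np.sum((detectors - pos) ** 2, axis=1)
    return strength / r2

measured = predict(src_true)      # noiseless synthetic measurements

def objective(pos):
    """Least-squares difference between predicted and measured counts."""
    return 0.5 * np.sum((predict(pos) - measured) ** 2)

def grad(pos, h=1e-6):
    """Central finite differences stand in for the adjoint gradient."""
    g = np.zeros(2)
    for k in range(2):
        dp = np.zeros(2)
        dp[k] = h
        g[k] = (objective(pos + dp) - objective(pos - dp)) / (2 * h)
    return g

pos = np.array([5.0, 5.0])        # initial guess at lane center
for _ in range(5000):
    pos -= 0.05 * grad(pos)       # steepest-descent update
```

The adjoint formulation in the paper obtains the same gradient at the cost of one extra transport solve, independent of the number of unknowns, which is why it scales where finite differences would not.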
Inverse random source scattering for the Helmholtz equation in inhomogeneous media
Li, Ming; Chen, Chuchu; Li, Peijun
2018-01-01
This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
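The Kaczmarz iteration at the heart of the proposed solver cycles through the rows of the discretized integral equation, projecting the current iterate onto one row's hyperplane at a time. The sketch below is a simplified row-wise, unregularized variant on a small dense system, not the paper's regularized block method for the Fredholm equations:

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=200, relax=1.0):
    """Cyclic Kaczmarz iteration for A x = b: project the iterate
    onto the hyperplane defined by one row at a time."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            # move x onto {y : ai . y = b[i]}, scaled by relax
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Small well-posed demo: recover x from consistent overdetermined data.
A = np.array([[3.0, 1.0], [1.0, 2.0], [2.0, -1.0]])
x_true = np.array([1.0, -2.0])
x_rec = kaczmarz(A, A @ x_true)
```

For the ill-posed problem in the paper, the iteration is applied blockwise per frequency with regularization; early stopping itself also acts as a regularizer for Kaczmarz-type methods.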
On an inverse source problem for enhanced oil recovery by wave motion maximization in reservoirs
Karve, Pranav M.; Kucukcoban, Sezgin; Kallivokas, Loukas F.
2014-01-01
to increase the mobility of otherwise entrapped oil. The goal is to arrive at the spatial and temporal description of surface sources that are capable of maximizing mobility in the target reservoir. The focusing problem is posed as an inverse source problem
Optimization method for identifying the source term in an inverse wave equation
Directory of Open Access Journals (Sweden)
Arumugam Deiveegan
2017-08-01
In this work, we investigate the inverse problem of identifying a space-wise dependent source term of the wave equation from measurements on the boundary. On the basis of the optimal control framework, the inverse problem is transformed into an optimization problem. The existence of, and a necessary condition on, the minimizer of the cost functional are obtained. The projected gradient method and the two-parameter model function method are applied to the minimization problem, and numerical results are illustrated.
On rational approximation methods for inverse source problems
Rundell, William
2011-02-01
The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.
On rational approximation methods for inverse source problems
Rundell, William; Hanke, Martin
2011-01-01
The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.
International Nuclear Information System (INIS)
Annuar, A.; Gandhi, P.; Alexander, D. M.; Lansbury, G. B.; Moro, A. Del; Arévalo, P.; Ballantyne, D. R.; Baloković, M.; Brightman, M.; Harrison, F. A.; Bauer, F. E.; Boggs, S. E.; Craig, W. W.; Brandt, W. N.; Christensen, F. E.; Hailey, C. J.; Hickox, R. C.; Matt, G.; Puccetti, S.; Ricci, C.
2015-01-01
We present two Nuclear Spectroscopic Telescope Array (NuSTAR) observations of the local Seyfert 2 active galactic nucleus (AGN) and an ultraluminous X-ray source (ULX) candidate in NGC 5643. Together with archival data from Chandra, XMM-Newton, and Swift-BAT, we perform a high-quality broadband spectral analysis of the AGN over two decades in energy (∼0.5–100 keV). Previous X-ray observations suggested that the AGN is obscured by a Compton-thick (CT) column of obscuring gas along our line of sight. However, the lack of high-quality ≳10 keV observations, together with the presence of a nearby X-ray luminous source, NGC 5643 X–1, have left significant uncertainties in the characterization of the nuclear spectrum. NuSTAR now enables the AGN and NGC 5643 X–1 to be separately resolved above 10 keV for the first time and allows a direct measurement of the absorbing column density toward the nucleus. The new data show that the nucleus is indeed obscured by a CT column of N_H ≳ 5 × 10^24 cm^-2. The range of 2–10 keV absorption-corrected luminosity inferred from the best-fitting models is L_2–10,int = (0.8–1.7) × 10^42 erg s^-1, consistent with that predicted from multiwavelength intrinsic luminosity indicators. In addition, we also study the NuSTAR data for NGC 5643 X–1 and show that it exhibits evidence of a spectral cutoff at energy E ∼ 10 keV, similar to that seen in other ULXs observed by NuSTAR. Along with the evidence for significant X-ray luminosity variations in the 3–8 keV band from 2003 to 2014, our results further strengthen the ULX classification of NGC 5643 X–1.
Annuar, A.; Gandhi, P.; Alexander, D. M.; Lansbury, G. B.; Arévalo, P.; Ballantyne, D. R.; Baloković, M.; Bauer, F. E.; Boggs, S. E.; Brandt, W. N.; Brightman, M.; Christensen, F. E.; Craig, W. W.; Del Moro, A.; Hailey, C. J.; Harrison, F. A.; Hickox, R. C.; Matt, G.; Puccetti, S.; Ricci, C.; Rigby, J. R.; Stern, D.; Walton, D. J.; Zappacosta, L.; Zhang, W.
2015-12-01
We present two Nuclear Spectroscopic Telescope Array (NuSTAR) observations of the local Seyfert 2 active galactic nucleus (AGN) and an ultraluminous X-ray source (ULX) candidate in NGC 5643. Together with archival data from Chandra, XMM-Newton, and Swift-BAT, we perform a high-quality broadband spectral analysis of the AGN over two decades in energy (˜0.5-100 keV). Previous X-ray observations suggested that the AGN is obscured by a Compton-thick (CT) column of obscuring gas along our line of sight. However, the lack of high-quality ≳10 keV observations, together with the presence of a nearby X-ray luminous source, NGC 5643 X-1, have left significant uncertainties in the characterization of the nuclear spectrum. NuSTAR now enables the AGN and NGC 5643 X-1 to be separately resolved above 10 keV for the first time and allows a direct measurement of the absorbing column density toward the nucleus. The new data show that the nucleus is indeed obscured by a CT column of N_H ≳ 5 × 10^24 cm^-2. The range of 2-10 keV absorption-corrected luminosity inferred from the best-fitting models is L_2-10,int = (0.8-1.7) × 10^42 erg s^-1, consistent with that predicted from multiwavelength intrinsic luminosity indicators. In addition, we also study the NuSTAR data for NGC 5643 X-1 and show that it exhibits evidence of a spectral cutoff at energy E ˜ 10 keV, similar to that seen in other ULXs observed by NuSTAR. Along with the evidence for significant X-ray luminosity variations in the 3-8 keV band from 2003 to 2014, our results further strengthen the ULX classification of NGC 5643 X-1.
Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin
2009-01-01
We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than the observations themselves. The method is tested with a simple model problem: a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model, but differs by an unbiased Gaussian model error, and emission models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of the synthetically generated observations with added noise, or by first assimilating the observations and using the analyses to extract observations. We have conducted 20 identical-twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or an extremely large observation network, there is little advantage to carrying out assimilation first. However, at intermediate observation densities, the source inversion error standard deviation decreases by 50% to 95% when the Kalman filter algorithm is followed by Green's function inversion.
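The Green's function step amounts to precomputing the response at each observation point to a unit emission from each source region, then solving a linear least-squares problem for the emission strengths. A sketch with a random operator standing in for the transport model (sizes and values are illustrative):

```python
import numpy as np

# Columns of G hold the modeled response at every observation location
# to a unit emission from one source region (the "Green's functions").
rng = np.random.default_rng(2)
n_obs, n_srcs = 40, 5
G = rng.normal(size=(n_obs, n_srcs))             # unit-source responses
s_true = np.array([1.0, 0.5, 2.0, 0.0, 1.5])     # emission strengths
y = G @ s_true + 0.01 * rng.normal(size=n_obs)   # noisy observations

# Least-squares inversion for the source strengths.
s_est, *_ = np.linalg.lstsq(G, y, rcond=None)
```

In the paper the vector `y` is built either from the raw (noisy) observations or from the Kalman-filter analyses; the inversion step itself is unchanged, which is what isolates the effect of assimilating first.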
Choi, Yun Seok
2012-05-02
Conventional multi-source waveform inversion using an objective function based on the least-squares misfit cannot be applied to marine streamer acquisition data because of inconsistent acquisition geometries between observed and modelled data. To apply multi-source waveform inversion to marine streamer data, we use the global correlation between observed and modelled data as an alternative objective function. The new residual seismogram derived from the global correlation norm attenuates modelled data not supported by the configuration of the observed data and thus can be applied to multi-source waveform inversion of marine streamer data. We also show that the global correlation norm is theoretically the same as the least-squares norm of the normalized wavefield. To efficiently calculate the gradient, our method employs a back-propagation algorithm similar to reverse-time migration based on the adjoint state of the wave equation. In numerical examples, multi-source waveform inversion using the global correlation norm yields better results for marine streamer acquisition data than the conventional approach. © 2012 European Association of Geoscientists & Engineers.
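The equivalence stated above, global correlation norm equals the least-squares norm of normalized wavefields, follows because for unit-norm traces ||d_obs - d_mod||² = 2 - 2·corr. A small sketch of the resulting misfit (array shapes and the `eps` guard are illustrative choices):

```python
import numpy as np

def global_correlation_misfit(d_obs, d_mod, eps=1e-12):
    """Zero-lag normalized correlation misfit, summed over traces.
    Equivalent to 0.5 * || d_obs/|d_obs| - d_mod/|d_mod| ||^2 per
    trace.  Arrays are (n_traces, n_samples)."""
    do = d_obs / (np.linalg.norm(d_obs, axis=1, keepdims=True) + eps)
    dm = d_mod / (np.linalg.norm(d_mod, axis=1, keepdims=True) + eps)
    corr = np.sum(do * dm, axis=1)       # per-trace correlation in [-1, 1]
    return float(np.sum(1.0 - corr))     # 0 when waveform shapes match

# Pure amplitude differences do not change the misfit, only shape does,
# which is what tolerates the streamer-geometry mismatch.
t = np.linspace(0.0, 1.0, 200)
trace = np.sin(2 * np.pi * 5 * t)[None, :]
```

Because only the waveform shape is compared, modelled energy with no counterpart in the recorded streamer geometry is naturally down-weighted.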
An inverse source problem of the Poisson equation with Cauchy data
Directory of Open Access Journals (Sweden)
Ji-Chuan Liu
2017-05-01
In this article, we study an inverse source problem of the Poisson equation with Cauchy data. We want to find iterative algorithms to detect a hidden source within a body from measurements on the boundary. Our goal is to reconstruct the location, the size, and the shape of the hidden source. This problem is ill-posed, and regularization techniques should be employed to obtain a regularized solution. Numerical examples show that our proposed algorithms are valid and effective.
International Nuclear Information System (INIS)
Ferri, Julien
2016-01-01
An ultra-short, ultra-intense laser pulse propagating in a low-density gas can accelerate, in its wake, part of the electrons ionized from the gas to relativistic energies of a few hundred MeV over distances of only a few millimeters. During their acceleration, as a consequence of their transverse motion, these electrons emit strongly collimated X-rays in the forward direction, called betatron radiation. The characteristics of this source make it an interesting tool for high-resolution imaging. In this thesis, we explore three different axes of work on this source using simulations with the Particle-In-Cell codes CALDER and CALDER-Circ. We first study the creation of a betatron X-ray source with kilojoule, picosecond laser pulses, whose duration and energy are much higher than usual in this domain. In spite of the unusual laser parameters, we show that X-ray sources can still be generated, moreover in two different regimes. In a second study, the generally observed discrepancies between experiments and simulations are investigated. We show that the use of realistic laser profiles instead of Gaussian ones in the simulations strongly degrades the performance of the laser-plasma accelerator and of the betatron source. Additionally, this leads to a better qualitative and quantitative agreement with experiment. Finally, with the aim of improving the X-ray emission, we explore several techniques based on manipulation of the plasma density profile used for acceleration. We find that both the use of a transverse gradient and of a density step increases the amplitude of the electrons' transverse motions, and thus the radiated energy. Alternatively, we show that this goal can also be achieved through the transition from a laser wakefield regime to a plasma wakefield regime induced by an increase of the density. The laser wakefield optimizes the electron acceleration whereas the plasma wakefield favours the X
Finite-fault source inversion using adjoint methods in 3D heterogeneous media
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-04-01
Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on the 1D velocity models routinely assumed when deriving finite-fault slip models. The conventional approach to including known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to iteratively improve the model with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect
MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data
Key, Kerry
2016-10-01
This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data
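The bound-constraint behavior described at the end of this abstract, a transform that is nearly the identity well inside the bounds yet can never leave them, can be illustrated with a smooth softplus-based clamp. This is a sketch of the idea only, not necessarily the exact transform implemented in MARE2DEM.

```python
import numpy as np

# Illustrative bound transform: inside (a, b) the unconstrained
# parameter u maps back almost unchanged, so non-linear smoothing
# effects are minimal, while x(u) is confined to the bounds.
def softplus(t, c):
    # numerically stable log(1 + exp(c*t)) / c
    return np.logaddexp(0.0, c * t) / c

def to_bounded(u, a, b, c=20.0):
    """Map unconstrained u into [a, b]; ~identity well inside."""
    return a + softplus(u - a, c) - softplus(u - b, c)

a, b = 0.0, 4.0
u = np.array([-10.0, 1.0, 2.0, 3.0, 14.0])
x = to_bounded(u, a, b)
assert np.all((x >= a) & (x <= b))             # bounds always enforced
assert np.allclose(x[1:4], u[1:4], atol=1e-6)  # near-identity in interior
```

The sharpness constant `c` (an assumption of this sketch) controls how close to the bounds the transform stays linear.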
Micro-seismic imaging using a source function independent full waveform inversion method
Wang, Hanchen; Alkhalifah, Tariq
2018-03-01
At the heart of micro-seismic event measurements is the task of estimating the locations of the micro-seismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Angle gathers are also calculated to assess the quality of the long wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, based on the Marmousi model and the SEG/EAGE overthrust model.
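The convolution trick that removes the unknown source function can be demonstrated in miniature: if observed traces are `w_obs * g_i` and modeled traces are `w_mod * g_i` (same Green's functions `g_i`, different unknown wavelets), then cross-convolving each trace with the other wavefield's reference trace produces identical results, because convolution commutes. The synthetic Green's functions and wavelets below are arbitrary illustrations, not data from the paper.

```python
import numpy as np

# Source-independent misfit sketch: d_i (*) s_ref == s_i (*) d_ref
# whenever the Green's functions agree, regardless of the wavelets,
# so the misfit between the two convolved wavefields does not depend
# on the unknown source function or ignition time.
rng = np.random.default_rng(1)
g = [rng.standard_normal(64) for _ in range(5)]  # Green's functions
w_obs = rng.standard_normal(16)                  # true (unknown) wavelet
w_mod = rng.standard_normal(16)                  # wrong trial wavelet

d = [np.convolve(w_obs, gi) for gi in g]         # "observed" traces
s = [np.convolve(w_mod, gi) for gi in g]         # "modeled" traces

ref = 0                                          # reference-trace index
misfit = sum(np.sum((np.convolve(d[i], s[ref]) -
                     np.convolve(s[i], d[ref]))**2) for i in range(5))
scale = sum(np.sum(np.convolve(d[i], s[ref])**2) for i in range(5))
assert misfit < 1e-18 * scale   # zero up to floating-point rounding
```

When the velocity model (and hence the `g_i`) is wrong, this misfit becomes nonzero, which is what drives the velocity and source-image updates.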
Micro-seismic imaging using a source function independent full waveform inversion method
Wang, Hanchen
2018-03-26
At the heart of micro-seismic event measurements is the task of estimating the locations of the micro-seismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Angle gathers are also calculated to assess the quality of the long wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, based on the Marmousi model and the SEG/EAGE overthrust model.
Compton Reflection in AGN with Simbol-X
Beckmann, V.; Courvoisier, T. J.-L.; Gehrels, N.; Lubiński, P.; Malzac, J.; Petrucci, P. O.; Shrader, C. R.; Soldi, S.
2009-05-01
AGN exhibit complex hard X-ray spectra. Our current understanding is that the emission is dominated by inverse Compton processes which take place in the corona above the accretion disk, and that absorption and reflection in a distant absorber play a major role. These processes can be directly observed through the shape of the continuum, the Compton reflection hump around 30 keV, and the iron fluorescence line at 6.4 keV. We demonstrate the capabilities of Simbol-X to constrain complex models for cases like MCG-05-23-016, NGC 4151, NGC 2110, and NGC 4051 in short (10 ksec) observations. We compare the simulations with recent observations of these sources by INTEGRAL, Swift and Suzaku. Constraining reflection models for AGN with Simbol-X will help us to get a clear view of the processes and geometry near the central engine in AGN, and will give insight into which sources are responsible for the Cosmic X-ray background at energies >20 keV.
Venugopal, M.; Roy, D.; Rajendran, K.; Guillas, S.; Dias, F.
2017-01-01
Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra–Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems. PMID:28989311
Multi-source waveform inversion of marine streamer data using the normalized wavefield
Choi, Yun Seok
2012-01-01
Even though the encoded multi-source approach dramatically reduces the computational cost of waveform inversion, it is generally not applicable to marine streamer data. This is because the simultaneous-source modeled data cannot be muted to comply with the configuration of the marine streamer data, which causes differences in the number of stacked traces, or energy levels, between the modeled and observed data. Since the conventional L2 norm does not account for the difference in energy levels, multi-source inversion based on the conventional L2 norm does not work for marine streamer data. In this study, we propose the L2, approximated L2, and L1 norms using normalized wavefields for the multi-source waveform inversion of marine streamer data. Since the normalized wavefields mitigate the different energy levels between the observed and modeled wavefields, multi-source waveform inversion using the normalized wavefields can be applied to marine streamer data. We obtain the gradient of the objective functions using the back-propagation algorithm. We show that the gradient of the L2 norm using the normalized wavefields is exactly the same as that of the global correlation norm. In the numerical examples, the new objective functions using the normalized wavefields generate successful results, whereas the conventional L2 norm does not.
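The stated equivalence between the normalized-wavefield L2 norm and the global correlation norm follows from a short identity, checked numerically below with arbitrary stand-in traces: the L2 misfit between unit-normalized traces equals twice one minus their correlation coefficient, so the two objectives differ only by a constant factor.

```python
import numpy as np

# Identity behind the abstract's key result:
#   || u/||u|| - d/||d|| ||^2  ==  2 * (1 - <u, d> / (||u|| ||d||))
# Normalization removes the energy-level mismatch that breaks the
# conventional L2 norm for marine streamer data.
rng = np.random.default_rng(2)
u = rng.standard_normal(200)     # modeled trace
d = rng.standard_normal(200)     # observed trace
d *= 7.5                         # deliberate energy-level mismatch

un, dn = u / np.linalg.norm(u), d / np.linalg.norm(d)
l2_normalized = np.sum((un - dn) ** 2)
global_corr = np.dot(u, d) / (np.linalg.norm(u) * np.linalg.norm(d))
assert np.isclose(l2_normalized, 2.0 * (1.0 - global_corr))
```

Because the identity holds for any amplitude scaling of `d`, the normalized objective is insensitive to the stacked-trace energy differences described above.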
Microseismic imaging using a source-independent full-waveform inversion method
Wang, Hanchen
2016-09-06
Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, waveform inversion of microseismic events faces severe nonlinearity due to the unknown source location (space) and function (time). We develop a source-independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the z axis is extracted to check the accuracy of the inverted source image and velocity model. The angle gather is also calculated to assess whether the velocity model is correct. By inverting for the source image, the source wavelet and the velocity model together, the proposed method produces good estimates of the source location, ignition time and the background velocity for part of the SEG overthrust model.
Microseismic imaging using a source-independent full-waveform inversion method
Wang, Hanchen
2016-01-01
Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, waveform inversion of microseismic events faces severe nonlinearity due to the unknown source location (space) and function (time). We develop a source-independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the z axis is extracted to check the accuracy of the inverted source image and velocity model. The angle gather is also calculated to assess whether the velocity model is correct. By inverting for the source image, the source wavelet and the velocity model together, the proposed method produces good estimates of the source location, ignition time and the background velocity for part of the SEG overthrust model.
International Nuclear Information System (INIS)
Gibson, David J.; Anderson, Scott G.; Barty, Christopher P.J.; Betts, Shawn M.; Booth, Rex; Brown, Winthrop J.; Crane, John K.; Cross, Robert R.; Fittinghoff, David N.; Hartemann, Fred V.; Kuba, Jaroslav; Le Sage, Gregory P.; Slaughter, Dennis R.; Tremaine, Aaron M.; Wootton, Alan J.; Hartouni, Edward P.; Springer, Paul T.; Rosenzweig, James B.
2004-01-01
The PLEIADES (Picosecond Laser-Electron Inter-Action for the Dynamical Evaluation of Structures) facility has produced first light at 70 keV. This milestone offers a new opportunity to develop laser-driven, compact, tunable x-ray sources for critical applications such as diagnostics for the National Ignition Facility and time-resolved material studies. The electron beam was focused to 50 μm rms, at 57 MeV, with 260 pC of charge, a relative energy spread of 0.2%, and a normalized emittance of 5 mm mrad horizontally and 13 mm mrad vertically. The scattered 820 nm laser pulse had an energy of 180 mJ and a duration of 54 fs. Initial x rays were captured with a cooled charge-coupled device using a cesium iodide scintillator; the peak photon energy was approximately 78 keV, with a total x-ray flux of 1.3×10^6 photons/shot, and the observed angular distribution was found to agree very well with three-dimensional codes. Simple K-edge radiography of a tantalum foil showed good agreement with the theoretical divergence-angle dependence of the x-ray energy. Optimization of the x-ray dose is currently under way, with the goal of reaching 10^8 photons/shot and a peak brightness approaching 10^20 photons/mm^2/mrad^2/s/0.1% bandwidth.
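The quoted peak photon energy can be cross-checked against the beam parameters with the standard on-axis inverse-Compton estimate E_x ≈ 4·γ²·E_laser (head-on collision, electron recoil neglected). The constants are textbook values; agreement to within a few percent is all this back-of-the-envelope estimate can claim.

```python
# On-axis inverse-Compton photon energy from the beam parameters in
# the abstract: 57 MeV electrons scattering an 820 nm laser pulse.
E_beam_MeV = 57.0                  # electron beam energy
m_e_MeV = 0.511                    # electron rest energy
lambda_nm = 820.0                  # laser wavelength
E_laser_eV = 1239.84 / lambda_nm   # photon energy from hc/lambda

gamma = E_beam_MeV / m_e_MeV       # Lorentz factor, ~111.5
E_x_keV = 4.0 * gamma**2 * E_laser_eV / 1e3
assert 70.0 < E_x_keV < 80.0       # consistent with the ~78 keV reported
```

The estimate lands near 75 keV, consistent with the reported ~78 keV peak given the approximations made.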
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not included in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library that uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
Integrating the Toda Lattice with Self-Consistent Source via Inverse Scattering Method
International Nuclear Information System (INIS)
Urazboev, Gayrat
2012-01-01
In this work, it is shown that the solutions of the Toda lattice with a self-consistent source can be found by the inverse scattering method for the discrete Sturm-Liouville operator. For the considered problem the one-soliton solution is obtained.
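The integrable structure that the inverse scattering method exploits can be verified numerically for the plain (source-free) Toda lattice, d²q_n/dt² = exp(q_{n-1}−q_n) − exp(q_n−q_{n+1}). The version with a self-consistent source is not simulated here; this sketch only checks energy conservation of the ordinary lattice under symplectic (leapfrog) integration, with free ends as an illustrative boundary choice.

```python
import numpy as np

# Source-free Toda lattice with Hamiltonian
#   H = sum(p_n^2)/2 + sum(exp(q_n - q_{n+1})),
# integrated with velocity-Verlet; H should be conserved to O(dt^2).
def forces(q):
    f = np.zeros_like(q)
    bond = np.exp(q[:-1] - q[1:])   # interaction on each bond
    f[1:] += bond                   # pull from left neighbour
    f[:-1] -= bond                  # pull from right neighbour
    return f

def energy(q, p):
    return 0.5 * np.sum(p**2) + np.sum(np.exp(q[:-1] - q[1:]))

rng = np.random.default_rng(3)
q = 0.1 * rng.standard_normal(16)   # small initial displacements
p = 0.1 * rng.standard_normal(16)   # small initial momenta
e0, dt = energy(q, p), 1e-3
for _ in range(10_000):             # leapfrog / velocity-Verlet steps
    p += 0.5 * dt * forces(q)
    q += dt * p
    p += 0.5 * dt * forces(q)
assert abs(energy(q, p) - e0) < 1e-4 * abs(e0)
```

Exact conservation of infinitely many such invariants is what makes the lattice solvable by inverse scattering; the numerical check above only probes the first of them.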
Digital Repository Service at National Institute of Oceanography (India)
Tripathy, G.R.; Das, Anirban.
used methods, the Least Square Regression (LSR) and Inverse Modeling (IM), to determine the contributions of (i) solutes from different sources to global river water, and (ii) various rocks to a glacial till. The purpose of this exercise is to compare...
A New Wave Equation Based Source Location Method with Full-waveform Inversion
Wu, Zedong; Alkhalifah, Tariq Ali
2017-01-01
with illumination artifacts. We develop a waveform inversion approach with an additional penalty term in the objective function to reward the focusing of the source image. This penalty term is relaxed early to allow for data fitting, and avoid cycle skipping, using
Linearized versus non-linear inverse methods for seismic localization of underground sources
DEFF Research Database (Denmark)
Oh, Geok Lian; Jacobsen, Finn
2013-01-01
The problem of localization of underground sources from seismic measurements detected by several geophones located on the ground surface is addressed. Two main approaches to the solution of the problem are considered: a beamforming approach that is derived from the linearized inversion problem, a...
Source Identification in Structural Acoustics with an Inverse Frequency Response Function Technique
Visser, Rene
2002-01-01
Inverse source identification based on acoustic measurements is essential for the investigation and understanding of sound fields generated by structural vibrations of various devices and machinery. Acoustic pressure measurements performed on a grid in the nearfield of a surface can be used to
Visco-elastic controlled-source full waveform inversion without surface waves
Paschke, Marco; Krause, Martin; Bleibinhaus, Florian
2016-04-01
We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image method to ensure a stress-free condition at the surface. The time-domain data are Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S), we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.
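The memory-saving trick of Fourier-transforming during the forward modeling can be shown on a toy trace: instead of storing the full time history of the wavefield, a discrete Fourier transform is accumulated on the fly at every time step for the chosen frequencies. The signal and frequencies below are arbitrary stand-ins.

```python
import numpy as np

# Running DFT accumulation during a "time-stepping" loop: only one
# complex number per frequency per point needs to be kept in memory,
# never the whole time history.
nt, dt = 2048, 0.002
t = np.arange(nt) * dt
field = np.sin(2 * np.pi * 12.0 * t) * np.exp(-t)  # toy wavefield trace
freqs = np.array([5.0, 12.0, 30.0])                # inversion frequencies

spectrum = np.zeros(len(freqs), dtype=complex)
for it in range(nt):                               # time-stepping loop
    spectrum += field[it] * np.exp(-2j * np.pi * freqs * t[it]) * dt

# Reference: direct DFT over the stored history.
direct = np.array([np.sum(field * np.exp(-2j * np.pi * f * t)) * dt
                   for f in freqs])
assert np.allclose(spectrum, direct)
```

In an actual FD simulation the same accumulation runs at every grid point, which is why the kernel memory footprint scales with the number of frequencies rather than the number of time steps.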
Evangeliou, Nikolaos; Hamburger, Thomas; Cozic, Anne; Balkanski, Yves; Stohl, Andreas
2017-07-01
This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the Former Soviet Union and released about 10^19 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts to assess the magnitude of the emissions were made that were based on the knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations refer to a data rescue attempt that started more than 10 years ago, with a final goal to provide available measurements to anyone interested. Regarding our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq or 30-50 % higher than what was previously published. From the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10 % less than the prior total releases. The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order to better match both concentration
Directory of Open Access Journals (Sweden)
N. Evangeliou
2017-07-01
This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the Former Soviet Union and released about 10^19 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts to assess the magnitude of the emissions were made that were based on the knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations refer to a data rescue attempt that started more than 10 years ago, with a final goal to provide available measurements to anyone interested. Regarding our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq or 30–50 % higher than what was previously published. From the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10 % less than the prior total releases. The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order
The Compton generator revisited
Siboni, S.
2014-09-01
The Compton generator, introduced in 1913 by the US physicist A H Compton as a relatively simple device to detect the Earth's rotation with respect to the distant stars, is analyzed and discussed in a general perspective. The paper introduces a generalized definition of the generator, emphasizing the special features of the original apparatus, and provides a suggestive interpretation of the way the device works. To this end, an intriguing electromagnetic analogy is developed, which turns out to be particularly useful in simplifying the calculations. Besides the more extensive description of the Compton generator in itself, the combined use of concepts and methods coming from different fields of physics, such as particle dynamics in moving reference frames, continuum mechanics and electromagnetism, may be of interest to both teachers and graduate students.
Directory of Open Access Journals (Sweden)
A. Richter
2009-11-01
Tropospheric glyoxal and formaldehyde columns retrieved from the SCIAMACHY satellite instrument in 2005 are used with the IMAGESv2 global chemistry-transport model and its adjoint in a two-compound inversion scheme designed to estimate the continental source of glyoxal. The formaldehyde observations provide an important constraint on the production of glyoxal from isoprene in the model, since the degradation of isoprene constitutes an important source of both glyoxal and formaldehyde. Current modelling studies largely underestimate the observed glyoxal satellite columns, pointing to the existence of an additional land glyoxal source of biogenic origin. We include an extra glyoxal source in the model and we explore its possible distribution and magnitude through two inversion experiments. In the first case, the additional source is represented as a direct glyoxal emission, and in the second, as a secondary formation through the oxidation of an unspecified glyoxal precursor. Besides this extra source, the inversion scheme optimizes the primary glyoxal and formaldehyde emissions, as well as their secondary production from other identified non-methane volatile organic precursors of anthropogenic, pyrogenic and biogenic origin.
In the first inversion experiment, the additional direct source, estimated at 36 Tg/yr, represents 38% of the global continental source, whereas the contribution of isoprene is equally important (30%), the remainder being accounted for by anthropogenic (20%) and pyrogenic fluxes. The inversion succeeds in reducing the underestimation of the glyoxal columns by the model, but it leads to a severe overestimation of glyoxal surface concentrations in comparison with in situ measurements. In the second scenario, the inferred total global continental glyoxal source is estimated at 108 Tg/yr, almost two times higher than the global a priori source. The extra secondary source is the largest contribution to the global glyoxal
Energy Technology Data Exchange (ETDEWEB)
Annuar, A.; Gandhi, P.; Alexander, D. M.; Lansbury, G. B.; Moro, A. Del [Centre for Extragalactic Astronomy, Department of Physics, University of Durham, South Road, Durham, DH1 3LE (United Kingdom); Arévalo, P. [Instituto de Física y Astronomía, Facultad de Ciencias, Universidad de Valparaíso, Gran Bretana N 1111, Playa Ancha, Valparaíso (Chile); Ballantyne, D. R. [Center for Relativistic Astrophysics, School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 (United States); Baloković, M.; Brightman, M.; Harrison, F. A. [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Bauer, F. E. [EMBIGGEN Anillo, Concepción (Chile); Boggs, S. E.; Craig, W. W. [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Brandt, W. N. [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States); Christensen, F. E. [DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, DK-2800 Lyngby (Denmark); Hailey, C. J. [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Hickox, R. C. [Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755 (United States); Matt, G. [Dipartimento di Matematica e Fisica, Universitá degli Studi Roma Tre, via della Vasca Navale 84, I-00146 Roma (Italy); Puccetti, S. [ASI Science Data Center, via Galileo Galilei, I-00044 Frascati (Italy); Ricci, C. [Department of Astronomy, Kyoto University, Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan); and others
2015-12-10
We present two Nuclear Spectroscopic Telescope Array (NuSTAR) observations of the local Seyfert 2 active galactic nucleus (AGN) and an ultraluminous X-ray source (ULX) candidate in NGC 5643. Together with archival data from Chandra, XMM-Newton, and Swift-BAT, we perform a high-quality broadband spectral analysis of the AGN over two decades in energy (∼0.5–100 keV). Previous X-ray observations suggested that the AGN is obscured by a Compton-thick (CT) column of obscuring gas along our line of sight. However, the lack of high-quality ≳10 keV observations, together with the presence of a nearby X-ray luminous source, NGC 5643 X–1, have left significant uncertainties in the characterization of the nuclear spectrum. NuSTAR now enables the AGN and NGC 5643 X–1 to be separately resolved above 10 keV for the first time and allows a direct measurement of the absorbing column density toward the nucleus. The new data show that the nucleus is indeed obscured by a CT column of N_H ≳ 5 × 10^24 cm^−2. The range of 2–10 keV absorption-corrected luminosity inferred from the best-fitting models is L_(2–10,int) = (0.8–1.7) × 10^42 erg s^−1, consistent with that predicted from multiwavelength intrinsic luminosity indicators. In addition, we also study the NuSTAR data for NGC 5643 X–1 and show that it exhibits evidence of a spectral cutoff at energy E ∼ 10 keV, similar to that seen in other ULXs observed by NuSTAR. Along with the evidence for significant X-ray luminosity variations in the 3–8 keV band from 2003 to 2014, our results further strengthen the ULX classification of NGC 5643 X–1.
Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method
Desmal, Abdulla
2014-07-01
A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
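The "linear shrinkage" step at the core of such sparsity-promoting schemes is soft thresholding: each update is pulled toward zero by a threshold, which drives small coefficients of the contrast profile exactly to zero. The operator below is the standard soft-threshold (proximal operator of the L1 norm); the values are arbitrary illustrations.

```python
import numpy as np

# Soft-thresholding (shrinkage) operator: the proximal map of
# tau * ||x||_1. Entries with |x| <= tau become exactly zero,
# which is what promotes sparse reconstructions.
def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([-3.0, -0.2, 0.0, 0.5, 2.0])
y = soft_threshold(x, 0.4)
assert np.allclose(y, [-2.6, 0.0, 0.0, 0.1, 1.6])
```

In an inexact-Newton setting, an update of this kind would be applied after each linearized solve; the preconditioning and weight-reduction details of the paper's scheme are not reproduced here.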
Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method
Desmal, Abdulla; Bagci, Hakan
2014-01-01
A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
New Insights on the Uncertainties in Finite-Fault Earthquake Source Inversion
Razafindrakoto, Hoby
2015-04-01
Earthquake source inversion is a non-linear problem that leads to non-unique solutions. The aim of this dissertation is to understand the uncertainty and reliability in earthquake source inversion, as well as to quantify variability in earthquake rupture models. The source inversion is performed using Bayesian inference. This technique augments optimization approaches through its ability to image the entire solution space that is consistent with the data and prior information. In this study, the uncertainty related to the choice of source-time function and crustal structure is investigated. Three predefined analytical source-time functions are analyzed: an isosceles triangle, and Yoffe functions with acceleration times of 0.1 and 0.3 s. The use of the isosceles triangle as source-time function is found to bias the finite-fault source inversion results. It accelerates the rupture to propagate faster compared to that of the Yoffe function. Moreover, it generates an artificial linear correlation between parameters that does not exist for the Yoffe source-time functions. The effect of inadequate knowledge of Earth's crustal structure on earthquake rupture models is subsequently investigated. The results show that one-dimensional structure variability leads to changes in parameter resolution, with a broadening of the posterior PDFs and shifts in the peak location. These changes in the PDFs of kinematic parameters are associated with the blurring effect of using incorrect Earth structure. As an application to a real earthquake, finite-fault source models for the 2009 L'Aquila earthquake are examined using one- and three-dimensional crustal structures. One-dimensional structure is found to degrade the data fitting. However, there is no significant effect on the rupture parameters aside from differences in the spatial slip extension. Stable features are maintained for both
pyGIMLi: An open-source library for modelling and inversion in geophysics
Rücker, Carsten; Günther, Thomas; Wagner, Florian M.
2017-12-01
Many tasks in applied geosciences cannot be solved by single measurements, but require the integration of geophysical, geotechnical and hydrological methods. Numerical simulation techniques are essential both for planning and interpretation, as well as for the process understanding of modern geophysical methods. These trends encourage open, simple, and modern software architectures aiming at a uniform interface for interdisciplinary and flexible modelling and inversion approaches. We present pyGIMLi (Python Library for Inversion and Modelling in Geophysics), an open-source framework that provides tools for modelling and inversion of various geophysical as well as hydrological methods. The modelling component supplies discretization management and the numerical basis for finite-element and finite-volume solvers in 1D, 2D and 3D on arbitrarily structured meshes. The generalized inversion framework solves the minimization problem with a Gauss-Newton algorithm for any physical forward operator and provides opportunities for uncertainty and resolution analyses. More general requirements, such as flexible regularization strategies, time-lapse processing and different ways of coupling individual methods, are provided independently of the actual methods used. The usage of pyGIMLi is first demonstrated by solving the steady-state heat equation, followed by a demonstration of more complex capabilities for the combination of different geophysical data sets. A fully coupled hydrogeophysical inversion of electrical resistivity tomography (ERT) data of a simulated tracer experiment is presented that allows the underlying hydraulic conductivity distribution of the aquifer to be reconstructed directly. Another example demonstrates the improvement of jointly inverting ERT and ultrasonic data with respect to saturation by a new approach that incorporates petrophysical relations in the inversion. Potential applications of the presented framework are manifold and include time
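The Gauss-Newton scheme at the heart of such inversion frameworks can be sketched compactly. The following is a hedged NumPy illustration of the damped update step, not the actual pyGIMLi API; the toy forward model (an exponential decay with two hypothetical parameters) exists only to exercise the update.

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, lam=1e-2, n_iter=100):
    """Damped Gauss-Newton: solve (J^T J + lam*I) dm = J^T r and update m."""
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_iter):
        r = d_obs - forward(m)          # data residual
        J = jacobian(m)                 # sensitivity (Jacobian) matrix
        dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
        # Simple backtracking: shrink the step until the misfit does not grow.
        step = 1.0
        while (np.linalg.norm(d_obs - forward(m + step * dm))
               > np.linalg.norm(r) and step > 1e-6):
            step *= 0.5
        m = m + step * dm
    return m

# Toy forward problem (hypothetical): d_i = a * exp(-b * t_i)
t = np.linspace(0.0, 4.0, 20)
forward = lambda m: m[0] * np.exp(-m[1] * t)
jacobian = lambda m: np.column_stack(
    [np.exp(-m[1] * t), -m[0] * t * np.exp(-m[1] * t)])

m_true = np.array([2.0, 0.5])
m_est = gauss_newton(forward, jacobian, forward(m_true), [1.5, 0.8])
```

Real frameworks replace the identity damping term with a smoothness-regularization operator and adapt the regularization weight between iterations; the structure of the update is the same.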
Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data
Energy Technology Data Exchange (ETDEWEB)
Gary D. Egbert
2007-03-22
The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited-memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N-dimensional data subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited-memory methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton-type Occam minimum-structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG-style inversion. Memory requirements, while greater than for something like CG, are modest enough that the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object-oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before
Ambient Seismic Source Inversion in a Heterogeneous Earth: Theory and Application to the Earth's Hum
Ermert, Laura; Sager, Korbinian; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-11-01
The sources of ambient seismic noise are extensively studied both to better understand their influence on ambient noise tomography and related techniques, and to infer constraints on their excitation mechanisms. Here we develop a gradient-based inversion method to infer the space-dependent and time-varying source power spectral density of the Earth's hum from cross correlations of continuous seismic data. The precomputation of wavefields using spectral elements allows us to account for both finite-frequency sensitivity and for three-dimensional Earth structure. Although similar methods have been proposed previously, they have not yet been applied to data to the best of our knowledge. We apply this method to image the seasonally varying sources of Earth's hum during North and South Hemisphere winter. The resulting models suggest that hum sources are localized, persistent features that occur at Pacific coasts or shelves and in the North Atlantic during North Hemisphere winter, as well as South Pacific coasts and several distinct locations in the Southern Ocean in South Hemisphere winter. The contribution of pelagic sources from the central North Pacific cannot be constrained. Besides improving the accuracy of noise source locations through the incorporation of finite-frequency effects and 3-D Earth structure, this method may be used in future cross-correlation waveform inversion studies to provide initial source models and source model updates.
International Nuclear Information System (INIS)
Zhou, Jianmei; Shang, Qinglong; Wang, Hongnian; Wang, Jianxun; Yin, Changchun
2014-01-01
We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. Popular inversion methods (e.g. Occam's inversion) parameterize the media into a large number of layers of fixed thickness and reconstruct only the conductivities, which does not enable the recovery of sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive analytic expressions for the Fréchet derivatives of CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces in the inversion can significantly improve the results: the algorithm can not only reconstruct the sharp interfaces between layers, but also obtain conductivities close to the true values. (paper)
Iterative and range test methods for an inverse source problem for acoustic waves
International Nuclear Information System (INIS)
Alves, Carlos; Kress, Rainer; Serranho, Pedro
2009-01-01
We propose two methods for solving an inverse source problem for time-harmonic acoustic waves. Based on the reciprocity gap principle, a nonlinear equation is presented for the locations and intensities of the point sources, which can be solved via Newton iterations. To provide an initial guess for this iteration we suggest a range test algorithm for approximating the source locations. We give a mathematical foundation for the range test and demonstrate its feasibility in connection with the iteration method through some numerical examples.
Voltmeter with Compton electrons
Energy Technology Data Exchange (ETDEWEB)
Pereira, N R; Gorbics, S G; Weidenheimer, D M [Berkeley Research Associates, Springfield, VA (United States)
1997-12-31
A technique to measure the electron end point energy of bremsstrahlung in the MV regime using only two detectors is described. One of the detectors measures the total radiation; the other filters out all except the hardest photons by looking only at their Compton electrons, whose average energy is determined with a magnetic field. (author). 4 figs., 2 refs.
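The kinematics underlying such a measurement follow directly from the Compton formula: the scattered-photon energy fixes the electron energy. A minimal sketch (illustrative helper, not the instrument's analysis code):

```python
import math

ME_C2 = 0.51099895  # electron rest energy in MeV

def compton_kinematics(e_mev, theta_rad):
    """Return (scattered photon energy E', Compton electron energy T = E - E')."""
    e_out = e_mev / (1.0 + (e_mev / ME_C2) * (1.0 - math.cos(theta_rad)))
    return e_out, e_mev - e_out

# 1 MeV photon backscattered (theta = pi): the electron sits at the Compton edge
e_prime, t_edge = compton_kinematics(1.0, math.pi)
```

At MV energies the Compton-edge electron carries most of the photon energy, which is why measuring the hardest Compton electrons with a magnetic spectrometer constrains the bremsstrahlung end point.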
Fast sampling algorithm for the simulation of photon Compton scattering
International Nuclear Information System (INIS)
Brusa, D.; Salvat, F.
1996-01-01
A simple algorithm for the simulation of Compton interactions of unpolarized photons is described. The energy and direction of the scattered photon, as well as the active atomic electron shell, are sampled from the double-differential cross section obtained by Ribberfors from the relativistic impulse approximation. The algorithm consistently accounts for Doppler broadening and electron binding effects. Simplifications of Ribberfors' formula, required for efficient random sampling, are discussed. The algorithm involves a combination of inverse transform, composition and rejection methods. A parameterization of the Compton profile is proposed from which the simulation of Compton events can be performed analytically in terms of a few parameters that characterize the target atom, namely shell ionization energies, occupation numbers and maximum values of the one-electron Compton profiles. (orig.)
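The full Ribberfors impulse-approximation sampling is involved, but the inverse-transform/composition/rejection machinery it builds on can be illustrated in the free-electron (Klein-Nishina) limit, i.e. ignoring Doppler broadening and binding. A hedged sketch that samples the scattered-to-incident energy ratio by simple rejection:

```python
import random

def sample_kn_eps(k, rng=random.random):
    """Sample eps = E'/E for Compton scattering of a photon of reduced energy
    k = E / (m_e c^2) from a free electron at rest, by rejection sampling of
    the Klein-Nishina density dsigma/deps ~ eps + 1/eps - sin^2(theta)."""
    eps_min = 1.0 / (1.0 + 2.0 * k)          # backscatter limit
    g_max = eps_min + 1.0 / eps_min          # eps + 1/eps peaks at eps_min on [eps_min, 1]
    while True:
        eps = eps_min + (1.0 - eps_min) * rng()
        cos_t = 1.0 + 1.0 / k - 1.0 / (k * eps)   # Compton kinematics
        sin2_t = 1.0 - cos_t * cos_t
        if rng() * g_max <= eps + 1.0 / eps - sin2_t:
            return eps
```

For a 511 keV photon (k = 1) the sampled ratios lie in [1/3, 1]. A production algorithm such as the one described in the abstract additionally composes this with shell occupation numbers and Compton profiles to account for binding and Doppler effects.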
Ragon, Théa; Sladen, Anthony; Simons, Mark
2018-05-01
The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of
Real-time Inversion of Tsunami Source from GNSS Ground Deformation Observations and Tide Gauges.
Arcas, D.; Wei, Y.
2017-12-01
Over the last decade, the NOAA Center for Tsunami Research (NCTR) has developed an inversion technique to constrain tsunami sources based on the use of Green's functions in combination with data reported by NOAA's Deep-ocean Assessment and Reporting of Tsunamis (DART®) systems. The system has consistently proven effective in providing highly accurate tsunami forecasts of wave amplitude throughout an entire basin. However, improvement is necessary in two critical areas: reduction of data latency for near-field tsunami predictions and reduction of maintenance cost of the network. Two types of sensors have been proposed as supplementary to the existing network of DART® systems: Global Navigation Satellite System (GNSS) stations and coastal tide gauges. The use of GNSS stations to provide autonomous geo-spatial positioning at specific sites during an earthquake has been proposed in recent years to supplement the DART® array in tsunami source inversion. GNSS technology has the potential to provide substantial contributions in the two critical areas of DART® technology where improvement is most necessary. The present study uses GNSS ground displacement observations of the 2011 Tohoku-Oki earthquake in combination with NCTR's operational database of Green's functions to produce a rapid estimate of the tsunami source based on GNSS observations alone. The solution is then compared with that obtained via DART® data inversion, and the difficulties in obtaining an accurate GNSS-based solution are underlined. The study also identifies the set of conditions required for source inversion from coastal tide gauges, using the degree of nonlinearity of the signal as a primary criterion. We then proceed to identify the conditions and scenarios under which a particular gauge could be used to invert a tsunami source.
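The Green's-function approach reduces source inversion to a linear combination: precomputed waveforms from unit sources are weighted so their sum fits the observations. A toy least-squares sketch (synthetic stand-in data, not NCTR's operational code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Columns of G: precomputed gauge/DART time series, one per unit source
# (random stand-ins here; an operational database stores modeled waveforms).
n_samples, n_sources = 200, 3
G = rng.standard_normal((n_samples, n_sources))

w_true = np.array([1.5, 0.0, 0.7])   # hypothetical unit-source weights
d_obs = G @ w_true                    # "observed" tsunami records

# Least-squares source inversion: find w minimizing ||G w - d_obs||
w_est, *_ = np.linalg.lstsq(G, d_obs, rcond=None)
```

The linearity assumption is exactly why the abstract flags tide-gauge nonlinearity as the limiting criterion: once the recorded signal responds nonlinearly to the source, this superposition no longer holds.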
Constraining earthquake source inversions with GPS data: 1. Resolution-based removal of artifacts
Page, M.T.; Custodio, S.; Archuleta, R.J.; Carlson, J.M.
2009-01-01
We present a resolution analysis of an inversion of GPS data from the 2004 Mw 6.0 Parkfield earthquake. This earthquake was recorded at thirteen 1-Hz GPS receivers, which provides for a truly coseismic data set that can be used to infer the static slip field. We find that the resolution of our inverted slip model is poor at depth and near the edges of the modeled fault plane that are far from GPS receivers. The spatial heterogeneity of the model resolution in the static field inversion leads to artifacts in poorly resolved areas of the fault plane. These artifacts look qualitatively similar to asperities commonly seen in the final slip models of earthquake source inversions, but in this inversion they are caused by a surplus of free parameters. The location of the artifacts depends on the station geometry and the assumed velocity structure. We demonstrate that a nonuniform gridding of model parameters on the fault can remove these artifacts from the inversion. We generate a nonuniform grid with a grid spacing that matches the local resolution length on the fault and show that it outperforms uniform grids, which either generate spurious structure in poorly resolved regions or lose recoverable information in well-resolved areas of the fault. In a synthetic test, the nonuniform grid correctly averages slip in poorly resolved areas of the fault while recovering small-scale structure near the surface. Finally, we present an inversion of the Parkfield GPS data set on the nonuniform grid and analyze the errors in the final model. Copyright 2009 by the American Geophysical Union.
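The resolution analysis described here rests on the model resolution matrix of the regularized inverse operator: m_est = R m_true, with diagonal entries near 1 marking well-resolved parameters and small values marking smeared ones. A minimal sketch with a toy design matrix (not the Parkfield geometry):

```python
import numpy as np

def resolution_matrix(G, lam):
    """R = (G^T G + lam*I)^-1 G^T G for damped least squares, so m_est = R m_true."""
    GtG = G.T @ G
    return np.linalg.solve(GtG + lam * np.eye(GtG.shape[0]), GtG)

# Toy design matrix: parameter 0 (a slip patch near the stations) is strongly
# sensed; parameter 1 (a deep patch far from the stations) is weakly sensed.
G = np.array([[1.0, 0.05],
              [0.8, 0.02],
              [0.9, 0.04]])
R = resolution_matrix(G, lam=1e-2)
```

Matching the fault-grid spacing to the local resolution length, as the paper does, amounts to coarsening the parameterization wherever diag(R) is small.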
International Nuclear Information System (INIS)
Gladkikh, P.I.; Telegin, Yu.N.; Karnaukhov, I.M.
2002-01-01
The feasibility of the development of intense X-ray sources based on Compton scattering in laser-electron storage rings is discussed. The results of the electron beam dynamics simulation involving Compton and intrabeam scattering are presented
Energy Technology Data Exchange (ETDEWEB)
Pratt, R.H., E-mail: rpratt@pitt.edu [Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States); LaJohn, L.A., E-mail: lal18@pitt.edu [Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States); Florescu, V., E-mail: flor@barutu.fizica.unibuc.ro [Centre for Advanced Quantum Physics, University of Bucharest, MG-11 Bucharest-Magurele, 077125 Magurele (Romania); Suric, T., E-mail: suric@irb.hr [R. Boskovic Institute, Bijenicka 54, 10000 Zagreb (Croatia); Chatterjee, B.K., E-mail: barun_k_chatterjee@yahoo.com [Department of Physics, Bose Institute, Kolkata 700009 (India); Roy, S.C., E-mail: suprakash.roy@gmail.com [Department of Physics, Bose Institute, Kolkata 700009 (India)
2010-02-15
We review the standard theory of Compton scattering from bound electrons, and we describe recent findings that require modification of the usual understanding, noting the nature of consequences for experiment. The subject began with Compton and scattering from free electrons. Experiment actually involved bound electrons, and this was accommodated with the use of impulse approximation (IA), which described inelastic scattering from bound electrons in terms of scattering from free electrons. This was good for the Compton peak but failed for soft final photons. The standard theory was formalized by Eisenberger and Platzman (EP) [1970. Phys. Rev. A 2, 415], whose work also suggested why impulse approximation was better than one would expect, for doubly differential cross sections (DDCS), but not for triply differential cross sections (TDCS). A relativistic version of IA (RIA) was worked out by Ribberfors [1975. Phys. Rev. B 12, 2067]. And Suric et al. [1991. Phys. Rev. Lett. 67, 189] and Bergstrom et al. [1993. Phys. Rev. A 48, 1134] developed a full relativistic second order S-matrix treatment, not making impulse approximation, but within independent particle approximation (IPA). Newer developments in the theory of Compton scattering include: (1) Demonstration that the EP estimates of the validity of IA are incorrect, although the qualitative conclusion remains unchanged; IA is not to be understood as the first term in a standard series expansion. (2) The greater validity of IA for DDCS than for the TDCS, which when integrated give DDCS, is related to the existence of a sum rule, only valid for DDCS. (3) The so-called 'asymmetry' of a Compton profile is primarily to be understood as simply the shift of the peak position in the profile; symmetric and anti-symmetric deviations from a shifted Compton profile are very small, except for high Z inner shells where further p⃗·A⃗ effects come into play. (4) Most relativistic effects, except at low
Multisource inverse-geometry CT. Part II. X-ray source design and prototype
Energy Technology Data Exchange (ETDEWEB)
Neculaes, V. Bogdan, E-mail: neculaes@ge.com; Caiafa, Antonio; Cao, Yang; De Man, Bruno; Edic, Peter M.; Frutschy, Kristopher; Gunturi, Satish; Inzinna, Lou; Reynolds, Joseph; Vermilyea, Mark; Wagner, David; Zhang, Xi; Zou, Yun [GE Global Research, Niskayuna, New York 12309 (United States); Pelc, Norbert J. [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Lounsberry, Brian [Healthcare Science Technology, GE Healthcare, West Milwaukee, Wisconsin 53219 (United States)
2016-08-15
Purpose: This paper summarizes the development of a high-power distributed x-ray source, or “multisource,” designed for inverse-geometry computed tomography (CT) applications [see B. De Man et al., “Multisource inverse-geometry CT. Part I. System concept and development,” Med. Phys. 43, 4607–4616 (2016)]. The paper presents the evolution of the source architecture, component design (anode, emitter, beam optics, control electronics, high voltage insulator), and experimental validation. Methods: Dispenser cathode emitters were chosen as electron sources. A modular design was adopted, with eight electron emitters (two rows of four emitters) per module, wherein tungsten targets were brazed onto copper anode blocks—one anode block per module. A specialized ceramic connector provided high voltage standoff capability and cooling oil flow to the anode. A matrix topology and low-noise electronic controls provided switching of the emitters. Results: Four modules (32 x-ray sources in two rows of 16) have been successfully integrated into a single vacuum vessel and operated on an inverse-geometry computed tomography system. Dispenser cathodes provided high beam current (>1000 mA) in pulse mode, and the electrostatic lenses focused the current beam to a small optical focal spot size (0.5 × 1.4 mm). Controlled emitter grid voltage allowed the beam current to be varied for each source, providing the ability to modulate beam current across the fan of the x-ray beam, denoted as a virtual bowtie filter. The custom designed controls achieved x-ray source switching in <1 μs. The cathode-grounded source was operated successfully up to 120 kV. Conclusions: A high-power, distributed x-ray source for inverse-geometry CT applications was successfully designed, fabricated, and operated. Future embodiments may increase the number of spots and utilize fast read out detectors to increase the x-ray flux magnitude further, while still staying within the stationary target inherent
An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation
Asiri, Sharefa M.; Zayane, Chadia; Laleg-Kirati, Taous-Meriem
2015-01-01
Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the scope of observers has recently been extended to the estimation of some unknowns for systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate the state and source components for a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and insight into the impact of the measurements' size and location is provided.
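The observer idea is simple to state for a finite-dimensional system: copy the dynamics and correct with the measured output mismatch, x̂⁺ = A x̂ + L (y − C x̂), with the gain L chosen so A − LC is stable. A minimal discrete-time Luenberger sketch (toy matrices, not the paper's wave-equation discretization):

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [-0.1, 0.95]])     # toy state dynamics
C = np.array([[1.0, 0.0]])      # only the first state is measured
L = np.array([[0.8], [0.5]])    # observer gain (A - L C has eigenvalues inside the unit circle)

x = np.array([[1.0], [-1.0]])   # true (hidden) initial state
xh = np.zeros((2, 1))           # observer starts with no knowledge

for _ in range(200):
    y = C @ x                        # measurement of the true system
    x = A @ x                        # true system evolves
    xh = A @ xh + L @ (y - C @ xh)   # observer: model copy + output correction

err = np.linalg.norm(x - xh)
```

The estimation error obeys e⁺ = (A − LC) e, so it decays regardless of the unknown initial state; the adaptive observer of the paper augments this state estimate with unknown source components.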
Entekhabi, Mozhgan Nora; Isakov, Victor
2018-05-01
In this paper, we study the increasing stability in the inverse source problem for the Helmholtz equation in the plane when the source term is assumed to be compactly supported in a bounded domain Ω with a sufficiently smooth boundary. Using the Fourier transform in the frequency domain, bounds for the Hankel functions and for scattering solutions in the complex plane, improved bounds for the analytic continuation, and the exact observability for the wave equation, we obtain our goals: a sharp uniqueness result and an increasing stability estimate as the wave number interval grows.
Elastic frequency-domain finite-difference contrast source inversion method
International Nuclear Information System (INIS)
He, Qinglong; Chen, Yong; Han, Bo; Li, Yang
2016-01-01
In this work, we extend the finite-difference contrast source inversion (FD-CSI) method to the frequency-domain elastic wave equations, where the parameters describing the subsurface structure are simultaneously reconstructed. The FD-CSI method is an iterative nonlinear inversion method, which exhibits several strengths. First, the finite-difference operator only relies on the background media and the given angular frequency, both of which are unchanged during inversion. Therefore, the matrix decomposition is performed only once at the beginning of the iteration if a direct solver is employed. This makes the inversion process relatively efficient in terms of the computational cost. In addition, the FD-CSI method automatically normalizes different parameters, which could avoid the numerical problems arising from the difference of the parameter magnitude. We exploit a parallel implementation of the FD-CSI method based on the domain decomposition method, ensuring a satisfactory scalability for large-scale problems. A simple numerical example with a homogeneous background medium is used to investigate the convergence of the elastic FD-CSI method. Moreover, the Marmousi II model proposed as a benchmark for testing seismic imaging methods is presented to demonstrate the performance of the elastic FD-CSI method in an inhomogeneous background medium. (paper)
Davari, Sadegh; Sha, Lui
1992-01-01
In the design of real-time systems, tasks are often assigned priorities. Preemptive priority-driven schedulers are used to schedule tasks to meet the timing requirements. Priority inversion is the term used to describe the situation when a higher priority task's execution is delayed by lower priority tasks. Priority inversion can occur when there is contention for resources among tasks of different priorities. The duration of priority inversion could be long enough to cause tasks to miss their deadlines. Priority inversion cannot be completely eliminated. However, it is important to identify sources of priority inversion and minimize its duration. In this paper, a comprehensive review of the problem of, and solutions to, unbounded priority inversion is presented.
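The bounded versus unbounded behavior is easy to see in a toy tick-by-tick schedule: a low-priority task L holds a lock that high-priority H needs, and a medium-priority task M arrives with pure CPU work. Without priority inheritance M preempts L, so H's blocking grows with M's workload; with inheritance L runs at H's priority until it releases. (The tick counts below are illustrative, not from the paper.)

```python
def h_blocking_time(inherit):
    """Ticks that the high-priority task H waits for the lock.

    L (priority 1) holds the lock from t=0 and needs 5 ticks to finish its
    critical section.  H (priority 3) arrives at t=2 and blocks on the lock.
    M (priority 2) arrives at t=3 with 10 ticks of lock-free CPU work.
    """
    l_remaining, m_remaining = 5, 10
    t = 0
    while True:
        h_waiting = t >= 2
        # With inheritance, L runs at H's priority while H is blocked on its lock.
        l_prio = 3 if (inherit and h_waiting) else 1
        if t >= 3 and m_remaining > 0 and 2 > l_prio:
            m_remaining -= 1        # M preempts L: the inversion grows
        else:
            l_remaining -= 1        # L progresses through its critical section
            if l_remaining == 0:
                return (t + 1 - 2) if h_waiting else 0  # L releases; H acquires
        t += 1
```

Here H waits 13 ticks without inheritance (all of M's burst plus L's remainder) but only 3 with it (just the remainder of L's critical section); adding more medium-priority tasks lengthens the first number arbitrarily while the second stays bounded.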
Source-jerk analysis using a semi-explicit inverse kinetic technique
International Nuclear Information System (INIS)
Spriggs, G.D.; Pederson, R.A.
1985-01-01
A method is proposed for measuring the effective reproduction factor, k, in subcritical systems. The method uses the transient response of a subcritical system to the sudden removal of an extraneous neutron source (i.e., a source jerk). The response is analyzed using an inverse kinetic technique that least-squares fits the exact analytical solution corresponding to a source-jerk transient as derived from the point-reactor model. It has been found that the technique can provide an accurate means of measuring k in systems that are close to critical (i.e., 0.95 < k < 1.0). As a system becomes more subcritical (i.e., k << 1.0) spatial effects can introduce significant biases depending on the source and detector positions. However, methods are available that can correct for these biases and, hence, can allow measuring subcriticality in systems with k as low as 0.5. 12 refs., 3 figs
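The point-reactor machinery behind the source-jerk technique can be sketched with a single delayed-neutron group: the system sits at a source-driven subcritical steady state, the source is removed at t = 0, and inverse kinetics recovers the reactivity (hence k) from the decaying flux. A hedged toy simulation (all parameter values are illustrative, not from the paper):

```python
beta, lam, Lam = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay constant, generation time
rho_true = -0.005                        # subcritical reactivity, rho = (k - 1)/k
S = 1.0e5                                # extraneous source strength

# Source-driven steady state: 0 = (rho/Lam) * n0 + S  ->  n0 = -S*Lam/rho
n0 = -S * Lam / rho_true
n, C = n0, beta * n0 / (lam * Lam)       # flux and precursor equilibrium

dt, steps = 1.0e-4, 5000                 # integrate 0.5 s after the jerk
ns = [n]
for _ in range(steps):                   # forward point kinetics with S removed
    dn = ((rho_true - beta) / Lam) * n + lam * C
    dC = (beta / Lam) * n - lam * C
    n, C = n + dt * dn, C + dt * dC
    ns.append(n)

# Inverse kinetics: rebuild the precursors from the flux history, solve for rho
Ci = beta * n0 / (lam * Lam)
for i in range(steps):
    Ci += dt * ((beta / Lam) * ns[i] - lam * Ci)
dn_dt = (ns[-1] - ns[-2]) / dt
rho_est = beta + Lam * (dn_dt - lam * Ci) / ns[-1]
k_est = 1.0 / (1.0 - rho_est)
```

In this idealized point-reactor setting the inverse-kinetic fit recovers the true reactivity almost exactly; the spatial biases discussed in the abstract arise precisely because a real system departs from the point-reactor model.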
International Nuclear Information System (INIS)
Okuyama, Shinichi; Sera, Koichiro; Fukuda, Hiroshi; Shishido, Fumio; Matsuzawa, Taiju
1977-01-01
Tomographic images of an object are obtainable by irradiating it with a collimated beam of monochromatic gamma rays and recording the resultant Compton rays scattered upward at right angles. This is the scattered-ray principle of the formation of a radiation image that differs from the traditional "silhouette principle" of radiography, and that bears prospects of stereopsis as well as cross-section tomography. (Evans, J.)
Energy Technology Data Exchange (ETDEWEB)
Ziock, Klaus-Peter [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Braverman, Joshua B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Harrison, Mark J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hornback, Donald Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fabris, Lorenzo [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Newby, Jason [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2013-09-26
Stand-off detection is one of the most important radiation detection capabilities for arms control and the control of illicit nuclear materials. For long range passive detection one requires a large detector and a means of “seeing through” the naturally occurring and varying background radiation, i.e. imaging. Arguably, Compton imaging is the best approach over much of the emission band suitable for long range detection. It provides not only imaging, but more information about the direction of incidence of each detected gamma-ray than the alternate approach of coded-aperture imaging. The directional information allows one to reduce the background and hence improve the sensitivity of a measurement. However, to make an efficient Compton imager requires localizing and measuring the simultaneous energy depositions when gamma-rays Compton scatter and are subsequently captured within a single, large detector volume. This concept has been demonstrated in semiconductor detectors (HPGe, CZT, Si), but at ~$1k/cm³ these materials are too expensive to build the large systems needed for standoff detection. Scintillator detectors, such as NaI(Tl), are two orders of magnitude less expensive and possess the energy resolution required to make such an imager. However, they do not currently have the ability to localize closely spaced, simultaneous energy depositions in a single large crystal. In this project we are applying a new technique that should, for the first time ever, allow cubic-millimeter event localization in a bulk scintillator crystal.
Skeletonized inversion of surface wave: Active source versus controlled noise comparison
Li, Jing; Hanafy, Sherif
2016-01-01
We have developed a skeletonized inversion method that inverts for the S-wave velocity distribution from surface-wave dispersion curves. Instead of attempting to fit every wiggle in the surface waves with predicted data, it inverts only the picked dispersion curve, thereby mitigating the problem of getting stuck in a local minimum. We have applied this method to a synthetic model and to seismic field data from the Qademah fault, located on the western side of Saudi Arabia. For comparison, we performed dispersion analysis for active-source and controlled-noise seismic data that had some receivers in common with the passive array. The active and passive data show good agreement in their dispersive characteristics. Our results demonstrate that skeletonized inversion can obtain reliable 1D and 2D S-wave velocity models for our geologic setting. A limitation is that a layered initial model must be built to calculate the Jacobian matrix, which is time consuming.
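Skeletonized inversion itself needs a forward dispersion solver and a Jacobian, but the role a picked dispersion curve plays can be illustrated with the much cruder wavelength-depth rule of thumb (sensitivity depth ≈ λ/3, Vs ≈ c/0.92 for a Poisson solid). This quick-look conversion is a standard field approximation, not the authors' method, and the picks below are invented:

```python
import numpy as np

# Picked Rayleigh-wave dispersion curve: frequency (Hz) -> phase velocity (m/s).
freq = np.array([5.0, 8.0, 12.0, 20.0, 30.0])
c    = np.array([850., 700., 560., 430., 380.])

wavelength = c / freq
depth = wavelength / 3.0   # empirical sensitivity depth, ~lambda/3
vs = c / 0.92              # Rayleigh velocity ~ 0.92 Vs for a Poisson solid

for z, v in sorted(zip(depth, vs)):
    print(f"z ~ {z:6.1f} m   Vs ~ {v:6.0f} m/s")
```

Higher frequencies sample shallower depths, which is why a dispersion curve constrains a 1D Vs profile at all; the skeletonized method replaces this rule of thumb with a proper iterative fit of the picked curve.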
International Nuclear Information System (INIS)
Winiarek, Victor
2014-01-01
Uncontrolled releases of pollutants into the atmosphere may be the consequence of various situations: accidents, for instance leaks or explosions in an industrial plant, or terrorist attacks such as biological bombs, especially in urban areas. In such situations, the authorities' objectives are various: predict the contaminated zones to apply first countermeasures such as evacuation of the concerned population; determine the source location; and assess the long-term polluted areas, for instance by deposition of persistent pollutants in the soil. To achieve these objectives, numerical models can be used to model the atmospheric dispersion of pollutants. We first present the different processes that govern the transport of pollutants in the atmosphere, then the different numerical models that are commonly used in this context. The choice between these models mainly depends on the scale and the details one seeks to take into account. We then present several inverse modeling methods to estimate the emission, as well as statistical methods to estimate prior errors, to which the inversion is very sensitive. Several case studies are presented, using synthetic data as well as real data, such as the estimation of source terms from the Fukushima accident in March 2011. From our results, we estimate the Cesium-137 emission to be between 12 and 19 PBq with a standard deviation between 15 and 65%, and the Iodine-131 emission to be between 190 and 380 PBq with a standard deviation between 5 and 10%. Concerning the localization of an unknown source of pollutant, two strategies can be considered. On one hand, parametric methods use a limited number of parameters to characterize the source term to be reconstructed; to do so, strong assumptions are made on the nature of the source, and the inverse problem is then to estimate these parameters. On the other hand, nonparametric methods attempt to reconstruct a full emission field. Several parametric and nonparametric methods are
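In the linear-Gaussian special case, the source-term inversion described above has a closed-form posterior (the standard best linear unbiased estimate used throughout atmospheric inverse modeling). A sketch with a toy observation operator and covariances; none of the numbers correspond to the Fukushima configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_src, n_obs = 3, 8
H = rng.normal(size=(n_obs, n_src))    # source-receptor (observation) operator
x_true = np.array([12.0, 5.0, 19.0])   # "true" emission rates (arbitrary units)
d = H @ x_true + 0.01 * rng.normal(size=n_obs)

x_b = np.zeros(n_src)                  # prior (background) emission
B = 100.0 * np.eye(n_src)              # prior error covariance
R = 1e-4 * np.eye(n_obs)               # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
x_a = x_b + K @ (d - H @ x_b)                 # posterior (analysis) mean
A = (np.eye(n_src) - K @ H) @ B               # posterior error covariance

print("posterior mean:", np.round(x_a, 2))
print("posterior std :", np.round(np.sqrt(np.diag(A)), 3))
```

The sensitivity of the result to B and R is exactly the sensitivity to prior errors that the thesis emphasizes: inflating R down-weights the observations and pulls the estimate back toward the background x_b.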
On an inverse source problem for enhanced oil recovery by wave motion maximization in reservoirs
Karve, Pranav M.
2014-12-28
© 2014, Springer International Publishing Switzerland. We discuss an optimization methodology for focusing wave energy on subterranean formations using strong-motion actuators placed on the ground surface. The motivation stems from the desire to increase the mobility of otherwise entrapped oil. The goal is to arrive at the spatial and temporal description of surface sources that are capable of maximizing mobility in the target reservoir. The focusing problem is posed as an inverse source problem. The underlying wave propagation problems are abstracted in two spatial dimensions, and the semi-infinite extent of the physical domain is negotiated by a buffer of perfectly-matched-layers (PMLs) placed at the domain's truncation boundary. We discuss two possible numerical implementations: their utility for deciding the spatio-temporal characteristics of optimal wave sources is shown via numerical experiments. Overall, the simulations demonstrate the inverse source method's ability to simultaneously optimize load locations and time signals, leading to the maximization of energy delivery to a target formation.
Energy Technology Data Exchange (ETDEWEB)
Fanelli, Cristiano V. [Sapienza Univ. of Rome (Italy)
2015-03-01
In this thesis work, results of the analysis of the polarization transfers measured in real Compton scattering (RCS) by the E07-002 Collaboration at Jefferson Lab Hall C are presented. The data were collected at large scattering angle (theta_cm = 70 deg) with a polarized incident photon beam at an average energy of 3.8 GeV. This kind of experiment allows one to understand more deeply the reaction mechanism involving a real photon by extracting both Compton form factors and Generalized Parton Distributions (GPDs), also relevant for possibly shedding light on the total angular momentum of the nucleon. The obtained results for the longitudinal and transverse polarization transfers K_LL and K_LT are of crucial importance, since they unambiguously confirm the disagreement between experimental data and the pQCD prediction, as found in the E99-114 experiment, and favor the handbag mechanism. The E99-114 and E07-002 results can contribute to attracting new interest in Compton scattering off a nucleon target, as demonstrated by the recent approval of an experimental proposal submitted to the Jefferson Lab PAC 42 for a wide-angle Compton scattering experiment at 8 and 10 GeV photon energies. The new experiments approved to run with the upgraded 12 GeV electron beam at JLab are characterized by much higher luminosities, and a new GEM tracker is under development to tackle the challenging backgrounds. Within this context, we present a new multistep tracking algorithm based on (i) a Neural Network (NN) designed for fast and efficient association of the hits measured by the GEM detector, which allows track identification, and (ii) the application of both a Kalman filter and a Rauch-Tung-Striebel smoother to further improve the track reconstruction. The full procedure, i.e. NN and filtering, appears very promising, with high performance in terms of both association efficiency and reconstruction accuracy, and these preliminary results will
Directory of Open Access Journals (Sweden)
J. S. de Villiers
2014-10-01
This research focuses on the inversion of geomagnetic variation field measurements to obtain source currents in the ionosphere. During a geomagnetic disturbance, the ionospheric currents create magnetic field variations that induce geoelectric fields, which drive geomagnetically induced currents (GIC) in power systems. These GIC may disturb the operation of power systems and cause damage to grounded power transformers. The geoelectric fields at any location of interest can be determined from the source currents in the ionosphere through a solution of the forward problem. Line currents running east–west along a given surface position are postulated to exist at a certain height above the Earth's surface. This physical arrangement results in the fields on the ground having magnetic north and down components, and an electric east component. Ionospheric currents are modelled by inverting Fourier integrals (over the wavenumber) of elementary geomagnetic fields using the Levenberg–Marquardt technique. The output parameters of the inversion model are the current strength, height and surface position of the ionospheric current system. A ground conductivity structure with five layers from Quebec, Canada, based on the layered-Earth model, is used to obtain the complex skin depth at a given angular frequency. This paper presents preliminary inversion results based on these structures and simulated geomagnetic fields. The results show some interesting features in the frequency domain. Model parameters obtained through inversion are within 2% of simulated values. This technique has applications for modelling the currents of electrojets at the equatorial and auroral regions, as well as currents in the magnetosphere.
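The inversion described above (recovering current strength, height and surface position with Levenberg–Marquardt) can be sketched for the simplest forward model: a single infinite east–west line current over a non-conducting half-space, which ignores the induced Earth currents the paper accounts for. The geometry, amplitudes and use of scipy are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

MU0_2PI = 2e-7  # mu0 / (2*pi), in T*m/A

def fields(params, x):
    """Ground-level B_north, B_down (T) from an infinite east-west line
    current of strength I (A) at height h (m) over surface position x0 (m)."""
    current, h, x0 = params
    r2 = (x - x0) ** 2 + h ** 2
    bx = MU0_2PI * current * h / r2          # magnetic north component
    bz = MU0_2PI * current * (x - x0) / r2   # magnetic down component
    return np.concatenate([bx, bz])

x_obs = np.linspace(-400e3, 400e3, 41)   # profile of magnetometer sites, m
true = (1.0e6, 110e3, 30e3)              # 1 MA electrojet at 110 km altitude
data = fields(true, x_obs)

# Levenberg-Marquardt fit of (I, h, x0) from the surface field profile
fit = least_squares(lambda p: fields(p, x_obs) - data,
                    x0=(5.0e5, 90e3, 0.0), method='lm')
print(np.round(fit.x / np.array(true), 3))  # ratios ~ [1. 1. 1.]
```

With noise-free synthetic data the three parameters are recovered essentially exactly; adding noise and the layered-Earth induction response is what makes the real problem harder.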
Micro-seismic Imaging Using a Source Independent Waveform Inversion Method
Wang, Hanchen
2016-04-18
Micro-seismology is attracting more and more attention in the exploration seismology community. The main goal in micro-seismic imaging is to find the source location and the ignition time in order to track the fracture expansion, which helps engineers monitor the reservoirs. Conventional imaging methods work fine in this field, but there are many limitations, such as manual picking, incorrect migration velocity and low signal-to-noise ratio (S/N). In traditional surface-survey imaging, full waveform inversion (FWI) is widely used. The FWI method updates the velocity model by minimizing the misfit between the observed data and the predicted data. Using FWI to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield, and overcomes the difficulties of manual picking and of an incorrect migration velocity model. However, waveform inversion of micro-seismic events faces its own problems: there is significant nonlinearity due to the unknown source location (space) and function (time). We have developed a source-independent FWI of micro-seismic events to simultaneously invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. To examine the accuracy of the inverted source image and velocity model, the extended image for the source wavelet in the z-axis is extracted. Also, the angle gather is calculated to check the applicability of the migration velocity. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity in synthetic experiments with both parts of the Marmousi and the SEG
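The key trick, convolving reference traces with the observed and modeled data, can be verified in a few lines: convolving the observed trace with a modeled reference and the modeled trace with an observed reference gives identical series regardless of the ignition-time error, because convolution commutes. All series below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
w  = rng.normal(size=50)    # unknown source wavelet
g1 = rng.normal(size=200)   # Green's function to the receiver of interest
g2 = rng.normal(size=200)   # Green's function to the reference receiver

shift = 17                  # unknown ignition-time error (samples)
w_obs = np.concatenate([np.zeros(shift), w])

# observed and modeled traces at the receiver and the reference receiver
obs,     syn     = np.convolve(w_obs, g1), np.convolve(w, g1)
obs_ref, syn_ref = np.convolve(w_obs, g2), np.convolve(w, g2)

# source-independent residual: convolve each dataset with the other's reference
lhs = np.convolve(obs, syn_ref)   # = w_obs * g1 * w * g2
rhs = np.convolve(syn, obs_ref)   # = w * g1 * w_obs * g2  (same product)
print(np.max(np.abs(lhs - rhs)))  # ~0: misfit is blind to the ignition time
```

Since lhs and rhs agree whatever the time shift of the true wavelet, a misfit built from their difference depends only on the Green's functions, i.e. on the velocity model and source location, which is what the paper exploits.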
Compton Operator in Quantum Electrodynamics
International Nuclear Information System (INIS)
Garcia, Hector Luna; Garcia, Luz Maria
2015-01-01
In the framework of quantum electrodynamics there exist four basic operators: the electron self-energy, vacuum polarization, vertex correction, and the Compton operator. The first three operators are very important through their relation to renormalization and the Ward identity. The Compton operator is of equal importance, yet free of divergences, and little attention has been given to it. We have calculated the Compton operator and obtained a closed expression for it in the framework of dimensionally continued integration and hypergeometric functions
Uncertainty principles for inverse source problems for electromagnetic and elastic waves
Griesmaier, Roland; Sylvester, John
2018-06-01
In isotropic homogeneous media, far fields of time-harmonic electromagnetic waves radiated by compactly supported volume currents, and elastic waves radiated by compactly supported body force densities can be modelled in very similar fashions. Both are projected restricted Fourier transforms of vector-valued source terms. In this work we generalize two types of uncertainty principles recently developed for far fields of scalar-valued time-harmonic waves in Griesmaier and Sylvester (2017 SIAM J. Appl. Math. 77 154–80) to this vector-valued setting. These uncertainty principles yield stability criteria and algorithms for splitting far fields radiated by collections of well-separated sources into the far fields radiated by individual source components, and for the restoration of missing data segments. We discuss proper regularization strategies for these inverse problems, provide stability estimates based on the new uncertainty principles, and comment on reconstruction schemes. A numerical example illustrates our theoretical findings.
International Nuclear Information System (INIS)
Wang, Wenyan; Han, Bo; Yamamoto, Masahiro
2013-01-01
We propose a new numerical method, based on reproducing kernel Hilbert spaces, to solve an inverse source problem for a two-dimensional fractional diffusion equation, where we are required to determine an x-dependent function in a source term from data at the final time. The exact solution is represented in the form of a series, and the approximate solution is obtained by truncating the series. Furthermore, a technique is proposed to improve some of the existing methods. We prove that the numerical method is convergent under an a priori assumption on the regularity of solutions. The method is simple to implement. Our numerical results show that our method is effective and robust against noise in L^2-space in reconstructing a source function.
Determination of the direction to a source of antineutrinos via inverse beta decay in Double Chooz
Nikitenko, Ya.
2016-11-01
Determining the direction to a source of neutrinos (and antineutrinos) is an important problem for the physics of supernovae and of the Earth. The direction to a source of antineutrinos can be estimated through the reaction of inverse beta decay. We show that the reactor neutrino experiment Double Chooz has unique capabilities to study the antineutrino signal from point-like sources. Contemporary experimental data on antineutrino directionality are reviewed. A rigorous mathematical approach for neutrino direction studies has been developed. Exact expressions for the precision of the simple mean estimator of the neutrino direction have been obtained for normal and exponential distributions, both for a finite sample and in the limiting case of many events.
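The behaviour of the simple mean estimator can be illustrated with a toy Monte Carlo: in inverse beta decay the neutron is emitted slightly forward, so positron-to-neutron displacement vectors carry a weak directional bias, and averaging many of them recovers the source direction. The 5% bias amplitude below is an invented placeholder, not the Double Chooz value:

```python
import numpy as np

rng = np.random.default_rng(2)

true_dir = np.array([1.0, 0.0, 0.0])   # direction toward the reactor
bias = 0.05                            # assumed forward-bias amplitude (toy)

def simulate(n):
    """n positron-to-neutron displacements: isotropic unit vectors
    plus a weak net component along the true direction."""
    iso = rng.normal(size=(n, 3))
    iso /= np.linalg.norm(iso, axis=1, keepdims=True)
    return iso + bias * true_dir

for n in (10**3, 10**5):
    mean = simulate(n).mean(axis=0)            # simple mean estimator
    est = mean / np.linalg.norm(mean)
    err = np.degrees(np.arccos(np.clip(est @ true_dir, -1.0, 1.0)))
    print(f"N={n:>6}: angular error {err:5.1f} deg")
```

The angular error shrinks roughly as 1/sqrt(N), which is the finite-sample behaviour the exact expressions in the paper quantify.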
Long-period noise source inversion in a 3-D heterogeneous Earth
Sager, K.; Ermert, L. A.; Afanasiev, M.; Boehm, C.; Fichtner, A.
2017-12-01
We have implemented a new method for ambient noise source inversion that fully honors finite-frequency wave propagation and 3-D heterogeneous Earth structure. Here, we present results of its first application to the Earth's long-period background signal, the hum, in a period band of around 120-300 s. In addition to being a computationally convenient test case, the hum is also the topic of ongoing research in its own right, because different physical mechanisms have been proposed for its excitation. The broad patterns of this model for Southern and Northern Hemisphere winter are qualitatively consistent with previous long-term studies of the hum sources; however, thanks to methodological improvements, the iterative refinement, and the use of a comparatively extensive dataset, we retrieve a more detailed model in certain locations. In particular, our results support findings that the dominant hum sources are focused along coasts and shelf areas, particularly in the Northern Hemisphere winter, with a possible though not well-constrained contribution of pelagic sources. Additionally, our findings indicate that hum source locations in the ocean, tentatively linked to locally high bathymetry, are important contributors, particularly during Southern Hemisphere winter. These results, in conjunction with synthetic recovery tests and observed cross-correlation waveforms, suggest that hum sources are rather narrowly concentrated in space, with length scales on the order of a few hundred kilometers. Future work includes the extension of the model to the spring and fall seasons and to shorter periods, as well as its use in full-waveform ambient noise inversion for 3-D Earth structure.
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on a relatively dense seismic network. However, for regions covered by sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, the combination of teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is much improved.
Source modeling and inversion with near real-time GPS: a GITEWS perspective for Indonesia
Babeyko, A. Y.; Hoechner, A.; Sobolev, S. V.
2010-07-01
We present the GITEWS approach to source modeling for tsunami early warning in Indonesia. Near-field tsunami warning imposes special requirements on both warning time and the details of source characterization. To meet these requirements, we employ geophysical and geological information to predefine a maximum number of rupture parameters. We discretize the tsunamigenic Sunda plate interface into an ordered grid of patches (150×25) and employ the concept of Green's functions for forward and inverse rupture modeling. Rupture Generator, a forward modeling tool, additionally employs different scaling laws and slip shape functions to construct physically reasonable source models using basic seismic information only (magnitude and epicenter location). GITEWS runs a library of semi- and fully-synthetic scenarios to be extensively employed for system testing as well as for teaching and training warning center personnel. Near real-time GPS observations are a very valuable complement to the local tsunami warning system. Their inversion provides a quick (within a few minutes of an event) estimation of the earthquake magnitude and rupture position and, in case of sufficient station coverage, details of the slip distribution.
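With the plate interface pre-discretized into patches and Green's functions precomputed, the near real-time GPS inversion reduces to a linear least-squares problem d = G·s. A sketch with random placeholder Green's functions; a real system would compute G from an elastic dislocation model for each patch:

```python
import numpy as np

rng = np.random.default_rng(3)

n_patch, n_gps = 12, 30
# Green's functions: surface displacement at each GPS site per unit slip
# on each fault patch (random placeholders standing in for elastic kernels).
G = rng.normal(size=(n_gps, n_patch))

slip_true = np.maximum(rng.normal(2.0, 1.5, size=n_patch), 0.0)  # metres
d = G @ slip_true + 0.005 * rng.normal(size=n_gps)               # GPS offsets

# near real-time estimate: damped least squares (Tikhonov regularization)
lam = 1e-2
A = np.vstack([G, lam * np.eye(n_patch)])
b = np.concatenate([d, np.zeros(n_patch)])
slip_est, *_ = np.linalg.lstsq(A, b, rcond=None)

moment_ratio = slip_est.sum() / slip_true.sum()  # crude potency comparison
print(f"recovered/true total slip = {moment_ratio:.3f}")
```

Because the system is linear, the solve takes milliseconds, which is what makes magnitude and slip estimates available within minutes of an event.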
Dettmer, J.; Benavente, R. F.; Cummins, P. R.
2017-12-01
This work considers probabilistic, non-linear centroid moment tensor inversion of data from earthquakes at teleseismic distances. The moment tensor is treated as deviatoric and centroid location is parametrized with fully unknown latitude, longitude, depth and time delay. The inverse problem is treated as fully non-linear in a Bayesian framework and the posterior density is estimated with interacting Markov chain Monte Carlo methods which are implemented in parallel and allow for chain interaction. The source mechanism and location, including uncertainties, are fully described by the posterior probability density and complex trade-offs between various metrics are studied. These include the percent of double-couple component as well as fault orientation and the probabilistic results are compared to results from earthquake catalogs. Additional focus is on the analysis of complex events which are commonly not well described by a single point source. These events are studied by jointly inverting for multiple centroid moment tensor solutions. The optimal number of sources is estimated by the Bayesian information criterion to ensure parsimonious solutions. [Supported by NSERC.]
Definition of the form of coal spontaneous combustion source as the inverse problem of geoelectrics
Directory of Open Access Journals (Sweden)
Sirota Dmitry
2017-01-01
The paper reviews a method of determining the shape and size of a coal self-heating source on coal pit benches and in coal piles during open-pit coal mining. The method is based on a regularity, found in the 1970s, relating the distribution of the potential of the natural electric field arising in the vicinity of a self-heating center to its temperature. The problem is reduced to the solution of an ill-posed inverse problem of mathematical physics. The study presents the developed algorithm for its solution and the results of numerical simulation.
Klimasewski, A.; Sahakian, V. J.; Baltay, A.; Boatwright, J.; Fletcher, J. B.; Baker, L. M.
2017-12-01
A large source of epistemic uncertainty in Ground Motion Prediction Equations (GMPEs) derives from the path term, currently represented as a simple geometric spreading and intrinsic attenuation term. Including additional physical relationships between the path properties and predicted ground motions would produce more accurate and precise, region-specific GMPEs by reclassifying some of the random, aleatory uncertainty as epistemic. This study focuses on regions of Southern California, using data from the Anza network and the Southern California Seismic Network to create a catalog of events of magnitude 2.5 and larger from 1998 to 2016. The catalog encompasses regions of varying geology and therefore varying path and site attenuation. Within this catalog of events, we investigate several collections of event-region-to-station pairs, each of which share similar origin locations and stations, so that all events have similar paths. Compared with a simple regional GMPE, these paths consistently have high or low residuals. By working with events that have the same path, we can isolate source and site effects, and focus on the remaining residual as path effects. We decompose the recordings into source and site spectra for each unique event and site in our greater Southern California regional database using the inversion method of Andrews (1986). This model represents each natural-log record spectrum as the sum of its natural-log event and site spectra, while constraining each record to a reference site or Brune source spectrum. We estimate a regional, path-specific anelastic attenuation (Q) and site attenuation (t*) from the inversion site spectra, and corner frequency from the inversion event spectra. We then compute the residuals between the observed record data and the inversion model prediction (event*site spectra). This residual is representative of path effects, likely anelastic attenuation along the path that varies from the regional median attenuation. We examine the
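The Andrews (1986)-style decomposition at a single frequency is a linear two-way system, log R_ij = log E_i + log S_j, made unique by fixing a reference site. A minimal synthetic sketch (dimensions and spectra are invented; a real application solves this per frequency band over thousands of records):

```python
import numpy as np

rng = np.random.default_rng(4)
n_ev, n_st = 6, 4

# synthetic natural-log event and site spectra at one frequency
log_event = rng.normal(0.0, 1.0, n_ev)
log_site = rng.normal(0.0, 0.5, n_st)
log_site -= log_site[0]          # reference-site constraint: log S_0 = 0

# every event recorded at every station: log R_ij = log E_i + log S_j
log_rec = log_event[:, None] + log_site[None, :]

# linear system: one row per record, unknowns = [events..., sites 1..n-1]
rows, rhs = [], []
for i in range(n_ev):
    for j in range(n_st):
        row = np.zeros(n_ev + n_st - 1)
        row[i] = 1.0
        if j > 0:                # site 0 is the fixed reference
            row[n_ev + j - 1] = 1.0
        rows.append(row); rhs.append(log_rec[i, j])

sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.allclose(sol[:n_ev], log_event), np.allclose(sol[n_ev:], log_site[1:]))
```

Without the reference constraint the system has a one-parameter ambiguity (a constant can shift between event and site terms), which is why a reference site or Brune spectrum must be imposed, exactly as the abstract states.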
Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy
2014-05-01
The program package escript has been designed for solving mathematical modeling problems using python, see Gross et al. (2013). Its development and maintenance have been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs) - an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents a programming environment to the user which is easy to use even for complex models. Because implementations are independent of the data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties, see Gross & Kemp (2013). This first-optimize-then-discretize approach avoids the assemblage of the (in general dense) sensitivity matrix, as used in conventional approaches where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize). In this paper we will discuss the mathematical framework for
Full-waveform inversion using the excitation representation of the source wavefield
Kalita, Mahesh
2016-09-06
Full waveform inversion (FWI) is an iterative method of data-fitting, aiming at high-resolution recovery of the unknown model parameters. However, it is a cumbersome process, requiring a long computational time and large memory space/disc storage. One of the reasons for this computational limitation is the gradient calculation step. Based on the adjoint-state method, it involves the temporal cross-correlation of the forward-propagated source wavefield with the backward-propagated residuals, in which we usually need to store the source wavefield, or include an extra extrapolation step to propagate the source wavefield from its storage at the boundary. We propose, alternatively, an amplitude-excitation gradient calculation based on the excitation imaging condition concept, which represents the source wavefield history by a single, specifically the most energetic, arrival. An excitation-based Born modeling allows us to derive the adjoint operation. In this case, the source wavelet is injected by a cross-correlation step applied to the data residual directly. Representing the source wavefield through the excitation amplitude and time, we reduce the large requirements for both storage and computational time. We demonstrate the application of this approach on a 2-layer model with an anomaly and the Marmousi II model.
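The storage saving of the excitation representation is easy to see: the full source-wavefield history is collapsed to one amplitude and one time per grid point. In the sketch below the wavefield is a random stand-in for an actual finite-difference simulation:

```python
import numpy as np

rng = np.random.default_rng(5)
nt, nx = 400, 120
u = rng.normal(size=(nt, nx))   # stand-in for the source wavefield u(t, x)

# excitation representation: keep only the most energetic arrival per point
excitation_amp  = np.abs(u).max(axis=0)     # peak |amplitude| at each x
excitation_time = np.abs(u).argmax(axis=0)  # time index of that peak

# storage drops from nt*nx wavefield samples to 2*nx numbers
print(u.size, "->", excitation_amp.size + excitation_time.size)
```

The gradient is then built by cross-correlating the back-propagated residuals against this single-arrival representation at the stored excitation times, instead of against the full history, which removes both the storage and the re-extrapolation cost mentioned in the abstract.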
Source-Type Inversion of the September 03, 2017 DPRK Nuclear Test
Dreger, D. S.; Ichinose, G.; Wang, T.
2017-12-01
On September 3, 2017, the DPRK announced a nuclear test at their Punggye-ri site. This explosion registered mb 6.3 and was well recorded by global and regional seismic networks. We apply the source-type inversion method (e.g. Ford et al., 2012; Nayak and Dreger, 2015) and the MDJ2 seismic velocity model (Ford et al., 2009) to invert low-frequency (0.02 to 0.05 Hz) complete three-component waveforms and first-motion polarities to map the goodness of fit in source-type space. We have used waveform data from the New China Digital Seismic Network (BJT, HIA, MDJ), the Korean Seismic Network (TJN), and the Global Seismograph Network (INCN, MAJO). From this analysis, the event discriminates as an explosion. For a pure explosion model, we find a scalar seismic moment of 5.77e+16 Nm (Mw 5.1); however, this model fails to fit the large Love waves registered on the transverse components. The best-fitting complete solution finds a total moment of 8.90e+16 Nm (Mw 5.2) that is decomposed as 53% isotropic, 40% double-couple, and 7% CLVD, although the range of isotropic moment from the source-type analysis indicates that it could be as high as 60-80%. The isotropic moment in the source-type inversion is 4.75e+16 Nm (Mw 5.05). Assuming elastic moduli from model MDJ2, the explosion cavity radius is approximately 51 m, and the yield estimated using Denny and Johnson (1991) is 246 kt. Approximately 8.5 minutes after the blast a second seismic event was registered, which is best characterized as a vertically closing horizontal crack, perhaps representing the partial collapse of the blast cavity and/or a service tunnel. The total moment of the collapse is 3.34e+16 Nm (Mw 4.95). The volumetric moment of the collapse is 1.91e+16 Nm, approximately 1/3 to 1/2 of the explosive moment. German TerraSAR-X observations of deformation (Wang et al., 2017) reveal large radial outward motions consistent with expected deformation for an explosive source, but lack significant vertical motions above the
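The isotropic/double-couple/CLVD percentages quoted above come from a moment tensor decomposition. Conventions differ between authors, and the variant sketched here (Jost & Herrmann-style, sorting deviatoric eigenvalues by magnitude) is not necessarily the exact one used in the study:

```python
import numpy as np

def decompose(m):
    """ISO/DC/CLVD percentages of a 3x3 moment tensor (one common
    convention; other published variants give somewhat different numbers)."""
    iso = np.trace(m) / 3.0
    dev = m - iso * np.eye(3)
    eig = np.linalg.eigvalsh(dev)
    eig = eig[np.argsort(np.abs(eig))]           # sort by |eigenvalue|
    eps = 0.0 if abs(eig[2]) < 1e-12 else -eig[0] / abs(eig[2])
    denom = abs(iso) + abs(eig[2])
    p_iso = abs(iso) / denom if denom else 0.0
    p_clvd = (1.0 - p_iso) * 2.0 * abs(eps)
    p_dc = (1.0 - p_iso) * (1.0 - 2.0 * abs(eps))
    return 100.0 * np.array([p_iso, p_dc, p_clvd])

print(decompose(np.eye(3)))                   # pure explosion: all isotropic
print(decompose(np.diag([1.0, -1.0, 0.0])))   # pure double couple
```

An explosion-like event plots near the isotropic vertex of the source-type diagram, which is the basis of the discrimination described in the abstract.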
1991-01-01
This photograph shows the Compton Gamma-Ray Observatory (GRO) being deployed by the Remote Manipulator System (RMS) arm aboard the Space Shuttle Atlantis during the STS-37 mission in April 1991. The GRO reentered the Earth's atmosphere and ended its successful mission in June 2000. For nearly 9 years, the GRO Burst and Transient Source Experiment (BATSE), designed and built by the Marshall Space Flight Center (MSFC), kept an unblinking watch on the universe to alert scientists to the invisible, mysterious gamma-ray bursts that had puzzled them for decades. By studying gamma-rays from objects like black holes, pulsars, quasars, neutron stars, and other exotic objects, scientists could discover clues to the birth, evolution, and death of stars, galaxies, and the universe. The gamma-ray instrument was one of four major science instruments aboard the Compton. It consisted of eight detectors, or modules, located at each corner of the rectangular satellite to simultaneously scan the entire universe for bursts of gamma-rays ranging in duration from fractions of a second to minutes. In January 1999, the instrument, via the Internet, cued a computer-controlled telescope at Los Alamos National Laboratory in Los Alamos, New Mexico, within 20 seconds of registering a burst. With this capability, the gamma-ray experiment came to serve as a gamma-ray burst alert for the Hubble Space Telescope, the Chandra X-Ray Observatory, and major ground-based observatories around the world. Thirty-seven universities, observatories, and NASA centers in 19 states, and 11 more institutions in Europe and Russia, participated in the BATSE science program.
Landquake dynamics inferred from seismic source inversion: Greenland and Sichuan events of 2017
Chao, W. A.
2017-12-01
In June 2017 two catastrophic landquake events occurred in Greenland and Sichuan. The Greenland event led to a tsunami hazard in the small town of Nuugaatsiaq, while the landquake in Sichuan hit the town directly, resulting in over 100 deaths. Both events generated strong seismic signals recorded by the real-time global seismic network. I adopt an inversion algorithm to derive the landquake force time history (LFH) from the long-period waveforms, so that the landslide volume (~76 million m3) can be rapidly estimated, facilitating tsunami-wave modeling for early-warning purposes. Based on an integrated approach involving tsunami forward simulation and seismic waveform inversion, this study has significant implications for issuing actionable warnings before hazardous tsunami waves strike populated areas. A two-single-force (SF) mechanism (two-block model) yields the best explanation for the Sichuan event, which demonstrates that a secondary event (seismically inferred volume: ~8.2 million m3) may have been mobilized by collapsing mass from the initial rock avalanche (~5.8 million m3), likely causing a catastrophic disaster. The later source, with a force magnitude of 0.9967×10^11 N, occurred 70 seconds after the first mass movement. In contrast, the first event had a smaller force magnitude of 0.8116×10^11 N. In conclusion, seismically inferred physical parameters will substantially contribute to improving our understanding of landquake source mechanisms and mitigating similar hazards in other parts of the world.
Compton suppression naa in the analysis of food and beverages
International Nuclear Information System (INIS)
Ahmed, Y.A.; Ewa, I.O.B.; Umar, I.M.; Funtua, I.I.; Lanberger, S.; O'kelly, D.J.; Braisted, J.D.
2009-01-01
Applicability and performance of the Compton suppression method in the analysis of food and beverages was re-established in this study. Using 137Cs and 60Co point sources, the Compton Suppression Factors (SF), Compton Reduction Factors (RF), Peak-to-Compton ratio (P/C), Compton Plateau (Cpl), and Compton Edge (Ce) were determined for each of the two sources. The natural background reduction factors in the anticoincidence mode compared to the normal mode were evaluated. The reported RF values of the various Compton spectrometers for a 60Co source at energies of 50-210 keV (backscattering region), 600 keV (Compton edge corresponding to the 1173.2 keV gamma ray), and 1110 keV (Compton edge corresponding to the 1332.5 keV gamma ray) were compared with those of the present work. Similarly, the SF values of the spectrometers for a 137Cs source were compared at the backscattered energy region (SFb = 191-210 keV), Compton plateau (SFpl = 350-370 keV), and Compton edge (SFe = 471-470 keV), and all were found to follow a similar trend. We also compared peak reduction ratios for the two cobalt energies (1173.2 and 1332.5 keV) with the ones reported in the literature, and the two results agree well. Applicability of the method to food and beverages was put to the test for twenty-one major, minor, and trace elements (Ba, Sr, I, Br, Cu, V, Mg, Na, Cl, Mn, Ca, Sn, K, Cd, Zn, As, Sb, Ni, Cs, Fe, and Co) commonly found in food, milk, tea, and tobacco. The elements were assayed using five National Institute of Standards and Technology (NIST) certified reference materials (non-fat powdered milk, apple leaves, tomato leaves, and citrus leaves). The results obtained show good agreement with NIST certified values, indicating that the method is suitable for simultaneous determination of micro-nutrients, macro-nutrients, and heavy elements in food and beverages without undue interference problems.
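The suppression factor reported above is, at its core, a ratio of counts between the normal and anticoincidence acquisition modes over a chosen energy window. A minimal sketch with toy spectra and hypothetical channel windows (not the authors' data) illustrates the computation:

```python
import numpy as np

def suppression_factor(normal, suppressed, lo, hi):
    """Ratio of summed counts in channel window [lo, hi) between the
    normal-mode and anticoincidence (suppressed) spectra.
    Values > 1 indicate effective Compton suppression."""
    n = np.asarray(normal)[lo:hi].sum()
    s = np.asarray(suppressed)[lo:hi].sum()
    return n / s

# Toy 1024-channel spectra: a flat Compton plateau reduced ~5x
# by the anticoincidence shield (illustrative numbers only)
normal = np.full(1024, 100.0)
suppressed = np.full(1024, 20.0)
print(suppression_factor(normal, suppressed, 350, 370))  # -> 5.0
```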
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined from additional boundary conditions. Unlike the existing methods found in the literature, which usually employ a first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach
Asiri, Sharefa M.
2013-05-25
Observers are well known in the theory of dynamical systems. They are used to estimate the states of a system from some measurements. Recently, however, observers have also been developed to estimate unknowns of systems governed by partial differential equations. Our aim is to design an observer to solve an inverse source problem for a one-dimensional wave equation. Firstly, the problem is discretized in both space and time, and then an adaptive observer based on partial field measurements (i.e., measurements taken from the solution of the wave equation) is applied to estimate both the states and the source. We assess the effectiveness of this observer in both noise-free and noisy cases. In each case, numerical simulations are provided to illustrate the effectiveness of this approach. Finally, we compare the performance of the observer approach with the Tikhonov regularization approach.
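For context, the Tikhonov regularization approach that the observer is compared against solves a penalized least-squares problem. A minimal sketch, with a hypothetical forward matrix standing in for the discretized source-to-measurement map of the wave equation (illustrative only), shows the standard normal-equations form:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized least squares: minimize ||Ax - b||^2 + lam*||x||^2,
    solved via the normal equations (A^T A + lam*I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned toy forward operator standing in for the discretized
# wave-equation measurement map (hypothetical, for illustration only)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ np.diag(1.0 / np.arange(1, 21) ** 2)
x_true = np.ones(20)
b = A @ x_true + 1e-3 * rng.standard_normal(50)  # noisy measurements
x_reg = tikhonov(A, b, lam=1e-6)                 # regularized source estimate
print(x_reg.shape)  # (20,)
```

In practice the regularization parameter `lam` trades off data fit against noise amplification, which is the tuning burden the observer approach aims to sidestep.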
Seismic signal simulation and study of underground nuclear sources by moment inversion
International Nuclear Information System (INIS)
Crusem, R.
1986-09-01
Some problems of underground nuclear explosions are examined from the seismological point of view. In the first part, a model is developed for mean seismic propagation through the lagoon of Mururoa atoll and for the calculation of synthetic seismograms (at intermediate distances: 5 to 20 km) by summation of discrete wave numbers. In the second part, this ground model is used with a linear inversion method of seismic moments for estimation of the elastic source terms equivalent to the nuclear source. Only the isotropic part is investigated; solution stability is increased by using spectral smoothing and a minimal-phase hypothesis. Some examples of applications are presented: total energy estimation of a nuclear explosion, and simulation of mechanical effects induced by an underground explosion.
Efficient full waveform inversion using the excitation representation of the source wavefield
Kalita, Mahesh
2017-05-16
Full waveform inversion (FWI) is an iterative method of data fitting, aiming at high-resolution recovery of the unknown model parameters. However, its conventional implementation is a cumbersome process, requiring a long computational time and large memory space/disk storage. One of the reasons for this computational limitation is the gradient calculation step. Based on the adjoint state method, it involves the temporal cross-correlation of the forward-propagated source wavefield with the backward-propagated residuals, in which we usually need to store the source wavefield, or include an extra extrapolation step to propagate the source wavefield from its storage at the boundary. We propose, alternatively, an amplitude excitation gradient calculation based on the excitation imaging condition concept that represents the source wavefield history by a single, specifically the most energetic, arrival. An excitation-based Born modelling allows us to derive the adjoint operation. In this case, the source wavelet is injected by a cross-correlation step applied to the data residual directly. Representing the source wavefield through the excitation amplitude and time, we reduce the large requirements for both storage and computational time. We demonstrate the application of this approach on a two-layer model with an anomaly, the Marmousi II model, and a marine data set acquired by CGG.
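The excitation representation described above amounts to storing, for each grid point, only the most energetic amplitude of the source wavefield and the time at which it occurs, rather than the full time history. A minimal sketch of this compression step (illustrative only, not the authors' implementation):

```python
import numpy as np

def excitation_of_wavefield(wavefield):
    """Compress a source wavefield u[t, x] to its excitation representation:
    per grid point, the most energetic amplitude and its time index.
    Storage drops from nt*nx samples to 2*nx."""
    t_ex = np.abs(wavefield).argmax(axis=0)                      # excitation time
    a_ex = np.take_along_axis(wavefield, t_ex[None, :], axis=0)[0]  # amplitude
    return a_ex, t_ex

# Toy wavefield: 200 time steps, 50 grid points, a single moving wavefront
nt, nx = 200, 50
u = np.zeros((nt, nx))
arrival = 40 + np.arange(nx)            # arrival time advances with distance
u[arrival, np.arange(nx)] = 2.0         # most energetic arrival at each point
amp, tex = excitation_of_wavefield(u)
print(np.all(amp == 2.0), np.array_equal(tex, arrival))  # -> True True
```

The gradient is then formed by cross-correlating this compressed representation with the back-propagated residuals, which is where the storage and compute savings come from.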
Nelson, N.; Azmy, Y.; Gardner, R. P.; Mattingly, J.; Smith, R.; Worrall, L. G.; Dewji, S.
2017-11-01
Detector response functions (DRFs) are often used for inverse analysis. We compute the DRF of a sodium iodide (NaI) nuclear material holdup field detector using the code named g03, developed by the Center for Engineering Applications of Radioisotopes (CEAR) at NC State University. Three measurement campaigns were performed in order to validate the DRFs constructed by g03: on-axis detection of calibration sources, off-axis measurements of a highly enriched uranium (HEU) disk, and on-axis measurements of the HEU disk with steel plates inserted between the source and the detector to provide attenuation. Furthermore, this work quantifies the uncertainty of the Monte Carlo simulations used in and with g03, as well as the uncertainties associated with each semi-empirical model employed in the full DRF representation. Overall, for the calibration source measurements, the DRF prediction of the full-energy peak region of the response was good, i.e., within two standard deviations of the experimental response. In contrast, the DRF tended to overestimate the Compton continuum by about 45-65% due to inadequate tuning of the electron range multiplier fit variable that empirically represents physics associated with electron transport that is not modeled explicitly in g03. For the HEU disk measurements, computed DRF responses tended to significantly underestimate (by more than 20%) the secondary full-energy peaks (any peak of lower energy than the highest-energy peak computed) due to scattering in the detector collimator and aluminum can, which is not included in the g03 model. For all of the Monte Carlo simulations, we ran a sufficiently large number of histories to ensure that the statistical uncertainties were lower than the Poisson uncertainties of their experimental counterparts. The uncertainties associated with least-squares fits to the experimental data tended to have parameter relative standard deviations lower than the peak channel relative standard
Cannavo', Flavio; Scandura, Danila; Palano, Mimmo; Musumeci, Carla
2014-05-01
Seismicity and ground deformation represent the principal geophysical methods for volcano monitoring and provide important constraints on subsurface magma movements. The occurrence of migrating seismic swarms, as observed at several volcanoes worldwide, is commonly associated with dike intrusions. In addition, on active volcanoes, (de)pressurization and/or intrusion of magmatic bodies stresses and deforms the surrounding crustal rocks, often causing earthquakes randomly distributed in time within a volume extending about 5-10 km from the wall of the magmatic bodies. Although advances in space-based geodetic and seismic networks have significantly improved monitoring of an increasing number of volcanoes worldwide in recent decades, quantitative models relating deformation and seismicity are not common. The observation of several episodes of volcanic unrest throughout the world, where the movement of magma through the shallow crust was able to produce local rotation of the ambient stress field, introduces an opportunity to improve the estimate of the parameters of a deformation source. In particular, during these episodes of volcanic unrest a radial pattern of P-axes of the focal mechanism solutions, similar to that of ground deformation, has been observed. Therefore, taking into account additional information from focal mechanism data, we propose a novel approach to volcanic source modeling based on the joint inversion of deformation and focal plane solutions, assuming that both observations are due to the same source. The methodology is first verified against a synthetic dataset of surface deformation and strain within the medium, and then applied to real data from an unrest episode that occurred before the May 13th 2008 eruption at Mt. Etna (Italy). The main results clearly indicate that the joint inversion improves the accuracy of the estimated source parameters by about 70%. The statistical tests indicate that the source depth is the parameter with the highest
Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions
Kim, A.; Dreger, D.; Larsen, S.
2008-12-01
We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects the finite fault inverse solutions. Various studies (e.g. Michaels and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. Also, the fault zone at Parkfield is wide, as evidenced by mapped surface faults and by where surface slip and creep occurred in the 1966 and the 2004 Parkfield earthquakes. For high-resolution images of the rupture process, it is necessary to include the accurate 3D velocity structure in the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions using both 1D and 3D Green's functions for the 1989 Loma Prieta earthquake using the same source parameterization and data but different Green's functions, and found that the models were quite different. This indicates that the choice of the velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to the slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green's functions. In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0
Bagnardi, M.; Hooper, A. J.
2017-12-01
Inversions of geodetic observational data, such as Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) measurements, are often performed to obtain information about the source of surface displacements. Inverse problem theory has been applied to study magmatic processes, the earthquake cycle, and other phenomena that cause deformation of the Earth's interior and of its surface. Together with increasing improvements in data resolution, both spatial and temporal, new satellite missions (e.g., European Commission's Sentinel-1 satellites) are providing the unprecedented opportunity to access space-geodetic data within hours of their acquisition. To truly take advantage of these opportunities we must be able to interpret geodetic data in a rapid and robust manner. Here we present the open-source Geodetic Bayesian Inversion Software (GBIS; available for download at http://comet.nerc.ac.uk/gbis). GBIS is written in Matlab and offers a series of user-friendly and interactive pre- and post-processing tools. For example, an interactive function has been developed to estimate the characteristics of noise in InSAR data by calculating the experimental semi-variogram. The inversion software uses a Markov-chain Monte Carlo algorithm, incorporating the Metropolis-Hastings algorithm with adaptive step size, to efficiently sample the posterior probability distribution of the different source parameters. The probabilistic Bayesian approach allows the user to retrieve estimates of the optimal (best-fitting) deformation source parameters together with the associated uncertainties produced by errors in the data (and, by scaling, errors in the model). The current version of GBIS (V1.0) includes fast analytical forward models for magmatic sources of different geometry (e.g., point source, finite spherical source, prolate spheroid source, penny-shaped sill-like source, and dipping dike with uniform opening) and for dipping faults with uniform
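The core of a sampler of the kind GBIS uses is a random-walk Metropolis-Hastings loop over an unnormalized log posterior. The following generic one-parameter sketch omits the adaptive step size and the actual geodetic forward models; the Gaussian "posterior" for a source depth is purely illustrative:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step, seed=0):
    """Random-walk Metropolis-Hastings: draw samples from a distribution
    given only its unnormalized log density."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        x_new = x + step * rng.standard_normal()   # symmetric proposal
        lp_new = log_post(x_new)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return np.array(samples)

# Toy posterior: a source depth constrained to ~5 km with 0.3 km spread
samples = metropolis_hastings(lambda d: -0.5 * ((d - 5.0) / 0.3) ** 2,
                              x0=3.0, n_samples=20000, step=0.5)
print(round(samples[5000:].mean(), 1))  # posterior mean, near 5.0
```

Discarding the first samples as burn-in, the histogram of the remainder approximates the posterior, from which best-fitting values and credible intervals are read off, as GBIS does for each source parameter.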
Reproducible Hydrogeophysical Inversions through the Open-Source Library pyGIMLi
Wagner, F. M.; Rücker, C.; Günther, T.
2017-12-01
Many tasks in applied geosciences cannot be solved by a single measurement method and require the integration of geophysical, geotechnical and hydrological methods. In the emerging field of hydrogeophysics, researchers strive to gain quantitative information on process-relevant subsurface parameters by means of multi-physical models, which simulate the dynamic process of interest as well as its geophysical response. However, such endeavors are associated with considerable technical challenges, since they require coupling of different numerical models. This represents an obstacle for many practitioners and students. Even technically versatile users tend to build individually tailored solutions by coupling different (and potentially proprietary) forward simulators at the cost of scientific reproducibility. We argue that the reproducibility of studies in computational hydrogeophysics, and therefore the advancement of the field itself, requires versatile open-source software. To this end, we present pyGIMLi - a flexible and computationally efficient framework for modeling and inversion in geophysics. The object-oriented library provides management for structured and unstructured meshes in 2D and 3D, finite-element and finite-volume solvers, various geophysical forward operators, as well as Gauss-Newton based frameworks for constrained, joint and fully-coupled inversions with flexible regularization. In a step-by-step demonstration, it is shown how the hydrogeophysical response of a saline tracer migration can be simulated. Tracer concentration data from boreholes and measured voltages at the surface are subsequently used to estimate the hydraulic conductivity distribution of the aquifer within a single reproducible Python script.
A matrix-inversion method for gamma-source mapping from gamma-count data - 59082
International Nuclear Information System (INIS)
Bull, Richard K.; Adsley, Ian; Burgess, Claire
2012-01-01
Gamma ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted to an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array. The activity array is then obtained via matrix inversion. The method was tested on artificially-created arrays of count-data onto which statistical noise had been added. The method was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and to activated Nimonic springs amongst fuel-element debris in vaults at a nuclear plant. (authors)
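The scheme described above can be sketched in a few lines: given a response matrix R linking the activity array to the count array (counts = R @ activities), the activity array is recovered by solving the linear system. The response values below are hypothetical stand-ins, not Microshield output:

```python
import numpy as np

def activities_from_counts(response, counts):
    """Recover source activities from gamma-count data, given a square
    response matrix R such that counts = R @ activities."""
    return np.linalg.solve(response, counts)

rng = np.random.default_rng(1)
# Hypothetical 4-cell survey: diagonal-dominant response, i.e. each
# detector position sees its own cell strongly and neighbours weakly
R = 0.05 * rng.uniform(size=(4, 4)) + np.diag([1.0, 1.2, 0.9, 1.1])
a_true = np.array([10.0, 0.0, 5.0, 2.0])              # activities (arb. units)
counts = R @ a_true + rng.normal(scale=0.01, size=4)  # add counting noise
print(np.round(activities_from_counts(R, counts), 1))
```

As in the paper's test on artificial data, the recovered array stays close to the true one as long as the statistical noise is small relative to the counts and R is well conditioned.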
Rotatable spin-polarized electron source for inverse-photoemission experiments
International Nuclear Information System (INIS)
Stolwijk, S. D.; Wortelen, H.; Schmidt, A. B.; Donath, M.
2014-01-01
We present a ROtatable Spin-polarized Electron source (ROSE) for the use in spin- and angle-resolved inverse-photoemission (SR-IPE) experiments. A key feature of the ROSE is a variable direction of the transversal electron beam polarization. As a result, the inverse-photoemission experiment becomes sensitive to two orthogonal in-plane polarization directions, and, for nonnormal electron incidence, to the out-of-plane polarization component. We characterize the ROSE and test its performance on the basis of SR-IPE experiments. Measurements on magnetized Ni films on W(110) serve as a reference to demonstrate the variable spin sensitivity. Moreover, investigations of the unoccupied spin-dependent surface electronic structure of Tl/Si(111) highlight the capability to analyze complex phenomena like spin rotations in momentum space. Essentially, the ROSE opens the way to further studies on complex spin-dependent effects in the field of surface magnetism and spin-orbit interaction at surfaces
Adjoint Inversion for Extended Earthquake Source Kinematics From Very Dense Strong Motion Data
Ampuero, J. P.; Somala, S.; Lapusta, N.
2010-12-01
Addressing key open questions about earthquake dynamics requires a radical improvement of the robustness and resolution of seismic observations of large earthquakes. Proposals for a new generation of earthquake observation systems include the deployment of “community seismic networks” of low-cost accelerometers in urban areas and the extraction of strong ground motions from high-rate optical images of the Earth's surface recorded by a large space telescope in geostationary orbit. Both systems could deliver strong motion data with a spatial density orders of magnitude higher than current seismic networks. In particular, a “space seismometer” could sample the seismic wave field at a spatio-temporal resolution of 100 m, 1 Hz over areas several hundred km wide, with an amplitude resolution of a few cm/s in ground velocity. The amount of data to process would be immensely larger than what current extended source inversion algorithms can handle, which hampers the quantitative assessment of the cost-benefit trade-offs that can guide the practical design of the proposed earthquake observation systems. We report here on the development of a scalable source imaging technique based on iterative adjoint inversion and its application to the proof-of-concept of a space seismometer. We generated synthetic ground motions for M7 earthquake rupture scenarios based on dynamic rupture simulations on a vertical strike-slip fault embedded in an elastic half-space. The scenarios include increasing levels of complexity and interesting features such as supershear rupture speed. The resulting ground shaking is then processed according to what would be captured by an optical satellite. Based on the resulting data, we perform source inversion by an adjoint/time-reversal method. The gradient of a cost function quantifying the waveform misfit between data and synthetics is efficiently obtained by applying the time-reversed ground velocity residuals as surface force sources, back
International Nuclear Information System (INIS)
Beauchard, K; Cannarsa, P; Yamamoto, M
2014-01-01
The approach to Lipschitz stability for uniformly parabolic equations introduced by Imanuvilov and Yamamoto in 1998 based on Carleman estimates, seems hard to apply to the case of Grushin-type operators of interest to this paper. Indeed, such estimates are still missing for parabolic operators degenerating in the interior of the space domain. Nevertheless, we are able to prove Lipschitz stability results for inverse source problems for such operators, with locally distributed measurements in an arbitrary space dimension. For this purpose, we follow a mixed strategy which combines the approach due to Lebeau and Robbiano, relying on Fourier decomposition and Carleman inequalities for heat equations with non-smooth coefficients (solved by the Fourier modes). As a corollary, we obtain a direct proof of the observability of multidimensional Grushin-type parabolic equations, with locally distributed observations—which is equivalent to null controllability with locally distributed controls. (paper)
Wang, R.; Gu, Y. J.; Schultz, R.; Kim, A.; Chen, Y.
2015-12-01
During the past four years, the number of earthquakes with magnitudes greater than three has substantially increased in the southern section of the Western Canada Sedimentary Basin (WCSB). While some of these events are likely associated with tectonic forces, especially along the foothills of the Canadian Rockies, a significant fraction occurred in previously quiescent regions and has been linked to wastewater disposal or hydraulic fracturing. A proper assessment of the origin and source properties of these 'induced earthquakes' requires careful analyses and modeling of regional broadband data, which steadily improved during the past 8 years due to the recent establishment of regional broadband seismic networks such as CRANE, RAVEN and TD. Several earthquakes, especially those close to fracking activities (e.g. the town of Fox Creek, Alberta), are analyzed. Our preliminary full moment tensor inversion results show maximum horizontal compressional orientations (P-axis) along the northeast-southwest orientation, which agree with the regional stress directions from borehole breakout data and the P-axes of historical events. The decomposition of those moment tensors shows evidence of a strike-slip mechanism with near-vertical fault plane solutions, which is comparable to the focal mechanisms of injection-induced earthquakes in Oklahoma. Minimal isotropic components have been observed, while a modest percentage of compensated-linear-vector-dipole (CLVD) components, which have been linked to fluid migration, may be required to match the waveforms. To further evaluate the non-double-couple components, we compare the outcomes of full, deviatoric and pure double couple (DC) inversions using multiple frequency ranges and phases. Improved location and depth information from a novel grid search greatly assists the identification and classification of earthquakes in potential connection with fluid injection or extraction. Overall, a systematic comparison of the source attributes of
Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments
Directory of Open Access Journals (Sweden)
Gianluca Gennarelli
2017-10-01
Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still a subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer thanks to receiving sensor arrays which are deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.
Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K
2010-12-01
Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can reconstruct efficiently the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources but it does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang
2017-01-01
The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, in its present form the inverse method cannot account for complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method is proposed that can account for complex field-tissue interactions in the inverse design of the source current density related to an irregularly structured electromagnetic target field. A Huygens' equivalent surface is established as a bridge combining the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method by considering the complex field-tissue interactions within the human body model. The obtained magnetic field distributed on the Huygens' equivalent surface is regarded as the next target. The current density on the designated source surface is then derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.
Directory of Open Access Journals (Sweden)
M. Bauwens
2016-08-01
As formaldehyde (HCHO) is a high-yield product in the oxidation of most volatile organic compounds (VOCs) emitted by fires, vegetation, and anthropogenic activities, satellite observations of HCHO are well suited to inform us on the spatial and temporal variability of the underlying VOC sources. The long record of space-based HCHO column observations from the Ozone Monitoring Instrument (OMI) is used to infer emission flux estimates from pyrogenic and biogenic volatile organic compounds (VOCs) on the global scale over 2005–2013. This is realized through the method of source inverse modeling, which consists in the optimization of emissions in a chemistry-transport model (CTM) in order to minimize the discrepancy between the observed and modeled HCHO columns. The top-down fluxes are derived in the global CTM IMAGESv2 by an iterative minimization algorithm based on the full adjoint of IMAGESv2, starting from a priori emission estimates provided by the newly released GFED4s (Global Fire Emission Database, version 4s) inventory for fires, and by the MEGAN-MOHYCAN inventory for isoprene emissions. The top-down fluxes are compared to two independent inventories each for fires (GFAS and FINNv1.5) and for isoprene emissions (MEGAN-MACC and GUESS-ES). The inversion indicates a moderate decrease (ca. 20%) in the average annual global fire and isoprene emissions, from 2028 Tg C in the a priori to 1653 Tg C for burned biomass, and from 343 to 272 Tg for isoprene fluxes. Those estimates are acknowledged to depend on the accuracy of the formaldehyde data, as well as on the assumed fire emission factors and the oxidation mechanisms leading to HCHO production. Strongly decreased top-down fire fluxes (30–50%) are inferred in the peak fire season in Africa and during years with strong a priori fluxes associated with forest fires in Amazonia (in 2005, 2007, and 2010), bushfires in Australia (in 2006 and 2011), and peat burning in Indonesia (in 2006 and 2009), whereas
Reinwald, Michael; Bernauer, Moritz; Igel, Heiner; Donner, Stefanie
2016-10-01
With the prospect of seismic equipment being able to measure rotational ground motions in a wide frequency and amplitude range in the near future, we engage in the question of how this type of ground motion observation can be used to solve the seismic source inverse problem. In this paper, we focus on the question of whether finite-source inversion can benefit from additional observations of rotational motion. Keeping the overall number of traces constant, we compare observations from a surface seismic network with 44 three-component translational sensors (classic seismometers) with those obtained with 22 six-component sensors (which additionally record three-component rotational motions). Synthetic seismograms are calculated for known finite-source properties. The corresponding inverse problem is posed in a probabilistic way, using the Shannon information content to measure how strongly the observations constrain the seismic source properties. We minimize the influence of the source-receiver geometry around the fault by statistically analyzing six-component inversions with a random distribution of receivers. Since our previous results were achieved with a regular spacing of the receivers, we try to answer the question of whether the results depend on the spatial distribution of the receivers. The results show that with the six-component subnetworks, kinematic source inversions for source properties (such as rupture velocity, rise time, and slip amplitudes) are not only equally successful (which would already be beneficial, given the substantially reduced logistics of installing half the number of sensors), but inversions for some source properties are almost always statistically improved. This can be attributed to the fact that the (in particular vertical) gradient information is contained in the additional motion components. We compare these effects for strike-slip and normal-faulting type sources and confirm that the increase in inversion quality for kinematic source parameters is
Compton scattering at high intensities
Energy Technology Data Exchange (ETDEWEB)
Heinzl, Thomas, E-mail: thomas.heinzl@plymouth.ac.u [University of Plymouth, School of Mathematics and Statistics, Drake Circus, Plymouth PL4 8AA (United Kingdom)
2009-12-01
High-intensity Compton scattering takes place when an electron beam is brought into collision with a high power laser. We briefly review the main intensity signatures using the formalism of strong-field quantum electrodynamics.
Compton suppression gamma ray spectrometry
International Nuclear Information System (INIS)
Landsberger, S.; Iskander, F.Y.; Niset, M.; Heydorn, K.
2002-01-01
In the past decade there have been many studies using Compton suppression methods in routine neutron activation analysis, as well as in the traditional role of low-level gamma-ray counting of environmental samples. On a separate path, many new PC-based software packages have been developed to enhance photopeak fitting. Although the newer PC-based algorithms have seen significant improvements, they are still of limited use for weak gamma-ray lines in natural samples or in neutron-activated samples with very high Compton backgrounds. We have completed a series of experiments to show the usefulness of Compton suppression. We have also shown the pitfalls of using Compton suppression methods at high counting deadtimes, as in the case of neutron-activated samples. We have also investigated whether counting statistics are the same in the suppressed and normal modes. Results are presented from four separate experiments. (author)
Proceedings of the Fourth Compton Symposium. Proceedings
International Nuclear Information System (INIS)
Dermer, C.D.; Strickman, M.S.; Kurfess, J.D.
1997-01-01
These proceedings represent the papers presented at the Fourth Compton Symposium held in Williamsburg, Virginia in April, 1997. This symposium gives the latest developments in gamma-ray astronomy and summarizes the results obtained by the Compton Gamma Ray Observatory. One of the missions of the Observatory has been the study of physical processes taking place in the most dynamic sites in the Universe, including supernovae, novae, pulsars, black holes, active galaxies, and gamma-ray bursts. The energies covered range from the hard X-ray to gamma-ray regions, from 15 keV to 30 GeV. The Burst and Transient Source Experiment (BATSE) measures brightness variations in gamma-ray bursts and solar flares. The Oriented Scintillation Spectrometer Experiment (OSSE) measures the spectral output of astrophysical sources in the 0.05 to 10 MeV range. The Imaging Compton Telescope (COMPTEL) detects gamma-rays and performs a sky survey in the energy range 1 to 30 MeV. The Energetic Gamma Ray Experiment Telescope (EGRET) covers the broadest energy range, from 20 MeV to 30 GeV. The papers presented result from all of the above. There were 249 papers presented and out of these, 6 have been abstracted for the Energy, Science and Technology database
Weak Deeply Virtual Compton Scattering
International Nuclear Information System (INIS)
Ales Psaker; Wolodymyr Melnitchouk; Anatoly Radyushkin
2006-01-01
We extend the analysis of the deeply virtual Compton scattering process to the weak interaction sector in the generalized Bjorken limit. The virtual Compton scattering amplitudes for the weak neutral and charged currents are calculated at the leading twist within the framework of the nonlocal light-cone expansion via coordinate space QCD string operators. Using a simple model, we estimate cross sections for neutrino scattering off the nucleon, relevant for future high intensity neutrino beam facilities
An inverse-source problem for maximization of pore-fluid oscillation within poroelastic formations
Jeong, C.; Kallivokas, L. F.
2016-01-01
This paper discusses a mathematical and numerical modeling approach for identification of an unknown optimal loading time signal of a wave source, atop the ground surface, that can maximize the relative wave motion of a single-phase pore fluid within fluid-saturated porous permeable (poroelastic) rock formations, surrounded by non-permeable semi-infinite elastic solid rock formations, in a one-dimensional setting. The motivation stems from a set of field observations, following seismic events and vibrational tests, suggesting that shaking an oil reservoir is likely to improve oil production rates. This maximization problem is cast into an inverse-source problem, seeking an optimal loading signal that minimizes an objective functional – the reciprocal of kinetic energy in terms of relative pore-fluid wave motion within target poroelastic layers. We use the finite element method to obtain the solution of the governing wave physics of a multi-layered system, where the wave equations for the target poroelastic layers and the elastic wave equation for the surrounding non-permeable layers are coupled with each other. We use a partial-differential-equation-constrained-optimization framework (a state-adjoint-control problem approach) to tackle the minimization problem. The numerical results show that the numerical optimizer recovers optimal loading signals, whose dominant frequencies correspond to amplification frequencies, which can also be obtained by a frequency sweep, leading to larger amplitudes of relative pore-fluid wave motion within the target hydrocarbon formation than other signals.
An inverse-source problem for maximization of pore-fluid oscillation within poroelastic formations
Jeong, C.
2016-07-04
This paper discusses a mathematical and numerical modeling approach for identification of an unknown optimal loading time signal of a wave source, atop the ground surface, that can maximize the relative wave motion of a single-phase pore fluid within fluid-saturated porous permeable (poroelastic) rock formations, surrounded by non-permeable semi-infinite elastic solid rock formations, in a one-dimensional setting. The motivation stems from a set of field observations, following seismic events and vibrational tests, suggesting that shaking an oil reservoir is likely to improve oil production rates. This maximization problem is cast into an inverse-source problem, seeking an optimal loading signal that minimizes an objective functional – the reciprocal of kinetic energy in terms of relative pore-fluid wave motion within target poroelastic layers. We use the finite element method to obtain the solution of the governing wave physics of a multi-layered system, where the wave equations for the target poroelastic layers and the elastic wave equation for the surrounding non-permeable layers are coupled with each other. We use a partial-differential-equation-constrained-optimization framework (a state-adjoint-control problem approach) to tackle the minimization problem. The numerical results show that the numerical optimizer recovers optimal loading signals, whose dominant frequencies correspond to amplification frequencies, which can also be obtained by a frequency sweep, leading to larger amplitudes of relative pore-fluid wave motion within the target hydrocarbon formation than other signals.
Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro
2017-05-01
In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.
Directory of Open Access Journals (Sweden)
Shoubin Wang
2017-01-01
Full Text Available The compound-variable inverse problem, which comprises the boundary temperature distribution and the surface convective heat transfer coefficient of a two-dimensional steady heat transfer system with an inner heat source, is studied in this paper using the conjugate gradient method. Introducing a complex variable to evaluate the gradient matrix of the objective function yields more precise inversion results. The boundary element method is applied to compute the temperatures at discrete points in the forward problem. The effects of measurement error and of the number of measurement points on the inversion result are discussed and compared with the L-MM method. Example calculations and analysis prove that the method applied in this paper retains good effectiveness and accuracy even when measurement error exists and the number of boundary measurement points is reduced. The comparison indicates that the influence of error on the inversion solution can be minimized effectively using this method.
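The complex-variable gradient idea referred to above can be illustrated with the classical complex-step derivative: perturbing an input along the imaginary axis gives the derivative from the imaginary part of the output, with no subtractive cancellation. A minimal sketch with an assumed toy least-squares objective (the paper's objective involves a boundary element forward solve, which is not reproduced here):

```python
import numpy as np

def complex_step_grad(f, x, h=1e-20):
    """Gradient of a scalar function f at x via the complex-step method.

    Perturbing coordinate k by i*h gives f(x + i*h*e_k), whose imaginary part
    equals h * df/dx_k to machine precision -- unlike finite differences,
    no subtraction of nearly equal numbers occurs.
    """
    x = np.asarray(x, dtype=complex)
    g = np.empty(x.size)
    for k in range(x.size):
        xp = x.copy()
        xp[k] += 1j * h
        g[k] = f(xp).imag / h
    return g

# Hypothetical objective: misfit to an assumed target vector.
target = np.array([1.0, -2.0, 0.5])
f = lambda x: 0.5 * np.sum((x - target) ** 2)
g = complex_step_grad(f, np.zeros(3))
print(g)  # analytic gradient at the origin is -target
```

The only requirement is that the forward model be implemented with operations that extend analytically to complex inputs, which holds for most numerical kernels.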
X-ray generator based on Compton scattering
Androsov, V.P.; Agafonov, A.V.; Botman, J.I.M.; Bulyak, E.V.; Drebot, I.; Gladkikh, P.I.; Grevtsev, V.; Ivashchenko, V.; Karnaukhov, I.M.; Lapshin, V.I.
2005-01-01
Nowadays, X-ray sources based on a storage ring with low beam energy and Compton scattering of an intense laser beam are under development in several laboratories. The paper describes the state of the art in the development and construction of the cooperative project of the Kharkov advanced X-ray source NESTOR
Joint Source Location and Focal Mechanism Inversion: efficiency, accuracy and applications
Liang, C.; Yu, Y.
2017-12-01
The analysis of induced seismicity has become a common practice to evaluate the results of hydraulic fracturing treatment. Liang et al (2016) proposed a joint Source Scanning Algorithm (jSSA for short) to obtain microseismic event locations and focal mechanisms simultaneously. The jSSA is superior to the traditional SSA in many aspects, but its computational cost is too significant for real-time monitoring. In this study, we have developed several scanning schemas to reduce computation time. A multi-stage scanning schema is shown to improve efficiency significantly while retaining accuracy. A series of tests has been carried out using both real field data and synthetic data to evaluate the accuracy of the method and its dependence on noise level, source depth, focal mechanism, and other factors. The surface-based arrays have better constraints on horizontal location errors (0.5). For sources with varying rakes, dips, strikes, and depths, the errors are mostly controlled by the partition of positive and negative polarities among the quadrants. More evenly partitioned polarities yield better results in both locations and focal mechanisms. Nevertheless, even with poor resolution for some focal mechanisms, the optimized jSSA method can still improve location accuracy significantly. Based on the much more densely distributed events and focal mechanisms, a gridded stress inversion is conducted to obtain an evenly distributed stress field. The full potential of the jSSA has yet to be explored in different directions, especially in earthquake seismology as seismic arrays become increasingly dense.
Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2010-02-01
We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.
Compton-thick AGN at high and low redshift
Akylas, A.; Georgantopoulos, I.; Corral, A.; Ranalli, P.; Lanzuisi, G.
2017-10-01
Compton-thick AGN, the most obscured sources detected in X-ray surveys, are of great interest both because they represent the hidden side of accretion and because they may signal AGN birth. We analyse the NuSTAR serendipitous survey observations in order to study Compton-thick AGN in the deepest possible ultra-hard band (>10 keV). We compare our results with our Swift/BAT findings in the local Universe, as well as with our results in the CDFS and COSMOS fields. We discuss the comparison with X-ray background synthesis models, finding that a low fraction of Compton-thick sources (about 15 per cent of the obscured population) is compatible with both the 2-10 keV band results and those at harder energies.
1D inversion and analysis of marine controlled-source EM data
DEFF Research Database (Denmark)
Christensen, N.B.; Dodds, Kevin; Bulley, Ian
2006-01-01
been displaced by resistive oil or gas. We present preliminary results from an investigation of the applicability of one-dimensional inversion of the data. A noise model for the data set is developed and inversion is carried out with multi-layer models and 4-layer models. For the data set in question...
The development of a Compton lung densitometer
Energy Technology Data Exchange (ETDEWEB)
Loo, B.W.; Goulding, F.S.; Madden, N.W.; Simon, D.S.
1988-11-01
A field instrument is being developed for the non-invasive determination of absolute lung density using unique Compton backscattering techniques. A system consisting of a monoenergetic gamma-ray beam and a shielded high-resolution high-purity germanium (HPGe) detector in a close-coupled geometry is designed to minimize errors due to multiple scattering and uncontrollable attenuation in the chest wall. Results of studies on system performance with phantoms, the optimization of detectors, and the fabrication of a practical gamma-ray source are presented. 3 refs., 6 figs., 2 tabs.
The development of a Compton lung densitometer
International Nuclear Information System (INIS)
Loo, B.W.; Goulding, F.S.; Madden, N.W.; Simon, D.S.
1988-11-01
A field instrument is being developed for the non-invasive determination of absolute lung density using unique Compton backscattering techniques. A system consisting of a monoenergetic gamma-ray beam and a shielded high-resolution high-purity germanium (HPGe) detector in a close-coupled geometry is designed to minimize errors due to multiple scattering and uncontrollable attenuation in the chest wall. Results of studies on system performance with phantoms, the optimization of detectors, and the fabrication of a practical gamma-ray source are presented. 3 refs., 6 figs., 2 tabs
DEFF Research Database (Denmark)
Oh, Geok Lian
properties such as the elastic wave speeds and soil densities. One processing method is casting the estimation problem into an inverse problem to solve for the unknown material parameters. The forward model for the seismic signals used in the literatures include ray tracing methods that consider only...... density values of the discretized ground medium, which leads to time-consuming computations and instability behaviour of the inversion process. In addition, the geophysics inverse problem is generally ill-posed due to non-exact forward model that introduces errors. The Bayesian inversion method through...... the first arrivals of the reflected compressional P-waves from the subsurface structures, or 3D elastic wave models that model all the seismic wave components. The ray tracing forward model formulation is linear, whereas the full 3D elastic wave model leads to a nonlinear inversion problem. In this Ph...
International Nuclear Information System (INIS)
Loureiro, Cesar Augusto Domingues; Santos, Adimir dos
2009-01-01
In the reactor physics tests performed at startup after refueling commercial PWRs, it is important to monitor subcriticality continuously during the approach to criticality. Reactivity measurements by the inverse kinetics method are widely used during the operation of a nuclear reactor, and it is possible to perform an online reactivity measurement based on the point reactor kinetics equations. This technique is successfully applied at sufficiently high power levels, or to a core without an external neutron source, where the neutron source term in the point reactor kinetics equations may be neglected. For operation at low power levels, the contribution of the neutron source must be taken into account; this implies knowledge of a quantity proportional to the source strength, which must therefore be determined. Experiments have been performed in the IPEN/MB-01 Research Reactor for the determination of the source term, using the Least Square Inverse Kinetics Method (LSIKM). A digital reactivity meter which neglects the source term is used to calculate the reactivity, and the source term can then be determined by the LSIKM. Once the source term is determined, its value can be added to the algorithm and the reactivity determined again, now considering the source term. The new digital reactivity meter can then be used to monitor reactivity during the approach to criticality, and the measured reactivity is more precise than that of the meter which neglects the source term. (author)
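The inverse kinetics computation described above can be sketched with a single effective delayed-neutron group. The parameter values and the one-group simplification are illustrative assumptions, not IPEN/MB-01 data, and the least-squares estimation of the source term itself is omitted; the `source` argument simply shows where a known source strength enters the reactivity equation.

```python
import numpy as np

# One effective delayed-neutron group (illustrative values, not plant data).
BETA = 0.0065        # total delayed-neutron fraction
LAMBDA_D = 0.08      # effective precursor decay constant [1/s]
LAMBDA_GEN = 2e-5    # neutron generation time [s]

def inverse_kinetics(t, n, source=0.0):
    """Reactivity history from a neutron-density history n(t).

    Integrates the precursor equation dC/dt = beta*n/Lambda - lambda*C along
    the measured n(t), then solves the point-kinetics equation for rho:
        rho = beta + (Lambda/n) * (dn/dt - lambda*C - S).
    A nonzero `source` S (same units as n/Lambda) is what makes the meter
    valid at low power, as in the source-corrected approach above.
    """
    dt = np.diff(t)
    C = BETA * n[0] / (LAMBDA_GEN * LAMBDA_D)    # assume initial equilibrium
    rho = np.zeros(len(t))
    for k in range(1, len(t)):
        # implicit Euler step for the precursor concentration
        C = (C + dt[k - 1] * BETA * n[k] / LAMBDA_GEN) / (1 + dt[k - 1] * LAMBDA_D)
        dndt = (n[k] - n[k - 1]) / dt[k - 1]
        rho[k] = BETA + LAMBDA_GEN / n[k] * (dndt - LAMBDA_D * C - source)
    return rho

t = np.linspace(0.0, 10.0, 1001)
n = np.ones_like(t)                  # steady power, no external source
rho = inverse_kinetics(t, n)
print(rho[-1])                       # ~0 reactivity at equilibrium
```

At constant power with the precursors in equilibrium the computed reactivity is zero, which is the basic sanity check for any digital reactivity meter.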
International Nuclear Information System (INIS)
Chimera, G.; Aoudia, A.; Panza, G.F.; Sarao, A.
2002-06-01
We make a multiscale investigation of the lithosphere-asthenosphere structure and of the active tectonics along a stripe from the Tyrrhenian to the Adriatic coast, with emphasis on the Umbria-Marche area, by means of surface-wave tomography and inversion experiments for structure and seismic moment tensor retrieval. The data include: a large number of new local and regional group velocity measurements sampling the Umbria-Marche Apennines and the Adria margin respectively; new and published phase velocity measurements sampling Italy and surroundings; deep seismic soundings which, crossing the whole Peninsula from the Tyrrhenian to the Adriatic coasts, go through the Umbria-Marche area. The local group velocity maps cover the area reactivated by the 1997-1998 Umbria-Marche earthquake sequence. These maps suggest an intimate relation between the lateral variations and distribution of the active fault systems and related sedimentary basins. Such relation is confirmed by the non-linear inversion of the local dispersion curves. To image the structure of the lithosphere-asthenosphere system from the Tyrrhenian to the Adriatic coast, we fix the upper crust parameters consistently with our Umbria-Marche models and with pertinent deep seismic sounding data and invert the regional long period dispersion measurements. At a local scale, in the Umbria-Marche area, the retrieved models for the upper crust reveal the importance of the inherited compressional tectonics on the ongoing extensional deformation and related seismic activity. The lateral and in-depth structural changes in the upper crust are likely controlling fault segmentation and seismogenesis. Source inversion studies of the large crustal events of the 1997 earthquake sequence show the dominance of normal faulting mechanisms, whereas selected aftershocks between the fault segments reveal that the prevailing deformation at the step-over is of strike-slip faulting type and may control the lateral fault extent. At the
Critical review of Compton imaging
International Nuclear Information System (INIS)
Guzzardi, R.; Licitra, G.
1987-01-01
This paper reviews the basic aspects, problems, and applications of Compton imaging including those related to nonmedical applications. The physics and technology at the base of this specific methodology are analyzed and the relative differences and merits with respect to other imaging techniques, using ionizing radiations, are reviewed. The basic Compton imaging approaches, i.e., point-by-point, line-by-line, and plane-by-plane, are analyzed. Specifically, physical design and technological aspects are reviewed and discussed. Furthermore, the most important clinical applications of the different methods are presented and discussed. Finally, possibilities and applications of the Compton imaging method to other nonmedical fields, as in the case of the important area of object defects recognition, are analyzed and reviewed. 56 references
Experimental and theoretical Compton profiles of Be, C and Al
Energy Technology Data Exchange (ETDEWEB)
Aguiar, Julio C., E-mail: jaguiar@arn.gob.a [Autoridad Regulatoria Nuclear, Av. Del Libertador 8250, C1429BNP, Buenos Aires (Argentina); Instituto de Fisica 'Arroyo Seco', Facultad de Ciencias Exactas, U.N.C.P.B.A., Pinto 399, 7000 Tandil (Argentina); Di Rocco, Hector O. [Instituto de Fisica 'Arroyo Seco', Facultad de Ciencias Exactas, U.N.C.P.B.A., Pinto 399, 7000 Tandil (Argentina); Arazi, Andres [Laboratorio TANDAR, Comision Nacional de Energia Atomica, Av. General Paz 1499, 1650 San Martin, Buenos Aires (Argentina)
2011-02-01
The results of Compton profile measurements, Fermi momentum determinations, and theoretical values obtained from a linear combination of Slater-type orbitals (STO) for core electrons in beryllium, carbon, and aluminium are presented. In addition, a Thomas-Fermi model is used to estimate the contribution of valence electrons to the Compton profile. Measurements were performed using monoenergetic photons of 59.54 keV provided by a low-intensity Am-241 γ-ray source. Scattered photons were detected at 90° from the beam direction using a p-type coaxial high-purity germanium (HPGe) detector. The experimental results are in good agreement with theoretical calculations.
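The kinematics behind such a measurement follow from the Compton formula, E' = E / (1 + (E/mₑc²)(1 − cos θ)). A small sketch for the 59.54 keV, 90° geometry described above; the formula is standard, and only the helper name is invented here:

```python
import math

M_E_C2 = 510.999  # electron rest energy [keV]

def compton_scattered_energy(e_kev, theta_deg):
    """Scattered photon energy from the Compton formula
    E' = E / (1 + (E / m_e c^2) * (1 - cos(theta)))."""
    ratio = e_kev / M_E_C2
    return e_kev / (1.0 + ratio * (1.0 - math.cos(math.radians(theta_deg))))

# Geometry of the measurement above: 59.54 keV (Am-241) scattered at 90 degrees.
e_scattered = compton_scattered_energy(59.54, 90.0)
print(round(e_scattered, 2))  # ~53.33 keV reaches the HPGe detector
```

The Doppler broadening of the line around this nominal energy is what carries the momentum-density information in a Compton profile measurement.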
Analysis of materials in ducts by Compton scattering
International Nuclear Information System (INIS)
Gouveia, M.A.G.; Lopes, R.T.; Jesus, E.F.O. de; Camerini, C.S.
2000-01-01
This work presents the use of the Compton scattering technique as a testing method for materials characterization in petroleum ducts. The tests were carried out in a laboratory setting, so the results presented should be analyzed with a view to the system eventually being used in the field. The inspection was performed using Compton scattering techniques, with two aligned detectors at an angle of 90 degrees to a Cs-137 source with an energy of 662 keV. The results demonstrated the good capability of the system to detect materials deposited in petroleum ducts during petroleum transportation. (author)
Waveform inversion of lateral velocity variation from wavefield source location perturbation
Choi, Yun Seok
2013-09-22
It is a challenge in waveform inversion to define the deep part of the velocity model as precisely as the shallow part. The lateral velocity variation, that is, the derivative of velocity with respect to the horizontal distance, combined with well-log data can be used to update the deep part of the velocity model more precisely. We develop a waveform inversion algorithm to obtain the lateral velocity variation by inverting the wavefield variation associated with a lateral shot-location perturbation. The gradient of the new waveform inversion algorithm is obtained by the adjoint-state method. Our inversion algorithm focuses on resolving the lateral changes of the velocity model with respect to a fixed reference vertical velocity profile given by a well log. We apply the method to a simple dome model to highlight the method's potential.
Waveform inversion of lateral velocity variation from wavefield source location perturbation
Choi, Yun Seok; Alkhalifah, Tariq Ali
2013-01-01
It is a challenge in waveform inversion to define the deep part of the velocity model as precisely as the shallow part. The lateral velocity variation, that is, the derivative of velocity with respect to the horizontal distance
Efficient full waveform inversion using the excitation representation of the source wavefield
Kalita, Mahesh; Alkhalifah, Tariq Ali
2017-01-01
Full waveform inversion (FWI) is an iterative method of data-fitting, aiming at high-resolution recovery of the unknown model parameters. However, its conventional implementation is a cumbersome process, requiring a long computational time and large
On a low intensity 241 Am Compton spectrometer for measurement ...
Indian Academy of Sciences (India)
In this paper, a new design and construction of a low-intensity (100 mCi) 241Am γ-ray Compton spectrometer is presented. The planar spectrometer is based on a small disc source with the shortest geometry. Measurement of the momentum density of polycrystalline Al is used to evaluate the performance of the new design.
A dual purpose Compton suppression spectrometer
Parus, J; Raab, W; Donohue, D
2003-01-01
A gamma-ray spectrometer with a passive and an active shield is described. It consists of an HPGe coaxial detector of 42% efficiency and 4 NaI(Tl) detectors. The energy output pulses of the Ge detector are delivered into 3 spectrometry chains giving the normal, anticoincidence, and coincidence spectra. From the spectra of a number of ¹³⁷Cs and ⁶⁰Co sources, a Compton suppression factor (SF) and a Compton reduction factor (RF), as the parameters characterizing the system performance, were calculated as a function of energy and source activity and compared with those given in the literature. The natural background is reduced about 8 times in the anticoincidence mode of operation compared to the normal spectrum, which results in decreasing the detection limits for non-coincident gamma-rays by up to a factor of 3. In the presence of other gamma-ray activities, in the range from 5 to 11 kBq, coincident and non-coincident, the detection limits can be decreased for some nuclides by a factor of 3 to 5.7.
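As a rough illustration of how a suppression factor of this kind can be computed, the sketch below takes the ratio of continuum counts between the normal and anticoincidence spectra over a photopeak-free energy window. The exact SF and RF definitions used in the paper may differ, and the toy spectra are invented.

```python
import numpy as np

def suppression_factor(normal, suppressed, window):
    """Compton suppression factor over a continuum energy window:
    ratio of summed counts in the normal spectrum to those in the
    anticoincidence (suppressed) spectrum, in a photopeak-free region."""
    lo, hi = window
    return normal[lo:hi].sum() / suppressed[lo:hi].sum()

# Toy spectra (hypothetical counts per channel): a flat Compton continuum
# that the active shield reduces 8-fold, plus a full-energy photopeak at
# channel 80 that the veto leaves untouched.
normal = np.full(100, 800.0)
suppressed = np.full(100, 100.0)
normal[80] += 5000.0
suppressed[80] += 5000.0

sf = suppression_factor(normal, suppressed, (10, 60))  # continuum-only window
print(sf)
```

Choosing the window away from photopeaks matters: including the peak channel would dilute the ratio, since full-energy events deposit no energy in the veto detectors and are not suppressed.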
Recent results from the Compton Observatory
Energy Technology Data Exchange (ETDEWEB)
Michelson, P.F.; Hansen, W.W. [Stanford Univ., CA (United States)
1994-12-01
The Compton Observatory is an orbiting astronomical observatory for gamma-ray astronomy that covers the energy range from about 30 keV to 30 GeV. The Energetic Gamma Ray Experiment Telescope (EGRET), one of four instruments on-board, is capable of detecting and imaging gamma radiation from cosmic sources in the energy range from approximately 20 MeV to 30 GeV. After about one month of tests and calibration following the April 1991 launch, a 15-month all sky survey was begun. This survey is now complete and the Compton Observatory is well into Phase II of its observing program which includes guest investigator observations. Among the highlights from the all-sky survey discussed in this presentation are the following: detection of five pulsars with emission above 100 MeV; detection of more than 24 active galaxies, the most distant at redshift greater than two; detection of many high latitude, unidentified gamma-ray sources, some showing significant time variability; detection of at least two high energy gamma-ray bursts, with emission in one case extending to at least 1 GeV. EGRET has also detected gamma-ray emission from solar flares up to energies of at least 2 GeV and has observed gamma-rays from the Large Magellanic Cloud.
DEFF Research Database (Denmark)
Annuar, A.; Gandhi, P.; Alexander, D. M.
2015-01-01
We present two Nuclear Spectroscopic Telescope Array (NuSTAR) observations of the local Seyfert 2 active galactic nucleus (AGN) and an ultraluminous X-ray source (ULX) candidate in NGC 5643. Together with archival data from Chandra, XMM-Newton, and Swift-BAT, we perform a high-quality broadband s...
DEFF Research Database (Denmark)
Oh, Geok Lian; Brunskog, Jonas
2014-01-01
Techniques have been studied for the localization of an underground source with seismic interrogation signals. Much of the work has involved defining either a P-wave acoustic model or a dispersive surface wave model to the received signal and applying the time-delay processing technique and frequ...... that for field data, inversion for localization is most advantageous when the forward model completely describe all the elastic wave components as is the case of the FDTD 3D elastic model....
PET-COMPTON System. Comparative evaluation with PET System using Monte Carlo Simulation
International Nuclear Information System (INIS)
Diaz Garcia, Angelina; Arista Romeu, Eduardo; Abreu Alfonso, Yamiel; Leyva Fabelo, Antonio; Pinnera Hernandez, Ibrahin; Bolannos Perez, Lourdes; Rubio Rodriguez, Juan A.; Perez Morales, Jose M.; Arce Dubois, Pedro; Vela Morales, Oscar; Willmott Zappacosta, Carlos
2012-01-01
Positron Emission Tomography (PET) of small animals has now achieved spatial resolution of around 1 mm, and different approaches to improve this spatial resolution are currently under study. One of them combines PET technology with Compton cameras. This paper presents the idea of the so-called PET-Compton systems and includes a comparative evaluation of spatial resolution and global efficiency in both a PET and a PET-Compton system by means of Monte Carlo simulations using the Geant4 code. The simulation was done on a PET-Compton system made up of the LYSO-LuYAP scintillating detectors of the small-animal PET scanner named Clear-PET, and Compton detectors based on CdZnTe semiconductor. A group of radionuclides that emit a positron (e+) and a gamma quantum almost simultaneously, and that fulfill some selection criteria for possible use in PET-Compton systems for medical and biological applications, were studied under simulation conditions. By means of analytical reconstruction using the SSRB (Single Slice Rebinning) method, superior spatial resolution was obtained in the PET-Compton system for all tested radionuclides (reaching sub-millimeter values for the 22Na source). However, this simulation analysis showed limited global efficiency values in the PET-Compton system (on the order of 10⁻⁵-10⁻⁶ %) instead of the values of around 5×10⁻¹ % that have been achieved in the PET system. (author)
Directory of Open Access Journals (Sweden)
M. Kopacz
2010-02-01
Full Text Available We combine CO column measurements from the MOPITT, AIRS, SCIAMACHY, and TES satellite instruments in a full-year (May 2004–April 2005) global inversion of CO sources at 4°×5° spatial resolution and monthly temporal resolution. The inversion uses the GEOS-Chem chemical transport model (CTM) and its adjoint applied to MOPITT, AIRS, and SCIAMACHY. Observations from TES, surface sites (NOAA/GMD), and aircraft (MOZAIC) are used for evaluation of the a posteriori solution. Using GEOS-Chem as a common intercomparison platform shows global consistency between the different satellite datasets and with the in situ data. Differences can be largely explained by different averaging kernels and a priori information. The global CO emission from combustion as constrained in the inversion is 1350 Tg a^{−1}. This is much higher than current bottom-up emission inventories. A large fraction of the correction results from a seasonal underestimate of CO sources at northern mid-latitudes in winter and suggests a larger-than-expected CO source from vehicle cold starts and residential heating. Implementing this seasonal variation of emissions solves the long-standing problem of models underestimating CO in the northern extratropics in winter-spring. A posteriori emissions also indicate a general underestimation of biomass burning in the GFED2 inventory. However, the tropical biomass burning constraints are not quantitatively consistent across the different datasets.
The hydrogen anomaly problem in neutron Compton scattering
Karlsson, Erik B.
2018-03-01
Neutron Compton scattering (also called ‘deep inelastic scattering of neutrons’, DINS) is a method used to study momentum distributions of light atoms in solids and liquids. It has been employed extensively since the start-up of intense pulsed neutron sources about 25 years ago. The information lies primarily in the width and shape of the Compton profile and not in the absolute intensity of the Compton peaks. It was therefore not immediately recognized that the relative intensities of Compton peaks arising from scattering on different isotopes did not always agree with values expected from standard neutron cross-section tables. The discrepancies were particularly large for scattering on protons, a phenomenon that became known as ‘the hydrogen anomaly problem’. The present paper is a review of the discovery, experimental tests to prove or disprove the existence of the hydrogen anomaly and discussions concerning its origin. It covers a twenty-year-long history of experimentation, theoretical treatments and discussions. The problem is of fundamental interest, since it involves quantum phenomena on the subfemtosecond time scale, which are not visible in conventional thermal neutron scattering but are important in Compton scattering, where the neutrons have energies two orders of magnitude higher. Different H-containing systems show different cross-section deficiencies and when the scattering processes are followed on the femtosecond time scale the cross-section losses disappear on different characteristic time scales for each H-environment. The last section of this review reproduces results from published papers based on quantum interference in scattering on identical particles (proton or deuteron pairs or clusters), which have given a quantitative theoretical explanation both regarding the H-cross-section reduction and its time dependence. Some new explanations are added and the concluding chapter summarizes the conditions for observing the specific quantum
Micro-seismic Imaging Using a Source Independent Waveform Inversion Method
Wang, Hanchen
2016-01-01
Full waveform inversion (FWI) is widely used. The FWI method updates the velocity model by minimizing the misfit between the observed data and the predicted data. Using FWI to locate and image microseismic events allows for an automatic process (free of picking
Czech Academy of Sciences Publication Activity Database
Anikiev, D.; Valenta, Jan; Staněk, František; Eisner, Leo
2014-01-01
Roč. 198, č. 1 (2014), s. 249-258 ISSN 0956-540X R&D Projects: GA ČR GAP210/12/2451 Institutional support: RVO:67985891 Keywords : inverse theory * probability distributions * wave scattering and diffraction * fractures and faults Subject RIV: DB - Geology ; Mineralogy Impact factor: 2.724, year: 2013
A Compton camera application for the GAMOS GEANT4-based framework
Energy Technology Data Exchange (ETDEWEB)
Harkness, L.J., E-mail: ljh@ns.ph.liv.ac.uk [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Arce, P. [Department of Basic Research, CIEMAT, Madrid (Spain); Judson, D.S.; Boston, A.J.; Boston, H.C.; Cresswell, J.R.; Dormand, J.; Jones, M.; Nolan, P.J.; Sampson, J.A.; Scraggs, D.P.; Sweeney, A. [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Lazarus, I.; Simpson, J. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom)
2012-04-11
Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double sided high purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.
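The electronic collimation described above rests on Compton kinematics: the energies deposited in the scatter and absorber detectors fix the opening angle of a cone on which the source must lie. A minimal sketch of that reconstruction step (illustrative only, not part of the GAMOS application; the function name is ours):

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy in keV

def compton_cone_angle(e_scatter, e_absorb):
    """Opening angle (rad) of the Compton cone, from the energy (keV)
    deposited in the scatter detector (e_scatter) and in the absorber
    (e_absorb), using cos(theta) = 1 - me*c^2 * (1/E2 - 1/(E1 + E2))."""
    e_total = e_scatter + e_absorb
    cos_theta = 1.0 - ME_C2 * (1.0 / e_absorb - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return float(np.arccos(cos_theta))

# a 662 keV gamma ray (137Cs) scattering through ~90 degrees deposits
# about 374 keV in the scatterer and 288 keV in the absorber
theta = compton_cone_angle(373.6, 288.4)
```

Each event then constrains the source to a cone of half-angle theta around the scatterer-to-absorber axis; overlapping many such cones produces the reconstructed image.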
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
Energy Technology Data Exchange (ETDEWEB)
Lowell, A. W.; Boggs, S. E; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)
2017-10-20
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
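For an ideal polarimeter the azimuthal scattering angles follow p(phi) proportional to 1 + mu*cos(2*(phi - phi0)); the standard method fits this sinusoid to a histogram, whereas the MLM fits the events directly. A toy unbinned fit under the ideal-polarimeter assumption (the real analysis must fold in the nonideal instrument response; all names and numbers are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def sample_azimuths(n, mu, phi0):
    """Rejection-sample azimuthal scattering angles from
    p(phi) = (1 + mu*cos(2*(phi - phi0))) / (2*pi)."""
    out = []
    while len(out) < n:
        phi = rng.uniform(0, 2 * np.pi, n)
        keep = rng.uniform(0, 1 + mu, n) < 1 + mu * np.cos(2 * (phi - phi0))
        out.extend(phi[keep])
    return np.array(out[:n])

def neg_log_like(params, phi):
    mu, phi0 = params
    return -np.sum(np.log((1 + mu * np.cos(2 * (phi - phi0))) / (2 * np.pi)))

phi = sample_azimuths(20000, mu=0.3, phi0=0.5)
res = minimize(neg_log_like, x0=[0.1, 0.0], args=(phi,),
               bounds=[(0.0, 0.99), (-np.pi, np.pi)])
mu_hat, phi0_hat = res.x
```

The fitted modulation mu_hat and polarization angle phi0_hat come with no binning loss, which is where the quoted ~21% MDP improvement originates.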
Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada
Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.
2015-08-01
Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error; errors can in fact cancel each other and propagate non-linearly to the flux estimates. In addition, the posterior flux estimates can differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion
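For a linear observation operator with Gaussian prior and model-observation mismatch errors, the cost-function minimisation has a closed-form solution. A generic sketch of that update (not the CarbonTracker configuration; the function name and all numbers are illustrative):

```python
import numpy as np

def bayesian_inversion(H, y, x_prior, B, R):
    """Analytical minimiser of the Bayesian cost function:
    x_hat = x_prior + B H^T (H B H^T + R)^{-1} (y - H x_prior),
    with posterior covariance A = B - K H B (K is the gain)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    x_hat = x_prior + K @ (y - H @ x_prior)
    A = B - K @ H @ B
    return x_hat, A

H = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])  # observation operator
x_prior = np.array([1.0, 1.0])                      # prior scaling factors
B = 0.25 * np.eye(2)                                # prior error covariance
R = 1e-4 * np.eye(3)                                # mismatch error covariance
y = H @ np.array([1.2, 0.8])                        # synthetic observations
x_hat, A = bayesian_inversion(H, y, x_prior, B, R)
```

The relative sizes of B and R set how strongly the observations pull the scaling factors away from the prior, which is exactly the sensitivity the abstract highlights for the CFM method.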
Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.
2016-12-01
Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To do so, we apply the CASCADE modeling framework (Schmitt et al., 2016). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely-sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1 % of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models
Karve, Pranav M.
2016-12-28
We discuss a methodology for computing the optimal spatio-temporal characteristics of surface wave sources necessary for delivering wave energy to a targeted subsurface formation. The wave stimulation is applied to the target formation to enhance the mobility of particles trapped in its pore space. We formulate the associated wave propagation problem for three-dimensional, heterogeneous, semi-infinite, elastic media. We use hybrid perfectly matched layers at the truncation boundaries of the computational domain to mimic the semi-infiniteness of the physical domain of interest. To recover the source parameters, we define an inverse source problem using the mathematical framework of constrained optimization and resolve it by employing a reduced-space approach. We report the results of our numerical experiments attesting to the methodology's ability to specify the spatio-temporal description of sources that maximize wave energy delivery. Copyright © 2016 John Wiley & Sons, Ltd.
The Compton polarimeter at ELSA
International Nuclear Information System (INIS)
Doll, D.
1998-06-01
In order to measure the degree of transverse polarization of the stored electron beam in the Electron Stretcher Accelerator ELSA, a Compton polarimeter has been built. The measurement is based on the polarization-dependent cross section for Compton scattering of circularly polarized photons off polarized electrons. Using a high-power laser beam and detecting the scattered photons, a measuring time of two minutes for a statistical error of 5% is expected from numerical simulations. The design and the results of a computer-controlled feedback system to enhance the laser beam stability at the interaction point in ELSA are presented. The detection of the scattered photons is based on a lead converter and a silicon-microstrip detector. The design and test results of the detector module, including readout electronics and computer control, are discussed. (orig.)
Sequential Bayesian geoacoustic inversion for mobile and compact source-receiver configuration.
Carrière, Olivier; Hermand, Jean-Pierre
2012-04-01
Geoacoustic characterization of wide areas through inversion requires easily deployable configurations, including free-drifting platforms, underwater gliders and autonomous vehicles, typically performing repeated transmissions during their course. In this paper, the inverse problem is formulated as sequential Bayesian filtering to take advantage of repeated transmission measurements. Nonlinear Kalman filters implement a random-walk model for geometry and environment and an acoustic propagation code in the measurement model. Data from the MREA/BP07 sea trials are tested, consisting of multitone and frequency-modulated signals (bands: 0.25-0.8 and 0.8-1.6 kHz) received on a shallow vertical array of four hydrophones spaced 5 m apart, drifting over 0.7-1.6 km in range. Space- and time-coherent processing are applied to the respective signal types. Kalman filter outputs are compared to a sequence of global optimizations performed independently on each received signal. For both signal types, the sequential approach is both more accurate and more efficient. Due to frequency diversity, the processing of modulated signals produces more stable tracking. Although an extended Kalman filter provides comparable estimates of the tracked parameters, the ensemble Kalman filter is necessary to properly assess uncertainty. In spite of mild range dependence and a simplified bottom model, all tracked geoacoustic parameters are consistent with high-resolution seismic profiling, core logging P-wave velocity, and previous inversion results with fixed geometries.
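The sequential-filtering idea, stripped to its core, is a Kalman filter with a random-walk state model; the paper's measurement model is a full acoustic propagation code, replaced here by a toy linear map (all values are illustrative, not the MREA/BP07 setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def kalman_track(z, h, q, r, x0, p0):
    """Scalar Kalman filter with a random-walk state model
    x_k = x_{k-1} + w, w ~ N(0, q), and linear measurements
    z_k = h * x_k + v, v ~ N(0, r)."""
    x, p, track = x0, p0, []
    for zk in z:
        p = p + q                      # predict: random walk inflates variance
        k = p * h / (h * h * p + r)    # Kalman gain
        x = x + k * (zk - h * x)       # update with the innovation
        p = (1 - k * h) * p
        track.append(x)
    return np.array(track)

# toy example: track a slowly drifting sound-speed-like parameter
truth = 1500.0 + np.cumsum(rng.normal(0, 0.5, 200))
z = 2.0 * truth + rng.normal(0, 20.0, 200)
est = kalman_track(z, h=2.0, q=0.25, r=400.0, x0=1500.0, p0=100.0)
```

Repeated transmissions let the filter accumulate information along the drift track, which is why the sequential approach outperforms independent per-signal optimizations.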
Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.
2018-05-01
In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite time duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h from the start of an incident. We optimized the calculation of the source-receptor function by developing a method which requires integrating as many backward adjoint equations as the available measurement stations. This resulted in high numerical efficiency of the method. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and it has been tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases, previously generated in a wind tunnel experiment.
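Maximizing the correlation between simulated and observed concentrations over candidate source cells can be sketched as a simple grid search (a schematic stand-in for the ADREA-HF machinery; the function and variable names are ours):

```python
import numpy as np

def best_source(srs, observed):
    """Pick the candidate source (grid cell) whose simulated receptor
    time series correlates best with the observations.
    srs: array (n_candidates, n_obs) of receptor responses to a unit
    release from each candidate cell (the source-receptor function)."""
    sims = srs - srs.mean(axis=1, keepdims=True)
    obs = observed - observed.mean()
    corr = (sims @ obs) / (np.linalg.norm(sims, axis=1) * np.linalg.norm(obs))
    i = int(np.argmax(corr))
    # with a unit-release response, the emission rate follows by least squares
    q = float(srs[i] @ observed / (srs[i] @ srs[i]))
    return i, q, corr[i]

srs = np.array([[1.0, 2, 3, 4, 5],
                [0.0, 3, 1, 4, 2],
                [5.0, 1, 4, 2, 3]])
observed = 2.5 * srs[1]          # synthetic data: cell 1 releasing at rate 2.5
cell, rate, score = best_source(srs, observed)
```

The efficiency gain described in the abstract comes from computing the rows of srs with one backward adjoint run per measurement station rather than one forward run per candidate cell.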
Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.
2014-12-01
Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained using different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from waveform data in multiple period bands with a common inversion method and discussed its period-dependent source characteristics. In this study, to resolve the spatiotemporal rupture behavior of this event in more detail, we introduce a new fault surface model with a finer sub-fault size and estimate the source models in multiple period bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this period band into three bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each band using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by the 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of about 16 × 16 km². The estimated source models in the multiple period bands show the following source image: (1) First deep rupture off Miyagi at 0-60 s toward down-dip, mostly radiating relatively short-period (10-25 s) seismic waves. (2) Shallow rupture off Miyagi at 45-90 s toward up-dip with long duration, radiating long-period (50-100 s) seismic waves. (3) Second deep rupture off Miyagi at 60-105 s toward down-dip, radiating longer-period seismic waves than that of the first deep rupture. (4) Deep
International Nuclear Information System (INIS)
Okuyama, Shinichi; Matsuzawa, Taiju; Sera, Koichiro; Mishina, Hitoshi.
1982-01-01
The possibility of tomographic fluoroscopy was investigated with an appropriately collimated monoenergetic gamma-ray source and a gamma camera. Its clinical applications will be found in the guidance of paracentesis, biopsy, endoscopy and removal of foreign bodies. The fact that it gives positive images would enhance its usefulness. It will also be useful in planning radiotherapy. Further expansion of radiological diagnosis can be expected if the technique is combined with direct magnification and contrast magnification with the Shinozaki color TV system. (author)
International Nuclear Information System (INIS)
Srinivas, C.V.; Rakesh, P.T.; Baskaran, R.; Venkatraman, B.
2018-01-01
Source term is an important input for consequence analysis using Decision Support Systems (DSS) to project the radiological impact in the event of nuclear emergencies. A source term model called 'ASTER' is incorporated in the Online Nuclear Emergency Response System (ONERS) operational at the Kalpakkam site for decision making during nuclear emergencies. It computes release rates by an inverse method, employing an atmospheric dispersion model and the gamma dose rates measured by environmental radiation monitors (ERM) deployed around the nuclear plant. The estimates may depend on the distribution of ERMs around the release location. In this work, data from gamma monitors located at radii of 0.75 km and 1.5 km are used to assess the accuracy of the source term estimation for stack releases of MAPS-PHWR at Kalpakkam
Demonstration of improved seismic source inversion method of tele-seismic body wave
Yagi, Y.; Okuwaki, R.
2017-12-01
Seismic rupture inversion of tele-seismic body waves has been widely applied to studies of large earthquakes. In general, tele-seismic body waves contain information on the overall rupture process of a large earthquake, but they have been considered inappropriate for analyzing the detailed rupture process of an M6-7 class earthquake. Recently, the quality and quantity of tele-seismic data and the inversion method have been greatly improved. The improved data and method enable us to study the detailed rupture process of an M6-7 class earthquake even if we use only tele-seismic body waves. In this study, we demonstrate the ability of the improved data and method through analyses of the 2016 Rieti, Italy earthquake (Mw 6.2) and the 2016 Kumamoto, Japan earthquake (Mw 7.0), both of which have been well investigated using InSAR data sets and field observations. We assumed the rupture to occur on a single fault plane model inferred from the moment tensor solutions and the aftershock distribution. We constructed spatiotemporal discretized slip-rate functions with patches arranged as closely as possible. We performed inversions using several fault models and found that the spatiotemporal location of the large slip-rate area was robust. In the 2016 Kumamoto, Japan earthquake, the slip-rate distribution shows that the rupture propagated to the southwest during the first 5 s. At 5 s after the origin time, the main rupture started to propagate toward the northeast. The first and second episodes correspond to rupture propagation along the Hinagu fault and the Futagawa fault, respectively. In the 2016 Rieti, Italy earthquake, the slip-rate distribution shows that the rupture propagated in the up-dip direction during the first 2 s, and then propagated toward the northwest. From both analyses, we propose that the spatiotemporal slip-rate distribution estimated by the improved inversion method of tele-seismic body waves carries enough information to study the detailed rupture process of an M6-7 class earthquake.
Compton radiography, 2. Clinical significance of Compton radiography of a chest phantom
Energy Technology Data Exchange (ETDEWEB)
Okuyama, S; Sera, K; Fukuda, H; Shishido, F [Tohoku Univ., Sendai (Japan). Research Inst. for Tuberculosis, Leprosy and Cancer; Mishina, H
1977-09-01
Compton radiography, a tomographic technique using Compton-scattered rays from a monochromatic gamma-ray beam, proved feasible for tomographing a chest phantom. The result suggested that the technique could be extended to imaging of the lung and the surrounding structures of the chest wall, mediastinum and liver in a Compton tomographic mode.
Increase in Compton scattering of gamma rays passing along a metal surface
International Nuclear Information System (INIS)
Grigor'ev, A.N.; Bilyk, Z.V.; Sakun, A.V.; Marushchenko, V.V.; Chernyavskij, O.Yu.; Litvinov, Yu.V.
2014-01-01
The paper presents an experimental study of the changes in energy of gamma rays from a 137Cs source as they pass along a metal surface. The decrease in gamma energy was evidenced by the reduction of the number of gamma rays in the complete absorption peak down to the level of the Compton continuum and by the increase of the Compton effect. The number of gamma rays in the complete absorption peak decreases by a factor of 3.5 over the angular range studied
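The energy shift underlying these spectra follows from the single-scattering Compton formula; a quick numerical check for the 137Cs line (illustrative values only):

```python
import math

ME_C2 = 511.0  # electron rest energy in keV

def compton_scattered_energy(e0_kev, theta_rad):
    """Photon energy after Compton scattering through angle theta:
    E' = E0 / (1 + (E0 / me c^2) * (1 - cos theta))."""
    return e0_kev / (1.0 + (e0_kev / ME_C2) * (1.0 - math.cos(theta_rad)))

# 137Cs line at 662 keV: grazing scattering barely shifts the energy,
# while backscattering drops it to the familiar backscatter peak near 184 keV
e_grazing = compton_scattered_energy(662.0, 0.05)
e_back = compton_scattered_energy(662.0, math.pi)
```

Small-angle scattering off the metal surface thus removes counts from the 662 keV full-absorption peak while depositing only slightly lower energies into the Compton region of the spectrum.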
Noise source localization on tyres using an inverse boundary element method
DEFF Research Database (Denmark)
Schuhmacher, Andreas; Saemann, E-U; Hald, J
1998-01-01
A dominating part of tyre noise is radiated from a region close to the tyre/road contact patch, where it is very difficult to measure both the tyre vibration and the acoustic near field. The approach taken in the present paper is to model the tyre and road surfaces with a Boundary Element Model...... (BEM), with unknown node vibration data on the tyre surface. The BEM model is used to calculate a set of transfer functions from the node vibrations to the sound pressure at a set of microphone positions around the tyre. By approximate inversion of the matrix of transfer functions, the surface...... from tyre noise measurements will be presented at the conference....
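The "approximate inversion of the matrix of transfer functions" is typically a regularised pseudo-inversion; a truncated-SVD sketch of that step (illustrative only, not the paper's exact scheme; in practice the pressures and transfer matrix are complex-valued per frequency):

```python
import numpy as np

def tsvd_reconstruct(T, p, k):
    """Reconstruct surface node vibrations v from microphone pressures
    p = T v by truncated-SVD pseudo-inversion of the BEM transfer
    matrix T; keeping only the k largest singular values regularises
    the otherwise ill-posed inversion."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ p) / s[:k])

rng = np.random.default_rng(3)
T = rng.normal(size=(20, 10))   # 20 microphones, 10 surface nodes
v_true = rng.normal(size=10)
p = T @ v_true                  # noise-free synthetic pressures
v_hat = tsvd_reconstruct(T, p, k=10)
```

With noisy measured pressures, k is chosen below full rank so that small singular values do not amplify measurement noise into the reconstructed surface vibration.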
Douk, Hamid Shafaei; Aghamiri, Mahmoud Reza; Ghorbani, Mahdi; Farhood, Bagher; Bakhshandeh, Mohsen; Hemmati, Hamid Reza
2018-01-01
The aim of this study is to evaluate the accuracy of the inverse square law (ISL) method for determining the location of the virtual electron source (S_vir) in a Siemens Primus linac. Different experimental methods have been presented for determining virtual and effective electron source locations, such as the Full Width at Half Maximum (FWHM), Multiple Coulomb Scattering (MCS), Multi Pinhole Camera (MPC) and Inverse Square Law (ISL) methods; among these, ISL is the most commonly used. Firstly, the Siemens Primus linac was simulated using the MCNPX Monte Carlo code. Then, using dose profiles obtained from the Monte Carlo simulations, the location of S_vir was calculated for 5, 7, 8, 10, 12 and 14 MeV electron energies and 10 cm × 10 cm, 15 cm × 15 cm, 20 cm × 20 cm and 25 cm × 25 cm field sizes. Additionally, the location of S_vir was obtained by the ISL method for the same electron energies and field sizes. Finally, the values obtained by the ISL method were compared to the values resulting from the Monte Carlo simulation. The findings indicate that the calculated S_vir values depend on beam energy and field size. For a given energy, the distance of S_vir increases with field size in most cases. Furthermore, for a given applicator, the distance of S_vir increases with electron energy in most cases. The variation of S_vir with field size at a fixed energy is larger than its variation with electron energy at a fixed field size. According to the results, the ISL method can be considered a good method for calculating the S_vir location at higher electron energies (14 MeV).
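The ISL fit itself is a one-line regression: for a point virtual source, sqrt(D(0)/D(g)) is linear in the extra air gap g, and the reciprocal slope gives the virtual-source distance. A sketch with synthetic numbers (purely illustrative; names are ours):

```python
import numpy as np

def virtual_source_distance(gaps_cm, doses):
    """Inverse-square-law estimate of the virtual source distance S.
    If D(g) = D0 * (S / (S + g))^2 for an extra gap g beyond the
    reference plane, then sqrt(D(0)/D(g)) = 1 + g/S is linear in g
    with slope 1/S; a straight-line fit recovers S."""
    y = np.sqrt(doses[0] / np.asarray(doses, dtype=float))
    slope, intercept = np.polyfit(np.asarray(gaps_cm, dtype=float), y, 1)
    return 1.0 / slope  # distance from virtual source to the g = 0 plane

# synthetic check: virtual source 95 cm upstream of the reference plane
S_true = 95.0
gaps = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
doses = (S_true / (S_true + gaps)) ** 2
S_est = virtual_source_distance(gaps, doses)
```

In practice, in-air scatter makes the measured points deviate from a perfect straight line, which is one reason the ISL estimate degrades at lower electron energies.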
A comparison of two methods for earthquake source inversion using strong motion seismograms
Directory of Open Access Journals (Sweden)
G. C. Beroza
1994-06-01
Full Text Available In this paper we compare two time-domain inversion methods that have been widely applied to the problem of modeling earthquake rupture using strong-motion seismograms. In the multi-window method, each point on the fault is allowed to rupture multiple times. This allows flexibility in the rupture time and hence the rupture velocity. Variations in the slip-velocity function are accommodated by variations in the slip amplitude in each time-window. The single-window method assumes that each point on the fault ruptures only once, when the rupture front passes. Variations in slip amplitude are allowed and variations in rupture velocity are accommodated by allowing the rupture time to vary. Because the multi-window method allows greater flexibility, it has the potential to describe a wider range of faulting behavior; however, with this increased flexibility comes an increase in the degrees of freedom and the solutions are comparatively less stable. We demonstrate this effect using synthetic data for a test model of the Mw 7.3 1992 Landers, California earthquake, and then apply both inversion methods to the actual recordings. The two approaches yield similar fits to the strong-motion data with different seismic moments indicating that the moment is not well constrained by strong-motion data alone. The slip amplitude distribution is similar using either approach, but important differences exist in the rupture propagation models. The single-window method does a better job of recovering the true seismic moment and the average rupture velocity. The multi-window method is preferable when rise time is strongly variable, but tends to overestimate the seismic moment. Both methods work well when the rise time is constant or short compared to the periods modeled. Neither approach can recover the temporal details of rupture propagation unless the distribution of slip amplitude is constrained by independent data.
Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc
2013-04-01
The Chernobyl nuclear accident, and more recently the Fukushima accident, highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved efficient for assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models, and they have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous and not as well distributed within Japan as the dose rate measurements. Gamma dose rate measurements, by contrast, were numerous, well distributed within Japan and offered a high temporal frequency, efficiently documenting the evolution of the contamination. However, dose rate data are not as easy to use as air sampling measurements, and until now they had not been used in an inverse modelling approach. Indeed, dose rates result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need for a priori information on emissions. The method proved efficient and reliable when applied to the Fukushima accident. The emissions of the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed. The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in
Directory of Open Access Journals (Sweden)
B. de Foy
2012-10-01
Full Text Available Gaseous elemental mercury is a global pollutant that can lead to serious health concerns via deposition to the biosphere and bio-accumulation in the food chain. Hourly measurements between June 2004 and May 2005 at an urban site (Milwaukee, WI) show elevated levels of mercury in the atmosphere with numerous short-lived peaks as well as longer-lived episodes. The measurements are analyzed with an inverse model to obtain information about mercury emissions. The model is based on high-resolution meteorological simulations (WRF), hourly back-trajectories (WRF-FLEXPART) and a chemical transport model (CAMx). The hybrid formulation combining back-trajectories and Eulerian simulations is used to identify potential source regions as well as the impacts of forest fires and lake surface emissions. Uncertainty bounds are estimated using a bootstrap method on the inversions. Comparison with the US Environmental Protection Agency's National Emission Inventory (NEI) and Toxic Release Inventory (TRI) shows that emissions from coal-fired power plants are properly characterized, but emissions from local urban sources, waste incineration and metal processing could be significantly under-estimated. Emissions from the lake surface and from forest fires were found to have significant impacts on mercury levels in Milwaukee, and to be underestimated by a factor of two or more.
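Bootstrap uncertainty bounds of the kind used in the study can be sketched for a generic linear inversion (toy data; not the WRF-FLEXPART/CAMx setup, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_emissions(H, y, n_boot=500):
    """Bootstrap uncertainty bounds for a linear least-squares
    emission inversion y ~ H x: resample measurement rows with
    replacement, refit, and take 2.5/97.5 percentiles."""
    n = len(y)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        x, *_ = np.linalg.lstsq(H[idx], y[idx], rcond=None)
        draws.append(x)
    return np.percentile(np.array(draws), [2.5, 97.5], axis=0)

H = rng.uniform(0, 1, (60, 3))        # 60 observations, 3 source categories
x_true = np.array([4.0, 1.0, 2.5])    # synthetic emission strengths
y = H @ x_true + rng.normal(0, 0.1, 60)
lo, hi = bootstrap_emissions(H, y)
```

Resampling the observations propagates measurement scatter into the emission estimates without assuming a parametric error model, which suits the irregular peak structure of the mercury time series.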
DEFF Research Database (Denmark)
Karamehmedovic, Mirza; Kirkeby, Adrian; Knudsen, Kim
2018-01-01
setting: From measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier-Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method...
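The SVD-based reconstruction from finitely many measurements can be illustrated with a toy forward operator; the random matrix below is a hypothetical stand-in for the actual operators built from Fourier-Bessel functions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward operator mapping n_coef Fourier-Bessel source
# coefficients to measurements at n_freq frequencies.
n_freq, n_coef = 12, 8
A = rng.normal(size=(n_freq, n_coef))
c_true = rng.normal(size=n_coef)
m = A @ c_true  # noise-free measurements

# SVD-based reconstruction: invert only singular values above a cutoff,
# mirroring the truncation that stabilises the inverse source problem.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-10 * s[0]
c_rec = Vt.T[:, keep] @ ((U.T[keep] @ m) / s[keep])
```

With enough measurement frequencies the operator has full column rank and the coefficients are recovered exactly; the singular-value cutoff is where stability enters.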
International Nuclear Information System (INIS)
Alberico, W.M.; Molinari, A.
1982-01-01
In this paper we briefly review the formalism of nuclear Compton scattering in the framework of the low-energy theorems (LET). We treat the resonant terms of the amplitude, having collective intermediate nuclear states, as a superposition of Lorentz lines with energy, width and strength fixed by photo-absorption experiments. The gauge terms are evaluated starting from a simple, but realistic, nuclear Hamiltonian. Dynamical nucleon-nucleon correlations are consistently taken into account, beyond those imposed by the Pauli principle. The comparison of the theoretical predictions with the data on elastic scattering of photons from 208Pb shows that the LET are insufficient to account for the experiment. (orig.)
National Research Council Canada - National Science Library
Steenman, Daryl
1999-01-01
.... In the far-field of these tested objects, actual sources of high reflectivity or "hot spots" on the tested objects can be isolated to within one half of the wavelength of the electromagnetic wave used for testing...
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas
2017-10-01
In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of
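The location-selection step — scoring each candidate grid cell by how well its SRS matrix explains the measurements — can be sketched as follows. A non-negative least-squares fit per cell stands in for the full LS-APC/Bayesian model selection, and all dimensions and data are synthetic assumptions:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Hypothetical setup: n_cells candidate release locations, each with an
# SRS matrix (n_obs x n_times) from backward dispersion runs.
n_obs, n_times, n_cells = 60, 10, 25
srs = rng.uniform(0.0, 1.0, size=(n_cells, n_obs, n_times))
true_cell = 7
q_true = np.abs(rng.normal(1.0, 0.5, n_times))   # true release time series
y = srs[true_cell] @ q_true + rng.normal(0, 0.01, n_obs)

# For each candidate cell, fit a non-negative release time series and
# score the cell by its residual norm; the lowest residual marks the
# most probable location (a crude stand-in for Bayesian model selection).
residuals = []
for j in range(n_cells):
    q_j, r_j = nnls(srs[j], y)
    residuals.append(r_j)
best = int(np.argmin(residuals))
```

In the paper's method the per-cell score is a proper Bayesian model probability rather than a raw residual, and it additionally ranks the dispersion models.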
Directory of Open Access Journals (Sweden)
O. Tichý
2017-10-01
Full Text Available In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most
Aur, K. A.; Poppeliers, C.; Preston, L. A.
2017-12-01
The Source Physics Experiment (SPE) consists of a series of underground chemical explosions at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism from underground explosions is of great importance to underground explosion monitoring. To this end we perform full waveform source inversion of infrasound data collected from the SPE-6 experiment at distances from 300 m to 6 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each experiment, computing Green's functions through these atmospheric models, and subsequently inverting the observed data in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the experiment, we utilize the Weather Research and Forecasting - Data Assimilation (WRF-DA) modeling system to derive a unified atmospheric state model by combining Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) data and locally obtained sonde and surface weather observations collected at the time of the experiment. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite (TDAAPS). These models include 3-D variations in topography, temperature, pressure, and wind. We compare inversion results using the atmospheric models derived from the unified weather models versus previous modeling results and discuss how these differences affect computed source waveforms with respect to observed waveforms at various distances. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear
Inversion of GPS-measured coseismic displacements for source parameters of Taiwan earthquake
Lin, J. T.; Chang, W. L.; Hung, H. K.; Yu, W. C.
2016-12-01
We present a method for determining earthquake location, focal mechanism, and centroid moment tensor from coseismic surface displacements measured by daily and high-rate GPS. Unlike the commonly used dislocation model, in which the fault geometry is solved for nonlinearly, our method takes a point-source approach that evaluates these parameters robustly and efficiently without a priori fault information, and can thus provide constraints for subsequent finite-source modeling of fault slip. In this study, we focus on the resolving ability of GPS data for moderate (Mw = 6.0-7.0) earthquakes in Taiwan; four earthquakes were investigated in detail: the 27 March 2013 Nantou (Mw 6.0), the 2 June 2013 Nantou (Mw 6.3), the 31 October 2013 Ruisui (Mw 6.3), and the 31 March 2002 Hualien (ML 6.8) earthquakes. All these events were recorded by the Taiwan continuous GPS network with sampling rates of 30 s and 1 Hz; the Mw 6.3 Ruisui earthquake was additionally recorded by another local GPS network with a sampling rate of 20 Hz. Our inverted focal mechanisms for all these earthquakes are consistent with the results of GCMT and USGS, which evaluate source parameters using dynamic information from seismic waves. We also successfully resolved the source parameters of the Mw 6.3 Ruisui earthquake within only 10 seconds of the earthquake occurrence, demonstrating the potential of high-rate GPS data for earthquake early warning and real-time determination of earthquake source parameters.
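Because the static displacement field is linear in the moment-tensor components for a fixed point source, the inversion reduces to a single least-squares solve. The Green's function matrix below is a random stand-in, not an actual elastic half-space computation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Green's function matrix G: each row maps the 6 independent
# moment-tensor components to one GPS displacement component (static
# offsets from 3-component stations stacked into one data vector).
n_data = 45                       # e.g. 15 stations x 3 components
G = rng.normal(size=(n_data, 6))
m_true = np.array([1.0, -0.4, -0.6, 0.3, 0.2, -0.1])  # Mxx, Myy, Mzz, Mxy, Mxz, Myz
d = G @ m_true + rng.normal(0, 0.01, n_data)

# Linear in the moment tensor, so one least-squares solve recovers it
# without any prior fault geometry.
m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)

# Scalar moment from the tensor norm (one common convention):
M = np.array([[m_hat[0], m_hat[3], m_hat[4]],
              [m_hat[3], m_hat[1], m_hat[5]],
              [m_hat[4], m_hat[5], m_hat[2]]])
M0 = np.sqrt(np.sum(M * M) / 2.0)
```

The speed of this linear solve is what makes the 10-second determination reported above plausible.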
Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.
2013-05-01
We introduce a novel approach for imaging earthquake dynamics from ground motion records, based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, embedded in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data at periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and focal mechanism, we have introduced a statistical approach that generates a set of solution models such that the envelope of the corresponding synthetic waveforms explains the observed data as much as possible. We applied the method to the 2012 Mw 6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data. Some parameters found for the Zumpango earthquake are: Δτ = 30.2 ± 6.2 MPa, Er = 0.68 ± 0.36 × 10^15 J, G = 1.74 ± 0.44 × 10^15 J, η = 0.27 ± 0.11, Vr/Vs = 0.52 ± 0.09 and Mw = 6.64 ± 0.07; for the stress drop
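A minimal genetic-algorithm inversion loop (selection, crossover, mutation) can be sketched as follows; a toy misfit replaces the forward staggered-grid split-node rupture simulation, and all parameter names, values and bounds are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy misfit: normalized distance of a candidate parameter vector from
# "true" parameters (stand-ins for stress drop and friction terms); a
# real inversion would run a forward dynamic-rupture solver per candidate.
true_params = np.array([30.0, 0.8, 0.27])
bounds = np.array([[0.0, 60.0], [0.0, 2.0], [0.0, 1.0]])

def misfit(p):
    return float(np.sum(((p - true_params) / (bounds[:, 1] - bounds[:, 0])) ** 2))

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))
for generation in range(120):
    scores = np.array([misfit(p) for p in pop])
    elite = pop[np.argsort(scores)[:10]]              # selection
    parents = elite[rng.integers(0, 10, size=(40, 2))]
    mask = rng.random((40, 3)) < 0.5                  # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += rng.normal(0, 0.02, pop.shape) * (bounds[:, 1] - bounds[:, 0])  # mutation
    pop = np.clip(pop, bounds[:, 0], bounds[:, 1])

best = pop[np.argmin([misfit(p) for p in pop])]
```

Since each misfit evaluation is an independent forward simulation, the population evaluates naturally in parallel, which is the point of the parallel GA.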
Laser-electron Compton interaction in plasma channels
International Nuclear Information System (INIS)
Pogorelsky, I.V.; Ben-Zvi, I.; Hirose, T.
1998-10-01
A concept for a high-intensity femtosecond laser synchrotron source (LSS) is based on Compton backscattering of focused electron and laser beams. The short Rayleigh length of the focused laser beam limits the interaction length to a few picoseconds. However, the technology of the high-repetition-rate, high-average-power picosecond lasers required for high-throughput LSS applications is not yet developed. Another problem associated with picosecond laser pulses is the undesirable nonlinear effects that occur when the laser photons are concentrated in a short time interval. To avoid nonlinear Compton scattering, the laser beam has to be split, and the required hard radiation flux accumulated over a number of consecutive interactions, which complicates the LSS design. In order to relieve these technological constraints and achieve a practically feasible high-power laser synchrotron source, the authors propose to confine the laser-electron interaction region in an extended plasma channel. This approach permits the use of nanosecond laser pulses instead of picosecond pulses, which helps to avoid the nonlinear Compton scattering regime and allows the use of the already existing technology of high-repetition-rate TEA CO2 lasers operating at atmospheric pressure. They demonstrate the advantages of the channeled LSS approach with the example of a prospective polarized positron source for the Japan Linear Collider.
Spectral Evolution of Synchrotron and Inverse Compton Emission in ...
Indian Academy of Sciences (India)
emission peaks in the optical band (e.g., Nieppola et al. 2006). In order to understand the evolution of synchrotron and IC spectra of BL Lac objects, X-ray spectral analysis with XMM-Newton X-ray observations of PKS 2155–304 and S5 0716+714 (see Zhang 2008, 2010 for details) was performed. Here, the results.
Relativistic inverse Compton scattering of photons from the early universe.
Malu, Siddharth; Datta, Abhirup; Colafrancesco, Sergio; Marchegiani, Paolo; Subrahmanyan, Ravi; Narasimha, D; Wieringa, Mark H
2017-12-05
Electrons at relativistic speeds, diffusing in magnetic fields, cause copious emission at radio frequencies in both clusters of galaxies and radio galaxies, through the non-thermal emission mechanism called synchrotron radiation. However, the total power radiated through this mechanism is ill constrained, as the lower limit of the electron energy distribution, or low-energy cutoff, for radio emission in galaxy clusters and radio galaxies has not yet been determined. This lower limit, parametrized by the lower limit of the electron momentum, p_min, is critical for estimating the total energetics of non-thermal electrons produced by cluster mergers or injected by radio galaxy jets, which impacts the formation of large-scale structure in the universe, as well as the evolution of local structures inside galaxy clusters. The total pressure due to the relativistic, non-thermal population of electrons can be measured using the Sunyaev-Zel'dovich effect and is critically dependent on p_min, making the measurement of this non-thermal pressure a promising technique for estimating the electron low-energy cutoff. We present here the first unambiguous detection of this Sunyaev-Zel'dovich effect for a non-thermal population of electrons in a radio galaxy jet/lobe, located at a significant distance from the center of the Bullet cluster of galaxies.
X-ray Compton line scan tomography
Energy Technology Data Exchange (ETDEWEB)
Kupsch, Andreas; Lange, Axel; Jaenisch, Gerd-Ruediger [Bundesanstalt fuer Materialforschung und -pruefung (BAM), Berlin (Germany). Fachgruppe 8.5 - Mikro-ZfP; Hentschel, Manfred P. [Technische Univ. Berlin (Germany); Kardjilov, Nikolay; Markoetter, Henning; Hilger, Andre; Manke, Ingo [Helmholtz-Zentrum Berlin (HZB) (Germany); Toetzke, Christian [Potsdam Univ. (Germany)
2015-07-01
The potential of incoherent X-ray scattering (Compton) computed tomography (CT) is investigated. Imaging materials of very different atomic number or density at once is a perpetual challenge for X-ray tomography and radiography. In a basic laboratory set-up, simultaneous perpendicular Compton scattering and direct-beam attenuation tomography are conducted by single-channel photon-counting line scans. This results in asymmetric distortions of the projection profiles of the scattering CT data set. In a first approach, correcting the Compton scattering data by taking advantage of rotational symmetry yields tomograms without major geometric artefacts. A cylindrical sample composed of PE, PA, PVC, glass and wood demonstrates similar Compton contrast for all the substances, while the conventional absorption tomogram only reveals the two materials of higher atomic number. Comparison to neutron tomography reveals astonishing similarities, except for the glass component (without hydrogen). Therefore, Compton CT offers the potential to replace neutron tomography, which requires much more effort.
Bauwens, Maite; Stavrakou, Trissevgeni; Müller, Jean François; De Smedt, Isabelle; Van Roozendael, Michel; Van Der Werf, Guido R.; Wiedinmyer, Christine; Kaiser, Johannes W.; Sindelarova, Katerina; Guenther, Alex
2016-01-01
As formaldehyde (HCHO) is a high-yield product in the oxidation of most volatile organic compounds (VOCs) emitted by fires, vegetation, and anthropogenic activities, satellite observations of HCHO are well-suited to inform us on the spatial and temporal variability of the underlying VOC sources. The
Spin and orbital magnetisation densities determined by Compton scattering of photons
International Nuclear Information System (INIS)
Collins, S.P.; Laundy, D.; Cooper, M.J.; Lovesey, S.W.; Uppsala Univ.
1990-03-01
Compton scattering of a circularly polarized photon beam is shown to provide direct information on orbital and spin magnetisation densities. Experiments are reported which demonstrate the feasibility of the method by correctly predicting the ratio of spin and orbital magnetisation components in iron and cobalt. A partially polarised beam of 45 keV photons from the Daresbury Synchrotron Radiation Source produces charge-magnetic interference scattering which is measured by a field-difference method. Theory shows that the interference cross section contains the Compton profile of polarised electrons modulated by a structure factor which is a weighted sum of spin and orbital magnetisations. In particular, the scattering geometry for which the structure factor vanishes yields a unique value for the ratio of the magnetisation densities. Compton scattering, being an incoherent process, provides data on total unit cell magnetisations which can be directly compared with bulk data. In this respect, Compton scattering complements magnetic neutron and photon Bragg diffraction. (author)
Nested atmospheric inversion for the terrestrial carbon sources and sinks in China
Directory of Open Access Journals (Sweden)
F. Jiang
2013-08-01
Full Text Available In this study, we establish a nested atmospheric inversion system with a focus on China using the Bayesian method. The global surface is separated into 43 regions based on the 22 TransCom large regions, with 13 small regions in China. Monthly CO2 concentrations from 130 GlobalView sites and 3 additional China sites are used in this system. The core component of this system is an atmospheric transport matrix, which is created using the TM5 model with a horizontal resolution of 3° × 2°. The net carbon fluxes over the 43 global land and ocean regions are inverted for the period from 2002 to 2008. The inverted global terrestrial carbon sinks mainly occur in boreal Asia, South and Southeast Asia, eastern America and southern South America. Most areas of China appear to be carbon sinks, with the strongest sinks located in Northeast China. From 2002 to 2008, the global terrestrial carbon sink shows an increasing trend, with the lowest carbon sink in 2002. The inter-annual variation (IAV) of the land sinks shows remarkable correlation with the El Niño Southern Oscillation (ENSO). The terrestrial carbon sinks in China also show an increasing trend. However, the IAV in China is not the same as that of the globe: there is a relatively stronger land sink in 2002, the lowest sink in 2006, and the strongest sink in 2007 in China. This IAV can be reasonably explained by the IAVs of temperature and precipitation in China. The mean global and China terrestrial carbon sinks over the period 2002–2008 are −3.20 ± 0.63 and −0.28 ± 0.18 PgC yr−1, respectively. Considering the carbon emissions in the form of reactive biogenic volatile organic compounds (BVOCs) and from the import of wood and food, we further estimate that China's land sink is about −0.31 PgC yr−1.
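The Bayesian synthesis inversion underlying such systems can be sketched with the standard Gaussian update; the transport matrix, prior covariance and error statistics below are synthetic assumptions, not the TM5-based setup of the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical transport matrix K mapping regional fluxes (n_reg) to
# monthly CO2 concentration observations (n_obs).
n_obs, n_reg = 120, 8
K = rng.uniform(0.0, 1.0, size=(n_obs, n_reg))
x_true = rng.normal(0.0, 1.0, n_reg)           # "true" net fluxes
y = K @ x_true + rng.normal(0.0, 0.1, n_obs)

x_prior = np.zeros(n_reg)
B = np.eye(n_reg) * 4.0                        # prior flux covariance
R = np.eye(n_obs) * 0.1**2                     # observation-error covariance

# Standard Bayesian (Gaussian) update used in synthesis inversions:
S = K @ B @ K.T + R
gain = B @ K.T @ np.linalg.inv(S)
x_post = x_prior + gain @ (y - K @ x_prior)
A_post = B - gain @ K @ B                      # posterior covariance
```

The diagonal of the posterior covariance gives the ± uncertainties on the regional sinks, analogous to the ±0.18 PgC yr−1 quoted for China.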
Arnold, Tim; Manning, Alistair; Li, Shanlan; Kim, Jooil; Park, Sunyoung; Muhle, Jens; Weiss, Ray
2017-04-01
The fluorinated species carbon tetrafluoride (CF4; PFC-14), nitrogen trifluoride (NF3) and trifluoromethane (CHF3; HFC-23) are potent greenhouse gases with 100-year global warming potentials of 6,630, 16,100 and 12,400, respectively. Unlike the majority of CFC replacements, which are emitted from fugitive and mobile emission sources, these gases are mostly emitted from large single point sources: semiconductor manufacturing facilities (all three), aluminium smelting plants (CF4) and chlorodifluoromethane (HCFC-22) factories (HFC-23). In this work we show that atmospheric measurements can serve as a basis for calculating emissions of these gases and for highlighting emission 'hotspots'. We use measurements from one of the Advanced Global Atmospheric Gases Experiment (AGAGE) long-term monitoring sites, at Gosan on Jeju Island in the Republic of Korea. This site measures CF4, NF3 and HFC-23, alongside a suite of greenhouse and stratospheric ozone-depleting gases, every two hours using automated in situ gas chromatography-mass spectrometry instrumentation. We couple each measurement to an analysis of air history using the regional atmospheric transport model NAME (Numerical Atmospheric dispersion Modelling Environment), driven by 3D meteorology from the Met Office's Unified Model, and use a Bayesian inverse method (InTEM - Inversion Technique for Emission Modelling) to calculate yearly emission changes over seven years between 2008 and 2015. We show that our 'top-down' emission estimates for NF3 and CF4 are significantly larger than the 'bottom-up' estimates in the EDGAR emissions inventory (edgar.jrc.ec.europa.eu). For example, we calculate South Korean emissions of CF4 in 2010 to be 0.29 ± 0.04 Gg/yr, significantly larger than the EDGAR prior emissions of 0.07 Gg/yr. Further, inversions for several separate years indicate that emission hotspots can be found without prior spatial information. At present these gases make a small contribution to global radiative forcing; however, given
Recent results from a Si/CdTe semiconductor Compton telescope
International Nuclear Information System (INIS)
Tanaka, Takaaki; Watanabe, Shin; Takeda, Shin'ichiro; Oonuki, Kousuke; Mitani, Takefumi; Nakazawa, Kazuhiro; Takashima, Takeshi; Takahashi, Tadayuki; Tajima, Hiroyasu; Sawamoto, Naoyuki; Fukazawa, Yasushi; Nomachi, Masaharu
2006-01-01
We are developing a Compton telescope based on high-resolution Si and CdTe detectors for astrophysical observations in the sub-MeV/MeV gamma-ray region. Recently, we constructed a prototype Compton telescope consisting of six layers of double-sided Si strip detectors (DSSDs) and CdTe pixel detectors to demonstrate the basic performance of this new technology. By irradiating the detector with gamma rays from radioisotope sources, we have succeeded in Compton reconstruction of images and spectra. The obtained angular resolution is 3.9° (FWHM) at 511 keV, and the energy resolution is 14 keV (FWHM) at the same energy. In addition to the conventional Compton reconstruction, i.e., drawing cones in the sky, we also demonstrated a full reconstruction by tracking Compton recoil electrons using the signals detected in successive Si layers. By irradiating the detector with a 137Cs source, we successfully obtained an image and a spectrum of the 662 keV line emission with this method. As a next step, development of larger DSSDs with a size of 4 cm × 4 cm is under way to improve the effective area of the Compton telescope. We are also developing a new low-noise analog ASIC to handle the increasing number of channels. Initial results from these two new technologies are presented in this paper as well.
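The conventional cone reconstruction rests on Compton kinematics relating the two deposited energies to the scattering angle; a minimal sketch (energies in keV, the example deposit split is illustrative):

```python
import math

MEC2 = 511.0  # electron rest energy in keV

def compton_cone_angle(e1_kev, e2_kev):
    """Opening angle (degrees) of the Compton cone for a photon that
    deposits e1 in the scatterer (recoil electron) and e2 in the
    absorber (scattered photon), from the Compton formula
    cos(theta) = 1 - me*c^2 * (1/E2 - 1/(E1 + E2))."""
    cos_theta = 1.0 - MEC2 * (1.0 / e2_kev - 1.0 / (e1_kev + e2_kev))
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies not kinematically consistent")
    return math.degrees(math.acos(cos_theta))

# A 662 keV (137Cs) photon depositing 200 keV in the Si layers and
# 462 keV in the CdTe absorber:
angle = compton_cone_angle(200.0, 462.0)
```

Each valid Si/CdTe event pair thus yields one cone in the sky; intersecting many cones produces the image, and electron tracking reduces each cone to an arc.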
Mount St. Helens: Controlled-source audio-frequency magnetotelluric (CSAMT) data and inversions
Wynn, Jeff; Pierce, Herbert A.
2015-01-01
This report describes a series of geoelectrical soundings carried out on and near Mount St. Helens volcano, Washington, in 2010–2011. These soundings used a controlled-source audio-frequency magnetotelluric (CSAMT) approach (Zonge and Hughes, 1991; Simpson and Bahr, 2005). We chose CSAMT for logistical reasons: It can be deployed by helicopter, has an effective depth of penetration of as much as 1 kilometer, and requires less wire than a Schlumberger sounding.
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture and the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and we then explore the joint a posteriori probability density function associated with the cost function around that minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with random exploration of the space (the basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. Numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to
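The global exploration step — random perturbations combined with deterministic local minimization (basin hopping) — can be sketched for a Brune-type spectral fit; the synthetic spectrum, noise level and starting point below are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import basinhopping

rng = np.random.default_rng(6)

# Generalized Brune spectrum: S(f) = omega0 / (1 + (f/fc)**gamma).
def model(f, omega0, fc, gamma):
    return omega0 / (1.0 + (f / fc) ** gamma)

f = np.logspace(-1, 1.5, 80)                  # 0.1 to ~31.6 Hz
true = (1e-4, 2.0, 2.0)                       # omega0, fc, gamma
obs = model(f, *true) * np.exp(rng.normal(0, 0.05, f.size))

# L2 misfit in log amplitude, as commonly used for displacement spectra.
def cost(p):
    omega0, fc, gamma = np.exp(p)             # log-parameters keep positivity
    return float(np.sum((np.log(model(f, omega0, fc, gamma)) - np.log(obs)) ** 2))

# Basin hopping: random jumps plus deterministic local minimization,
# searching for the absolute minimum of the cost function.
result = basinhopping(cost, x0=np.log([1e-3, 1.0, 1.5]), niter=50, seed=1)
omega0, fc, gamma = np.exp(result.x)
```

The a posteriori pdf described above would then be evaluated on a grid around `result.x` to obtain variances and the parameter correlation matrix.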
International Nuclear Information System (INIS)
Li Qi; Zhang Dajun; Chen Dengyuan
2010-01-01
N-soliton solutions of the hierarchy of non-isospectral mKdV equation with self-consistent sources and the hierarchy of non-isospectral sine-Gordon equation with self-consistent sources are obtained via the inverse scattering transform. (general)
International Nuclear Information System (INIS)
Antoniassi, M.; Conceição, A.L.C.; Poletti, M.E.
2012-01-01
Electron densities of 33 samples of normal (adipose and fibroglandular) and neoplastic (benign and malignant) human breast tissues were determined from Compton scattering data using a monochromatic synchrotron radiation source and an energy-dispersive detector. The area of the Compton peaks was used to determine the electron densities of the samples. Adipose tissue exhibits the lowest values of electron density, whereas malignant tissue exhibits the highest. The relationship with their histology is discussed. Comparison with previous results showed differences smaller than 4%. - Highlights: ► Electron density of normal and neoplastic breast tissues was measured using Compton scattering. ► Monochromatic synchrotron radiation was used to obtain the Compton scattering data. ► The area of Compton peaks was used to determine the electron densities of samples. ► Adipose tissue shows the lowest electron density values whereas the malignant tissue the highest. ► Comparison with previous results showed differences smaller than 4%.
[Inversion of results upon using an "integral index" from an ostensibly authoritative source].
Galkin, A V
2012-01-01
General results and conclusions of Nesterenko et al. [Biofizika 57(4)] have been distorted by uncritically applying an invalid "unifying formula" from a recent monograph [ISBN 978-5-02-035593-4]. Readers are asked to ignore the Russian publication but take the fully revised English version [Biophysics 57(4)] or contact the authors (tv-nesterenko@mail.ru, ubflab@ibp.ru). Here I briefly show the basic defects in the quasi-quantitative means of data analysis offered in that book, and mention some more problems regarding erroneous information in ostensibly authoritative sources.
Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.
2016-12-01
The increasing seismic activity in regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. The source mechanisms and triggering stresses of these induced earthquakes are of great importance for understanding the physics of the seismic processes in reservoirs and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes (Mw Institutions for Seismology (IRIS) and KNSN, and widely felt by people in Kuwait. These earthquakes happen repeatedly in the same locations close to the oil/gas fields in Kuwait. The earthquakes are generally small (Mw stress of these earthquakes was calculated based on the source mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that most likely these local earthquakes occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to cause damage to local structures built without seismic design criteria.
Quantitative Compton suppression spectrometry at elevated counting rates
International Nuclear Information System (INIS)
Westphal, G.P.; Joestl, K.; Schroeder, P.; Lauster, R.; Hausch, E.
1999-01-01
For quantitative Compton suppression spectrometry, the decrease of coincidence efficiency with counting rate should be made negligible to avoid a virtual increase of the relative peak areas of coincident isomeric transitions with counting rate. To that end, a separate amplifier and discriminator has been used for each of the eight segments of the active shield of a new well-type Compton suppression spectrometer, together with an optimized, minimum-dead-time design of the anticoincidence logic circuitry. Chance coincidence losses in the Compton suppression spectrometer are corrected instrumentally by comparing the chance coincidence rate to the counting rate of the germanium detector in a pulse-counting Busy circuit (G.P. Westphal, J. Rad. Chem. 179 (1994) 55), which is combined with the spectrometer's LFC counting-loss correction system. The normally unobservable chance coincidence rate is reconstructed from the rates of the germanium detector and the scintillation detector in an auxiliary coincidence unit, after destruction of true coincidences by delaying one of the coincidence partners. Quantitative system response has been tested in two-source measurements with a fixed reference source of 60Co at 14 kc/s and various samples of 137Cs, up to aggregate counting rates of 180 kc/s for the well-type detector and more than 1400 kc/s for the BGO shield. In these measurements, the net peak areas of the 1173.3 keV line of 60Co remained constant at typical values of 37 000 with and 95 000 without Compton suppression, with maximum deviations from the average of less than 1.5%.
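The chance-coincidence correction rests on the standard relation R_chance ≈ 2·τ·R1·R2 for two uncorrelated detector rates within a resolving time τ; a minimal sketch with illustrative numbers (not the spectrometer's actual parameters):

```python
# Chance (random) coincidences between the Ge detector and the shield
# occur at a rate R_chance ≈ 2 * tau * R_ge * R_shield for a coincidence
# resolving time tau; all values below are hypothetical.
tau = 1e-6            # coincidence resolving time, s
r_ge = 20_000.0       # germanium detector counting rate, 1/s
r_shield = 200_000.0  # shield counting rate, 1/s

r_chance = 2.0 * tau * r_ge * r_shield   # chance coincidence rate, 1/s
loss_fraction = r_chance / r_ge          # fraction of Ge events vetoed by chance

def correct_peak_area(measured_area, loss_fraction):
    """Restore a suppressed-spectrum peak area for chance vetoes."""
    return measured_area / (1.0 - loss_fraction)
```

Delaying one coincidence partner, as described above, destroys the true coincidences so that only this chance rate is counted and the correction can be applied instrumentally.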
A simple scanner for Compton tomography
Cesareo, R; Brunetti, A; Golosio, B; Castellano, A
2002-01-01
A first-generation CT scanner was designed and constructed to carry out Compton imaging. The scanner is composed of an 80 kV, 5 mA X-ray tube and a NaI(Tl) X-ray detector; the tube is strongly collimated, generating an X-ray beam of 2 mm diameter, whilst the detector is not collimated, so that it collects Compton photons from the whole irradiated cylinder. The performance of the equipment was tested by acquiring simultaneous transmission and Compton images.
Source inversion of the 1570 Ferrara earthquake and definitive diversion of the Po River (Italy)
Sirovich, L.; Pettenati, F.
2015-08-01
An 11-parameter, kinematic-function (KF) model was used to retrieve the approximate geometrical and kinematic characteristics of the fault source of the 1570 Mw 5.8 Ferrara earthquake in the Po Plain, including the double-couple orientation (strike angle 127 ± 16°, dip 28 ± 7°, and rake 77 ± 16°). These results are compatible with either the outermost thrust fronts of the northern Apennines, which are buried beneath the Po Plain's alluvial deposits, or the blind crustal-scale thrust. The 1570 event developed to the ENE of the two main shocks on 20 May 2012 (M 6.1) and 29 May 2012 (M 5.9). The three earthquakes had similar kinematics and are found 20-30 km from each other en echelon in the buried chain. Geomorphological and historical evidence exist which suggest the following: (i) the long-lasting uplift of the buried Apenninic front shifted the central part of the course of the Po River approximately 20 km northward in historical times and (ii) the 1570 earthquake marked the definitive diversion of the final part of the Po River away from Ferrara and the closure of the Po delta 40 km south of its present position.
Monsalve-Jaramillo, Hugo; Valencia-Mina, William; Cano-Saldaña, Leonardo; Vargas, Carlos A.
2018-05-01
Source parameters of four earthquakes located within the Wadati-Benioff zone of the Nazca plate subducting beneath the South American plate in Colombia were determined. The seismic moments for these events were recalculated, and their approximate equivalent rupture areas, slip distributions, and stress drops were estimated. The source parameters were obtained by deconvolving multiple events through teleseismic analysis of body waves recorded at long-period stations and by simultaneous inversion of P and SH waves. The calculated source time functions for these events showed different stages, suggesting that these earthquakes can reasonably be thought of as being composed of two subevents. Even though two of the overall focal mechanisms obtained yielded results similar to those reported in the CMT catalogue, the two other mechanisms showed a clear difference from those officially reported. Despite this, it is appropriate to mention that the mechanisms inverted in this work agree well with the expected orientation of faulting at that depth, as well as with the waveforms they are expected to produce. In some of the solutions, one of the two subevents exhibited a focal mechanism considerably different from the total earthquake mechanism; this could be interpreted as the result of a slight deviation from the overall motion due to the complex stress field, as well as the possibility of a combination of different sources of energy release analogous to those that may occur in deeper earthquakes. In those cases, the subevents with a focal mechanism very different from the total earthquake mechanism had little contribution to the final solution and thus little contribution to the total amount of energy released.
Alden, Caroline B.; Ghosh, Subhomoy; Coburn, Sean; Sweeney, Colm; Karion, Anna; Wright, Robert; Coddington, Ian; Rieker, Gregory B.; Prasad, Kuldeep
2018-03-01
Advances in natural gas extraction technology have led to increased activity in the production and transport sectors in the United States and, as a consequence, an increased need for reliable monitoring of methane leaks to the atmosphere. We present a statistical methodology in combination with an observing system for the detection and attribution of fugitive emissions of methane from distributed potential source location landscapes such as natural gas production sites. We measure long (> 500 m), integrated open-path concentrations of atmospheric methane using a dual frequency comb spectrometer and combine measurements with an atmospheric transport model to infer leak locations and strengths using a novel statistical method, the non-zero minimum bootstrap (NZMB). The new statistical method allows us to determine whether the empirical distribution of possible source strengths for a given location excludes zero. Using this information, we identify leaking source locations (i.e., natural gas wells) through rejection of the null hypothesis that the source is not leaking. The method is tested with a series of synthetic data inversions with varying measurement density and varying levels of model-data mismatch. It is also tested with field observations of (1) a non-leaking source location and (2) a source location where a controlled emission of 3.1 × 10⁻⁵ kg s⁻¹ of methane gas is released over a period of several hours. This series of synthetic data tests and outdoor field observations using a controlled methane release demonstrates the viability of the approach for the detection and sizing of very small leaks of methane across large distances (4+ km² in synthetic tests). The field tests demonstrate the ability to attribute small atmospheric enhancements of 17 ppb to the emitting source location against a background of combined atmospheric (e.g., background methane variability) and measurement uncertainty of 5 ppb (1σ), when measurements are averaged over 2 min.
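The NZMB decision rule can be caricatured in a few lines: bootstrap a summary statistic of the estimated source strengths for a location and reject the "not leaking" null only when the lower tail of the bootstrap distribution excludes zero. This is an illustrative sketch of the idea, not the authors' exact algorithm (the function name, the choice of statistic, and the thresholds are ours):

```python
import random

def nzmb_leak_detected(strength_samples, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap the mean of an ensemble of estimated source strengths and
    declare a leak when the lower alpha-quantile of the bootstrap
    distribution excludes zero (illustrative NZMB-style test)."""
    rng = random.Random(seed)
    n = len(strength_samples)
    means = sorted(
        sum(rng.choice(strength_samples) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lower = means[int(alpha * n_boot)]  # empirical lower quantile
    return lower > 0.0

# A clearly positive ensemble vs. one scattered around zero:
print(nzmb_leak_detected([3.0, 2.5, 3.4, 2.9, 3.1, 2.7]))     # -> True
print(nzmb_leak_detected([0.2, -0.3, 0.1, -0.1, 0.0, 0.15]))  # -> False
```

The appeal of a resampling test here is that it needs no distributional assumption on the inversion's strength estimates, matching the paper's emphasis on whether the empirical distribution excludes zero.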
International Nuclear Information System (INIS)
Kalmykov, S Y; Shadwick, B A; Davoine, X; Ghebregziabher, I; Lehe, R; Lifschitz, A F
2016-01-01
Propagating a relativistically intense, negatively chirped laser pulse (bandwidth > 150 nm) in a plasma channel makes it possible to generate background-free, comb-like electron beams—sequences of synchronized bunches with a low phase-space volume and controlled energy spacing. The tail of the pulse, confined in the accelerator cavity (an electron density 'bubble'), experiences periodic focusing, while the head, which is the most intense portion of the pulse, steadily self-guides. Oscillations of the cavity size cause periodic injection of electrons from the ambient plasma, creating an electron energy comb with the number of components, their mean energy, and energy spacing dependent on the channel radius and pulse length. These customizable electron beams enable the design of a tunable, all-optical source of pulsed, polychromatic γ-rays using the mechanism of inverse Thomson scattering, with up to ∼10⁻⁵ conversion efficiency from the drive pulse in the electron accelerator to the γ-ray beam. Such a source may radiate ∼10⁷ quasi-monochromatic photons per shot into a microsteradian-scale cone. The photon energy is distributed among several distinct bands, each having sub-30% energy spread, with the highest energy at 12.5 MeV. (paper)
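The quoted 12.5 MeV upper band follows from the standard inverse Thomson/Compton kinematics in the linear, low-recoil limit, E_γ ≈ 4γ²E_L on axis. A quick sketch of the scaling, assuming (the abstract does not say) an 800 nm Ti:sapphire-class drive laser:

```python
import math

MC2_MEV = 0.511  # electron rest energy, MeV

def ics_photon_energy_mev(gamma, laser_ev, theta=0.0):
    """On-axis/near-axis scattered photon energy in the linear, low-recoil
    limit: E_gamma ~= 4 * gamma^2 * E_L / (1 + (gamma*theta)^2), in MeV."""
    return 4.0 * gamma**2 * laser_ev / (1.0 + (gamma * theta) ** 2) / 1e6

# Electron energy needed for a 12.5 MeV backscattered photon,
# assuming an 800 nm drive laser (1239.84 eV*nm / wavelength):
laser_ev = 1239.84 / 800.0
gamma = math.sqrt(12.5e6 / (4.0 * laser_ev))
print(gamma * MC2_MEV)  # ~726 MeV electron energy
```

The quadratic dependence on γ is what makes the comb of electron energies map into distinct γ-ray bands.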
Lin, Hongxiang; Azuma, Takashi; Qu, Xiaolei; Takagi, Shu
2017-03-01
In this work, we construct a multi-frequency accelerating strategy for the contrast source inversion (CSI) method using pulse data in the time domain. CSI is a frequency-domain inversion method for ultrasound waveform tomography that does not require a forward solver during reconstruction. Several prior studies show that the CSI method converges well and is accurate in the low-center-frequency regime. In contrast, using high-center-frequency data leads to high-resolution reconstruction but slow convergence on large grids. Our objective is to take full advantage of all the low-frequency components of the pulse data together with the high-center-frequency data measured by the diagnostic device. First, we process the raw data in the frequency domain. The multi-frequency accelerating strategy then restarts CSI at the current frequency using the last iteration result obtained at the lower frequency. The merit of the strategy is that the computational burden decreases during the first few iterations, because the low-frequency components of the dataset are computed on a coarse grid, assuming a fixed number of points per wavelength. In the numerical tests, the pulse data were generated by the k-Wave simulator and processed to suit the CSI method. We investigate the performance of the multi-frequency and single-frequency reconstructions and conclude that the multi-frequency accelerating strategy significantly enhances the quality of the reconstructed image while simultaneously reducing the average computational time per iteration step.
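The cost saving comes from the "fixed points per wavelength" rule: lower-frequency stages run on proportionally coarser grids. A minimal sketch of such a coarse-to-fine schedule (the domain size, sound speed, and points-per-wavelength figure are assumptions, not values from the abstract):

```python
def grid_points(domain_m, freq_hz, c=1500.0, ppw=10):
    """1-D grid size for a fixed number of points per wavelength (ppw):
    wavelength = c / f, spacing = wavelength / ppw."""
    wavelength = c / freq_hz
    return round(domain_m * ppw / wavelength) + 1

# Coarse-to-fine schedule for a 0.15 m domain in soft tissue (c ~ 1500 m/s),
# sweeping low-frequency pulse components up toward a 1 MHz carrier; each
# stage would warm-start CSI from the previous stage's contrast estimate
# interpolated onto the finer grid:
for f in (0.25e6, 0.5e6, 1.0e6):
    print(f, grid_points(0.15, f))  # 251, 501, 1001 points per dimension
```

Since cost grows with the square or cube of the per-dimension grid size in 2-D/3-D, running the early iterations at 0.25 MHz instead of 1 MHz is markedly cheaper.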
Adriano, Bruno; Fujii, Yushiro; Koshimura, Shunichi; Mas, Erick; Ruiz-Angulo, Angel; Estrada, Miguel
2018-01-01
On September 8, 2017 (UTC), a normal-fault earthquake occurred 87 km off the southeast coast of Mexico. This earthquake generated a tsunami that was recorded at coastal tide gauge and offshore buoy stations. First, we conducted a numerical tsunami simulation using a single-fault model to understand the tsunami characteristics near the rupture area, focusing on the nearby tide gauge stations. Second, the tsunami source of this event was estimated from inversion of tsunami waveforms recorded at six coastal stations and three buoys located in the deep ocean. Based on the aftershock distribution within 1 day of the main shock, the fault plane orientation had a northeast dip direction (strike = 320°, dip = 77°, and rake = -92°). The results of the tsunami waveform inversion revealed that the fault area was 240 km × 90 km in size, with most of the largest slip occurring on the middle and deepest segments of the fault. The maximum slip was 6.03 m, from a 30 km × 30 km segment that was 64.82 km deep at the center of the fault area. The estimated slip distribution showed that the main asperity was at the center of the fault area; a second asperity with an average slip of 5.5 m was found on the northwest-most segments. The estimated slip distribution yielded a seismic moment of 2.9 × 10^{21} N·m (Mw = 8.24), calculated assuming an average rigidity of 7 × 10^{10} N/m^2.
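The reported magnitude is consistent with the standard moment-magnitude relation Mw = (2/3)(log₁₀M₀ − 9.1) for M₀ in N·m, and the fault-averaged slip follows from M₀ = μAD. A quick check using only numbers quoted in the abstract:

```python
import math

def moment_magnitude(m0_nm):
    """Moment magnitude from seismic moment in N*m:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

def average_slip(m0_nm, rigidity, area_m2):
    """Fault-averaged slip D from M0 = mu * A * D."""
    return m0_nm / (rigidity * area_m2)

m0 = 2.9e21                                   # N*m, from the inversion
print(round(moment_magnitude(m0), 2))         # -> 8.24, matching the abstract
print(average_slip(m0, 7e10, 240e3 * 90e3))   # ~1.9 m slip averaged over the fault
```

The ~1.9 m fault-wide average is naturally far below the 6.03 m peak slip concentrated in the central asperity.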
Deeply virtual Compton scattering. Results and future
International Nuclear Information System (INIS)
Nowak, W.D.
2005-03-01
Access to generalised parton distributions (GPDs) through deeply virtual Compton scattering (DVCS) is briefly described. Presently available experimental results on DVCS are summarized in conjunction with plans for future measurements. (orig.)
Computer control in a Compton scattering spectrometer
International Nuclear Information System (INIS)
Cui Ningzhuo; Chen Tao; Gong Zhufang; Yang Baozhong; Mo Haiding; Hua Wei; Bian Zuhe
1995-01-01
The authors describe the hardware and software for automatic computer control of calibration and data acquisition in a Compton scattering spectrometer, which consists of an HPGe detector, amplifiers, and an MCA.
Neutron Compton scattering from selectively deuterated acetanilide
Wanderlingh, U. N.; Fielding, A. L.; Middendorf, H. D.
With the aim of developing the application of neutron Compton scattering (NCS) to molecular systems of biophysical interest, we are using the Compton spectrometer EVS at ISIS to characterize the momentum distribution of protons in peptide groups. In this contribution we present NCS measurements of the recoil peak (Compton profile) due to the amide proton in otherwise fully deuterated acetanilide (ACN), a widely studied model system for H-bonding and energy transfer in biomolecules. We obtain values for the average width of the potential well of the amide proton and its mean kinetic energy. Deviations from the Gaussian form of the Compton profile, analyzed on the basis of an expansion due to Sears, provide data relating to the Laplacian of the proton potential.
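Within the impulse approximation, a Gaussian Compton profile of standard deviation σ (the width of the proton momentum distribution) implies a mean kinetic energy ⟨Ek⟩ = 3ħ²σ²/2M. A minimal sketch of that relation; the σ value below is an illustrative figure typical of bound protons, not a number from this measurement:

```python
HBAR = 1.054571817e-34  # J*s
M_P = 1.67262192e-27    # proton mass, kg
EV = 1.602176634e-19    # J per eV

def mean_kinetic_energy_mev(sigma_inv_angstrom):
    """<Ek> = 3 * hbar^2 * sigma^2 / (2 M) for an isotropic Gaussian
    momentum distribution of width sigma (given in inverse angstroms);
    result in meV."""
    sigma = sigma_inv_angstrom * 1e10  # A^-1 -> m^-1
    e_joule = 3.0 * HBAR**2 * sigma**2 / (2.0 * M_P)
    return e_joule / EV * 1e3

# Assumed width sigma ~ 4.8 A^-1 (illustrative):
print(mean_kinetic_energy_mev(4.8))  # ~143 meV
```

This is the inversion the EVS analysis performs: a recoil-peak width measured in momentum space translates directly into the proton's mean kinetic energy and the stiffness of its potential well.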
Kharkov X-ray Generator Based On Compton Scattering
International Nuclear Information System (INIS)
Shcherbakov, A.; Zelinsky, A.; Mytsykov, A.; Gladkikh, P.; Karnaukhov, I.; Lapshin, V.; Telegin, Y.; Androsov, V.; Bulyak, E.; Botman, J.I.M.; Tatchyn, R.; Lebedev, A.
2004-01-01
Nowadays, X-ray sources based on storage rings with low beam energy and Compton scattering of intense laser beams are under development in several laboratories. An international cooperative project for an advanced X-ray source of this type at the Kharkov Institute of Physics and Technology (KIPT) is described, and the status of the project is reviewed. The design lattice of the storage ring and the calculated X-ray beam parameters are presented. The results of numerical simulations carried out for the proposed facility show that a peak spectral X-ray intensity of about 10^14 can be produced.
Colour coherence in deep inelastic Compton scattering
Energy Technology Data Exchange (ETDEWEB)
Lebedev, A.I.; Vazdik, J.A. (Lebedev Physical Inst., Academy of Sciences, Moscow (USSR))
1992-01-01
MC simulation of deep inelastic Compton scattering on the proton - both QED and QCD - was performed on the basis of the LUCIFER program for HERA energies. Charged hadron flow was calculated for string and independent fragmentation with different cuts on p_t and x. It is shown that interjet colour coherence leads, in the case of QCD Compton, to drag effects diminishing the hadron flow in the direction between the quark jet and the proton remnant jet. (orig.)
Image reconstruction from limited angle Compton camera data
International Nuclear Information System (INIS)
Tomitani, T.; Hirasawa, M.
2002-01-01
The Compton camera is used for imaging the directional distribution of γ rays in γ-ray telescopes for astrophysics, and for imaging radioisotope distributions in nuclear medicine without the need for collimators. The camera measures the integral of γ rays over a cone, so that some sort of inversion method is needed. Parra found an analytical inversion algorithm based on a spherical harmonics expansion of the projection data; his algorithm is applicable to the full set of projection data. In this paper, six possible reconstruction algorithms that allow image reconstruction from projections with a finite range of scattering angles are investigated. Four algorithms have instability problems and the other two are practical. However, the variance of the reconstructed image diverges in these two cases, so window functions are introduced with which the variance becomes finite at the cost of spatial resolution. These two algorithms are compared in terms of variance. The algorithm based on inversion of the summed back-projection is superior to the algorithm based on inversion of the summed projection. (author)
Development of compact Compton camera for 3D image reconstruction of radioactive contamination
Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.
2017-11-01
The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside FDNPS buildings are indispensable to execute decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is lower than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully observed 3D radiation images resulting from the two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.
INJECTION EFFICIENCY IN COMPTON RING NESTOR
Directory of Open Access Journals (Sweden)
P. I. Gladkikh
2017-12-01
Full Text Available NESTOR is a hard X-ray source under commissioning at NSC KIPT, based on Compton scattering of laser photons on relativistic electrons. The facility comprises the following components: a linear accelerator, a transport channel, a storage ring, and a laser-optical system. Electrons are stored in the storage ring at energies of 40-200 MeV. Inevitable alignment errors of the magnetic elements strongly affect the beam dynamics in the storage ring. These errors lead to a shift of the equilibrium orbit relative to the ideal one, and a significant shift could lead to loss of the beam on the physical apertures. The transverse sizes of the electron and laser beams are only a few tens of microns at the interaction point, so a shift of the electron beam there could greatly complicate operational adjustment of the storage ring without a sufficient beam position diagnostic system. This article presents simulation results on the efficiency of electron beam accumulation in the NESTOR storage ring, and examines the electron beam dynamics arising from alignment errors of the magnetic elements in the ring.
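The sensitivity to alignment errors can be gauged with the textbook statistical estimate for the rms closed-orbit distortion produced by N uncorrelated dipole kicks. All numbers below are illustrative assumptions, not NESTOR lattice values:

```python
import math

def rms_closed_orbit(beta_obs, betas, kick_rms, tune):
    """Expected rms closed-orbit distortion at a point with beta function
    beta_obs, from uncorrelated random kicks of rms strength kick_rms (rad)
    at locations with beta functions betas (standard statistical estimate):
    x_rms = sqrt(beta_obs * sum(beta_i)) * theta_rms
            / (2 * sqrt(2) * |sin(pi * nu)|)."""
    return (math.sqrt(beta_obs * sum(betas)) * kick_rms
            / (2.0 * math.sqrt(2.0) * abs(math.sin(math.pi * tune))))

# Hypothetical compact-ring numbers: 20 quadrupoles, beta ~ 2 m,
# 0.1 mrad rms kicks from ~0.1 mm misalignments, betatron tune 3.18:
print(rms_closed_orbit(2.0, [2.0] * 20, 1e-4, 3.18))  # ~0.6 mm rms orbit
```

An orbit distortion on the order of a millimeter against a tens-of-microns beam at the interaction point illustrates why orbit correction and beam-position diagnostics dominate the commissioning effort.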
New Compton densitometer for measuring pulmonary edema
Energy Technology Data Exchange (ETDEWEB)
Loo, B.W.; Goulding, F.S.; Simon, D.S.
1985-10-01
Pulmonary edema is the pathological increase of extravascular lung water found most often in patients with congestive heart failure and other critically ill patients who suffer from intravenous fluid overload. A non-invasive lung density monitor that is accurate, easily portable, safe and inexpensive is needed for clinical evaluation of pulmonary edema. Other researchers who have employed Compton scattering techniques generally used systems of extended size and detectors with poor energy resolution. This has resulted in significant systematic biases from multiply-scattered photons and larger errors in counting statistics at a given radiation dose to the patient. We are proposing a patented approach in which only backscattered photons are measured with a high-resolution HPGe detector in a compact system geometry. By proper design and a unique data extraction scheme, effects of the variable chest wall on lung density measurements are minimized. Preliminary test results indicate that with a radioactive source of under 30 GBq, it should be possible to make an accurate lung density measurement in one minute, with a risk of radiation exposure to the patient a thousand times smaller than that from a typical chest x-ray. The ability to make safe, frequent lung density measurements could be very helpful for monitoring the course of P.E. at the hospital bedside or outpatient clinics, and for evaluating the efficacy of therapy in clinical research. 6 refs., 5 figs.
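In the backscatter geometry the detected line energy is fixed by Compton kinematics, which is what makes a high-resolution HPGe detector effective at separating the backscatter peak from multiply-scattered background. A quick sketch, for an assumed 662 keV (137Cs) source; the abstract does not name the isotope:

```python
import math

def compton_scattered_energy(e_kev, theta_deg):
    """Photon energy after Compton scattering through angle theta:
    E' = E / (1 + (E / 511 keV) * (1 - cos(theta)))."""
    return e_kev / (1.0 + (e_kev / 511.0)
                    * (1.0 - math.cos(math.radians(theta_deg))))

# Backscatter (theta = 180 deg) from an assumed 662 keV source:
print(round(compton_scattered_energy(662.0, 180.0), 1))  # -> 184.3 keV
```

Photons arriving outside a narrow window around this energy have scattered more than once, so good energy resolution directly suppresses the multiple-scatter bias the abstract discusses.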
Luminosity optimization schemes in Compton experiments based on Fabry-Perot optical resonators
Directory of Open Access Journals (Sweden)
Alessandro Variola
2011-03-01
Full Text Available The luminosity of Compton x-ray and γ sources depends on the average current in the electron bunches, the energy of the laser pulses, and the geometry of the particle-bunch-to-laser-pulse collisions. To obtain high-power photon pulses, the pulses can be stacked in a passive optical resonator (Fabry-Perot cavity), especially when a high average flux is required. In this case, however, owing to the presence of the optical cavity mirrors, the electron bunches have to collide with the laser pulses at an angle, with a consequent luminosity decrease. In this article a crab-crossing scheme is proposed for Compton sources based on a laser amplified in a Fabry-Perot resonator, to eliminate the luminosity losses caused by the crossing angle, taking into account that in laser-electron collisions only the electron bunches can be tilted at the collision point. We report an analytical study of the crab-crossing scheme for Compton gamma sources. The analytical expression for the total yield of photons generated in Compton sources with the crab-crossing collision scheme is derived. The optimal tilt angle of the electron bunch was found to be equal to half of the crossing angle. At this crabbing angle, the maximal yield of scattered laser photons is attained, thanks to the maximization of the time spent by the laser pulse inside the electron bunch during the collision. Estimates for some Compton source projects are presented. Furthermore, some optical cavity configurations are analyzed and their luminosity calculated. As illustrated, the four-mirror two- or three-dimensional scheme is the most appropriate for Compton sources.
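The geometric penalty that crab crossing removes can be estimated with the usual Gaussian-overlap reduction factor for a crossing angle. This sketch and its numbers are illustrative assumptions, not values from the article:

```python
import math

def crossing_reduction(sig_z_e, sig_z_l, sig_x_e, sig_x_l, full_angle):
    """Geometric luminosity reduction for Gaussian electron bunches and
    laser pulses colliding at full crossing angle phi (hourglass neglected):
    R = 1 / sqrt(1 + ((sz_e^2 + sz_l^2) / (sx_e^2 + sx_l^2)) * tan^2(phi/2))."""
    t = math.tan(full_angle / 2.0)
    return 1.0 / math.sqrt(1.0 + (sig_z_e**2 + sig_z_l**2)
                           / (sig_x_e**2 + sig_x_l**2) * t * t)

# Illustrative numbers: 5 mm bunch/pulse lengths, 40 um transverse sizes,
# 2-degree full crossing angle:
phi = math.radians(2.0)
print(crossing_reduction(5e-3, 5e-3, 40e-6, 40e-6, phi))  # ~0.42
```

Even a 2° crossing angle costs more than half the luminosity for long bunches and tight focal spots, which is exactly the loss the half-angle tilt of the electron bunch is designed to recover.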
Status of Kharkov X-ray Generator based on Compton Scattering NESTOR
Zelinsky, A.; Androsov, V.P.; Bulyak, E.V.; Drebot, I.; Gladkikh, P.I.; Grevtsev, V.; Botman, J.I.M.; Ivashchenko, V.; Karnaukhov, I.M.; Lapshin, V.I.; Markov, V.; Mocheshnikov, N.; Mytsykov, A.; Peev, F.A.; Rezaev, A.; Shcherbakov, A.; Skomorkohov, V.; Skyrda, V.; Telegin, Y.; Trotsenko, V.; Tatchyn, R.; Lebedev, B.; Agafonov, A.V.
2004-01-01
Nowadays, X-ray sources based on a storage ring with low beam energy and Compton scattering of an intense laser beam are under development in several laboratories. This paper describes the state of the art in the development and construction of the cooperative Kharkov advanced X-ray source project NESTOR.
International Nuclear Information System (INIS)
Stein, W.A.
1991-01-01
Models for producing the large ultraviolet bump, low-energy X-rays, and the hypothesized F(ν) ∝ ν⁻¹ IR-to-X-ray continua of QSOs are investigated. Thermal Comptonization in a hot corona of an accretion disk appears to offer the best potential. However, under the energy input conditions in QSOs a corona will reach T above 100 million K. It must be optically thin, so as not to Comptonize the accretion disk's ultraviolet emission to an unacceptable extent; but then it cannot Comptonize a low-frequency source into an F(ν) ∝ ν⁻¹ continuum extending from the infrared to X-rays. An inner corona, possibly optically thick because of an n ∝ √r density increase, is required for the F(ν) ∝ ν⁻¹ continuum, but it therefore cannot cover the UV-emitting accretion disk. However, a Wien peak associated with this inner volume may then be implied at 10 keV, contrary to observations. 42 refs
Electron Storage Ring Development for ICS Sources
Energy Technology Data Exchange (ETDEWEB)
Loewen, Roderick [Lyncean Technologies, Inc., Palo Alto, CA (United States)
2015-09-30
There is an increasing world-wide interest in compact light sources based on Inverse Compton Scattering. Development of these types of light sources includes leveraging the investment in accelerator technology first developed at DOE National Laboratories. Although these types of light sources cannot replace the larger user-supported synchrotron facilities, they offer attractive alternatives for many x-ray science applications. Fundamental research at the SLAC National Laboratory in the 1990s led to the idea of using laser-electron storage rings as a mechanism to generate x-rays with many properties of the larger synchrotron light facilities. This research led to a commercial spin-off of this technology. The SBIR project goal is to understand and improve the performance of the electron storage ring system of the commercially available Compact Light Source. The knowledge gained from studying a low-energy electron storage ring may also benefit other Inverse Compton Scattering (ICS) source development. Better electron storage ring performance is one of the key technologies necessary to extend the utility and breadth of applications of the CLS or related ICS sources. This grant includes a subcontract with SLAC for technical personnel and resources for modeling, feedback development, and related accelerator physics studies.
Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.
2006-01-01
We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
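At each assumed source node the problem reduces to a linear least-squares fit of the source mechanism against precomputed Green's functions, followed by a misfit comparison across the grid. A toy sketch of that search loop, with synthetic Green's functions and hypothetical names (not the authors' code):

```python
import numpy as np

def best_point_source(data, greens):
    """Grid search over candidate source nodes. For each node, the Green's
    function matrix G (n_samples x n_mechanism_components) defines a linear
    forward model d = G m; solve least squares and keep the node with the
    smallest waveform misfit."""
    best = None
    for node, G in greens.items():
        m, *_ = np.linalg.lstsq(G, data, rcond=None)
        misfit = np.linalg.norm(G @ m - data)
        if best is None or misfit < best[2]:
            best = (node, m, misfit)
    return best

# Toy test: synthesize data from node "B" and recover it.
rng = np.random.default_rng(1)
greens = {name: rng.standard_normal((50, 6)) for name in "ABC"}
true_m = np.ones(6)
data = greens["B"] @ true_m
node, m, misfit = best_point_source(data, greens)
print(node)  # -> B
```

Because the per-node fit is independent, the search parallelizes naturally across cluster nodes, which is what makes the 100,000-point grid tractable in ~30 s.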
Compton camera imaging and the cone transform: a brief overview
Terzioglu, Fatma; Kuchment, Peter; Kunyansky, Leonid
2018-05-01
While most applications of the Radon transform to imaging involve integration over smooth submanifolds of the ambient space, important situations have lately appeared where the integration surfaces are conical. Three such applications are single-scatter optical tomography, Compton camera medical imaging, and homeland security. In spite of the similar surfaces of integration, the data and the inverse problems associated with these modalities differ significantly. In this article, we present a brief overview of the mathematics arising in Compton camera imaging. In particular, the emphasis is on the overdetermined data and flexible geometry of the detectors. For detailed results, as well as other approaches (e.g. smaller-dimensional data or restricted geometry of detectors), the reader is directed to the relevant publications. Only a brief description and some references are provided for single-scatter optical tomography. This work was supported in part by NSF DMS grants 1211463 (the first two authors), 1211521 and 141877 (the third author), as well as a College of Science of Texas A&M University grant.
Energy Technology Data Exchange (ETDEWEB)
Brambilla, Sara [Los Alamos National Laboratory; Brown, Michael J. [Los Alamos National Laboratory
2012-06-18
zones. Due to a unique source inversion technique - called the upwind collector footprint approach - the tool runs fast and the source regions can be determined in a few minutes. In this report, we provide an overview of the BERT framework, followed by a description of the source inversion technique. The Joint URBAN 2003 field experiment held in Oklahoma City that was used to validate BERT is then described. Subsequent sections describe the metrics used for evaluation, the comparison of the experimental data and BERT output, and under what conditions the BERT tool succeeds and performs poorly. Results are aggregated in different ways (e.g., daytime vs. nighttime releases, 1 vs. 2 vs. 3 hit collectors) to determine if BERT shows any systematic errors. Finally, recommendations are given for how to improve the code and procedures for optimizing performance in operational mode.
International Nuclear Information System (INIS)
Xing, Qiang; Wu, Bingfang; Zhu, Weiwei
2014-01-01
The aerodynamic roughness is one of the major parameters describing the turbulent exchange between the land surface and the atmosphere, and remote sensing is recognized as an effective way to invert this parameter at the regional scale. However, for a long time the inversion method has depended either on lookup tables for different land covers or on the Normalized Difference Vegetation Index (NDVI) alone, which plays a very limited role in describing the spatial heterogeneity of this parameter and of the evapotranspiration (ET) of different land covers. In fact, the aerodynamic roughness is influenced by several factors at once, including the roughness elements of hard surfaces, dynamic vegetation growth, and undulating terrain. Therefore, this paper develops an aerodynamic roughness inversion method based on multi-source remote sensing data in a semiarid region within the upper and middle reaches of the Heihe River Basin. The radar backscattering coefficient was used to invert the micro-relief of hard surfaces, the NDVI was used to reflect the dynamic change of vegetated surfaces, and the slope extracted from the SRTM DEM (Shuttle Radar Topography Mission Digital Elevation Model) was used to correct for terrain influence. The inverted aerodynamic roughness was imported into the ETWatch system to validate its usability. The results show that the method significantly improves the representation of the spatial heterogeneity of the aerodynamic roughness and of the related ET for the experimental site.
Using Compton scattering for random coincidence rejection
International Nuclear Information System (INIS)
Kolstein, M.; Chmeissani, M.
2016-01-01
The Voxel Imaging PET (VIP) project presents a new approach to the design of nuclear medicine imaging devices by using highly segmented pixel CdTe sensors. CdTe detectors can achieve an energy resolution of ≈ 1% FWHM at 511 keV and can be easily segmented into submillimeter-sized voxels for optimal spatial resolution. These features help in rejecting a large part of the scattered events from the PET coincidence sample in order to obtain high-quality images. Another contribution to the background comes from random events, i.e., hits caused by two independent gammas without a common origin. Given that 60% of 511 keV photons undergo Compton scattering in CdTe (i.e., 84% of all coincidence events have at least one Compton-scattered gamma), we present a simulation study on the possibility of using the Compton scattering information of at least one of the coincident gammas within the detector to reject random coincidences. The idea uses the fact that if a gamma undergoes Compton scattering in the detector, it will cause two hits in the pixel detectors. The first hit corresponds to the Compton scattering process. The second hit corresponds to the photoelectric absorption of the remaining energy of the gamma. With the energy deposition of the first hit, one can calculate the Compton scattering angle. By measuring the hit location of the coincident gamma, we can construct the geometric angle, under the assumption that both gammas come from the same origin. Using the difference between the Compton scattering angle and the geometric angle, random events can be rejected.
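The kinematic consistency test this abstract describes can be illustrated in a few lines (a toy sketch, not the VIP collaboration's implementation; the hit coordinates, the 0.15 rad tolerance, and the function names are invented for the example):

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy in keV; equal to the annihilation photon energy
E0 = 511.0     # incident gamma energy, keV

def compton_angle(e_dep):
    """Kinematic scattering angle (rad) from the energy deposited in the first hit,
    via the Compton formula cos(theta) = 1 - me*c^2 * (1/E' - 1/E0)."""
    cos_t = 1.0 - ME_C2 * (1.0 / (E0 - e_dep) - 1.0 / E0)
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

def geometric_angle(hit1, hit2, hit_other):
    """Angle between the assumed incoming direction (from the coincident gamma's
    hit through the first hit, the presumed line of flight for a true coincidence)
    and the scattered direction (first hit -> second hit)."""
    incoming = np.asarray(hit1, float) - np.asarray(hit_other, float)
    scattered = np.asarray(hit2, float) - np.asarray(hit1, float)
    cos_t = incoming @ scattered / (np.linalg.norm(incoming) * np.linalg.norm(scattered))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

def is_random(e_dep, hit1, hit2, hit_other, tol=0.15):
    """Flag the coincidence as random when the two angle estimates disagree."""
    return abs(compton_angle(e_dep) - geometric_angle(hit1, hit2, hit_other)) > tol
```

For a genuine coincidence the two angles agree up to detector resolution; for a random one the presumed common origin is wrong, so the geometric angle is essentially arbitrary and the event is likely to fail the cut.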
Development of a compact scintillator-based high-resolution Compton camera for molecular imaging
Energy Technology Data Exchange (ETDEWEB)
Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)
2017-02-11
The Compton camera, which shows gamma-ray distribution utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide range of energies. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd{sub 3}Al{sub 2}Ga{sub 3}O{sub 12} (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). Basic performance tests confirmed that, at 662 keV, the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that, for a {sup 137}Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of different simultaneous energy sources ({sup 22}Na [511 keV], {sup 137}Cs [662 keV], and {sup 54}Mn [834 keV]).
Aucejo, M.; Totaro, N.; Guyader, J.-L.
2010-08-01
In noise control, identification of the source velocity field remains a major problem open to investigation. Consequently, methods such as nearfield acoustical holography (NAH), principal source projection, the inverse frequency response function and hybrid NAH have been developed. However, these methods require free field conditions that are often difficult to achieve in practice. This article presents an alternative method known as inverse patch transfer functions, designed to identify source velocities and developed in the framework of the European SILENCE project. This method is based on the definition of a virtual cavity, the double measurement of the pressure and particle velocity fields on the aperture surfaces of this volume, divided into elementary areas called patches and the inversion of impedances matrices, numerically computed from a modal basis obtained by FEM. Theoretically, the method is applicable to sources with complex 3D geometries and measurements can be carried out in a non-anechoic environment even in the presence of other stationary sources outside the virtual cavity. In the present paper, the theoretical background of the iPTF method is described and the results (numerical and experimental) for a source with simple geometry (two baffled pistons driven in antiphase) are presented and discussed.
Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.
2018-04-01
Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass-balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s⁻¹, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and
Laser-Compton Scattering as a Potential Electron Beam Monitor
International Nuclear Information System (INIS)
Chouffani, K.; Wells, D.; Harmon, F.; Lancaster, G.; Jones, J.
2002-01-01
LCS experiments were carried out at the Idaho Accelerator Center (IAC), where sharp monochromatic x-ray lines were observed. These are produced using the so-called inverse Compton effect, whereby optical laser photons are collided with a relativistic electron beam. The back-scattered photons are then kinematically boosted to keV x-ray energies. We first demonstrated these beams using a 20 MeV electron beam collided with a 100 MW, 7 ns Nd:YAG laser. We observed narrow LCS x-ray spectral peaks resulting from the interaction of the electron beam with the Nd:YAG laser second harmonic (532 nm). The LCS x-ray line energies and energy deviations were measured as functions of the electron beam energy and energy spread, respectively. The results showed good agreement with the predicted values. LCS could provide an excellent probe of electron beam energy, energy spread, transverse and longitudinal distribution, and direction
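The kinematic boost described above follows the standard inverse-Compton relation E_x ≈ 4γ²E_laser / (1 + 4γE_laser/m_ec²) for a head-on collision observed on axis. A quick sketch (illustrative only; treating the 20 MeV figure as the total beam energy is an assumption, not something stated in the abstract):

```python
ME_C2_EV = 0.511e6  # electron rest energy, eV

def lcs_peak_energy_ev(e_beam_mev, wavelength_nm):
    """On-axis backscattered photon energy (eV) for a head-on laser-electron collision."""
    e_laser = 1239.842 / wavelength_nm        # laser photon energy, eV
    gamma = e_beam_mev * 1e6 / ME_C2_EV       # Lorentz factor (total energy assumed)
    return 4 * gamma**2 * e_laser / (1 + 4 * gamma * e_laser / ME_C2_EV)

# 20 MeV beam on the Nd:YAG second harmonic (532 nm) -> x rays near 14 keV,
# consistent with the keV-scale lines reported above
print(round(lcs_peak_energy_ev(20.0, 532.0) / 1e3, 1))  # → 14.3
```

Since E_x scales as γ², a relative beam-energy change δγ/γ shifts the line by 2δγ/γ, which is why the x-ray line energy serves as a sensitive electron-beam energy monitor.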
Compton camera study for high efficiency SPECT and benchmark with Anger system
Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.
2017-12-01
Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system’s geometrical features. In order to overcome these limitations, the application of Compton cameras for SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE—GEANT4 Application for Tomographic Emission—version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detector performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List Mode-Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency increased by a factor larger than an order of magnitude with respect to the Anger camera, associated with an enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application
Tiampo, K. F.; Fernández, J.; Jentzsch, G.; Charco, M.; Rundle, J. B.
2004-11-01
Here we present an inversion methodology that combines a genetic algorithm (GA) inversion program with an elastic-gravitational Earth model to determine the parameters of a volcanic intrusion. Results from the integration of the elastic-gravitational model, a suite of FORTRAN 77 programs developed to compute the displacements due to volcanic loading, with the GA inversion code, written in the C programming language, are presented. These codes allow for the calculation of displacements (horizontal and vertical), tilt, vertical strain, and potential and gravity changes on the surface of an elastic-gravitational layered Earth model due to the magmatic intrusion. We detail the appropriate methodology for examining the sensitivity of the model to variation in the constituent parameters using the GA, and present, for the first time, a Monte Carlo technique for evaluating the propagation of error through the GA inversion process. An application example is given at Mayon volcano, Philippines, for the inversion program, the sensitivity analysis, and the error evaluation. The integration of the GA with the complex elastic-gravitational model is a blueprint for an efficient nonlinear inversion methodology and its implementation into an effective tool for the evaluation of parameter sensitivity. Finally, the extension of this inversion algorithm and the error assessment methodology has important implications for the modeling and data assimilation of a number of other nonlinear applications in the field of geosciences.
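A minimal sketch of such a GA-driven inversion (with a simple Mogi point-source uplift formula standing in for the full elastic-gravitational FORTRAN model so the loop is runnable; the population size, mutation scale, and parameter bounds are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(params, r):
    """Stand-in forward model: Mogi point-source surface uplift (Poisson ratio 0.25)
    at radial distances r, for params = (source depth, volume change)."""
    depth, dv = params
    return 0.75 * dv * depth / (np.pi * (r**2 + depth**2) ** 1.5)

def ga_invert(r, obs, bounds, pop=60, gens=80, mut=0.1):
    """Genetic-algorithm inversion: selection of the fitter half, arithmetic
    crossover, Gaussian mutation, and elitism (best individual always kept)."""
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    cost = lambda p: np.sum((forward(p, r) - obs) ** 2)
    best, best_cost = x[0].copy(), np.inf
    for _ in range(gens):
        misfit = np.array([cost(p) for p in x])
        order = np.argsort(misfit)
        if misfit[order[0]] < best_cost:
            best, best_cost = x[order[0]].copy(), misfit[order[0]]
        elite = x[order[: pop // 2]]                          # selection
        pairs = rng.integers(0, len(elite), (pop, 2))
        x = 0.5 * (elite[pairs[:, 0]] + elite[pairs[:, 1]])   # crossover
        x += mut * (hi - lo) * rng.standard_normal(x.shape)   # mutation
        x = np.clip(x, lo, hi)
        x[0] = best                                           # elitism
    return best, best_cost

# recover a synthetic intrusion (depth 3, volume change 5, arbitrary units)
r = np.linspace(1.0, 10.0, 20)
obs = forward(np.array([3.0, 5.0]), r)
best, misfit = ga_invert(r, obs, [(1.0, 6.0), (1.0, 10.0)])
```

The Monte Carlo error propagation described in the abstract would repeat such an inversion many times on data perturbed with the observational noise and take the spread of the recovered parameters as the error estimate.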
DEFF Research Database (Denmark)
Yoon, Daeung; Zhdanov, Michael; Mattsson, Johan
2016-01-01
One of the major problems in the modeling and inversion of marine controlled-source electromagnetic (CSEM) data is related to the need for accurate representation of very complex geoelectrical models typical for marine environment. At the same time, the corresponding forward-modeling algorithms...... should be powerful and fast enough to be suitable for repeated use in hundreds of iterations of the inversion and for multiple transmitter/receiver positions. To this end, we have developed a novel 3D modeling and inversion approach, which combines the advantages of the finite-difference (FD......) and integral-equation (IE) methods. In the framework of this approach, we have solved Maxwell’s equations for anomalous electric fields using the FD approximation on a staggered grid. Once the unknown electric fields in the computation domain of the FD method are computed, the electric and magnetic fields...
International Nuclear Information System (INIS)
Chen Long-Quan; Qiao Yao-Jun; Ji Yue-Feng
2013-01-01
In this paper, we propose a new structure for a centralized-light-source wavelength division multiplexed passive optical network (WDM-PON) utilizing inverse-duobinary-return-to-zero (inverse-duobinary-RZ) downstream and DPSK upstream. It reuses the downstream light for the upstream modulation, which eliminates the lasers otherwise required at each optical network unit (ONU) and ultimately cuts down the cost of ONUs a great deal. Meanwhile, a 50-km-reach WDM-PON experiment with 10-Gb/s inverse-duobinary-RZ downstream and 6-Gb/s DPSK upstream is demonstrated here. It is revealed to be a novel cost-effective alternative for the next-generation access network.
Compton suppression through rise-time analysis
International Nuclear Information System (INIS)
Selvi, S.; Celiktas, C.
2007-01-01
We studied Compton suppression for 60 Co and 137 Cs radioisotopes using a signal selection criterion based on contrasting the fall times of the signals composing the photopeak with those composing the Compton continuum. The fall-time criterion is applied through pulse-shape analysis, observing the change in the fall times of the gamma-ray pulses. This change is determined by measuring the changes in the rise times related to the fall time of the scintillator and in the timing signals related to the fall time of the input signals. We showed that Compton continuum suppression is best achieved via precise timing adjustment of an analog rise-time analyzer connected to a NaI(Tl) scintillation spectrometer
Guerrero Prado, Patricio; Nguyen, Mai K.; Dumas, Laurent; Cohen, Serge X.
2017-01-01
Characterization and interpretation of flat ancient material objects, such as those found in archaeology, paleoenvironments, paleontology, and cultural heritage, have remained a challenging task to perform by means of conventional x-ray tomography methods due to their anisotropic morphology and flattened geometry. To overcome the limitations of the mentioned methodologies for such samples, an imaging modality based on Compton scattering is proposed in this work. Classical x-ray tomography treats Compton scattering data as noise in the image formation process, while in Compton scattering tomography the conditions are set such that Compton data become the principal image contrasting agent. Under these conditions, we are able, first, to avoid relative rotations between the sample and the imaging setup, and second, to obtain three-dimensional data even when the object is supported by a dense material by exploiting backscattered photons. Mathematically this problem is addressed by means of a conical Radon transform and its inversion. The image formation process and object reconstruction model are presented. The feasibility of this methodology is supported by numerical simulations.
On the line-shape analysis of Compton profiles and its application to neutron scattering
International Nuclear Information System (INIS)
Romanelli, G.; Krzystyniak, M.
2016-01-01
Analytical properties of Compton profiles are used in order to simplify the analysis of neutron Compton scattering experiments. In particular, the possibility of fitting the difference of Compton profiles is discussed as a way to greatly decrease the complexity of the data treatment, making the analysis easier, faster and more robust. In the context of the novel method proposed, two mathematical models describing the shapes of differenced Compton profiles are discussed: the simple Gaussian approximation for a harmonic and isotropic local potential, and an analytical Gauss–Hermite expansion for an anharmonic or anisotropic potential. The method is applied to data collected by the VESUVIO spectrometer at the ISIS neutron and muon pulsed source (UK) on copper and aluminium samples at ambient and low temperatures. - Highlights: • A new method to analyse neutron Compton scattering data is presented. • The method allows many corrections on the experimental data to be avoided. • The number of needed fitting parameters is drastically reduced using the new method. • Mass-selective analysis is facilitated, with parametric studies benefiting the most. • Observables linked to anisotropic momentum distribution are obtained analytically.
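In the simplest (harmonic, isotropic) case above, each profile is a Gaussian in the momentum variable y, and fitting the difference of two profiles requires only the two widths. A toy sketch with invented widths (5.2 and 4.1, in arbitrary momentum units; not the paper's copper/aluminium values):

```python
import numpy as np

def gaussian_profile(y, sigma):
    """Compton profile J(y) in the Gaussian (harmonic, isotropic) approximation."""
    return np.exp(-y**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# synthetic "ambient minus low temperature" difference profile
y = np.linspace(-30.0, 30.0, 301)
diff_obs = gaussian_profile(y, 5.2) - gaussian_profile(y, 4.1)

# brute-force least-squares fit of both widths to the difference alone:
# only two parameters are needed, rather than a full line-shape model per spectrum
grid = np.linspace(3.0, 6.0, 121)
s_hot, s_cold = min(
    ((a, b) for a in grid for b in grid if a > b),
    key=lambda p: np.sum(
        (gaussian_profile(y, p[0]) - gaussian_profile(y, p[1]) - diff_obs) ** 2
    ),
)
print(round(s_hot, 2), round(s_cold, 2))  # recovers the input widths: 5.2 4.1
```

Because the width grows with temperature, the broader component corresponds to the ambient-temperature measurement and the narrower one to the low-temperature measurement.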
Monitoring of laser-accelerated particle beams for hadron therapy via Compton tracking
Energy Technology Data Exchange (ETDEWEB)
Lang, C.; Thirolf, P.G. [LMU, Muenchen (Germany); Habs, D.; Tajima, T. [LMU, Muenchen (Germany); MPQ, Garching (Germany); Zoglauer, A. [SSL, Berkeley (United States); Kanbach, G.; Diehl, R. [MPE, Muenchen (Germany); Schreiber, J. [MPQ, Garching (Germany)
2011-07-01
Recently, large efforts have been made towards the development of hadron cancer therapy based on laser-accelerated ion (p, C) beams, particularly aiming at the treatment of small tumors (a few mm in size). Precise monitoring of the ion track is therefore mandatory. Conventional PET technology suffers from limited signal strength and limited precision in locating the source position. We envisage using Compton tracking, i.e. determining the energy and momentum of Compton photons and electrons emitted along the ion track in the irradiated soft tissue. Confining the Compton cone by tracking the scattered electron will allow a significant improvement in the position resolution. Monte Carlo simulations have been performed to characterize the achievable position resolution and efficiency of a Compton camera. We estimate a resolution of 2 mm (1 mm; 5 mm) FWHM at 2 MeV (5 MeV; 0.5 MeV). An efficiency of 1.4*10{sup -3} (4.6*10{sup -6}) at 0.5 MeV (2 MeV) is envisaged. Optimized for an energy range between 0.5 MeV and 5 MeV, we plan for a system of 5 layers of double-sided Si strip detectors (for Compton electron tracking) and an additional LaBr{sub 3}:Ce calorimeter, read out by a segmented photomultiplier tube.
Nucleon structure study by virtual compton scattering
International Nuclear Information System (INIS)
Berthot, J.; Bertin, P.Y.; Breton, V.; Fonvielle, H.; Hyde-Wright, C.; Quemener, G.; Ravel, O.; Braghieri, A.; Pedroni, P.; Boeglin, W.U.; Boehm, R.; Distler, M.; Edelhoff, R.; Friedrich, J.; Geiges, R.; Jennewein, P.; Kahrau, M.; Korn, M.; Kramer, H.; Krygier, K.W.; Kunde, V.; Liesenfeld, A.; Merle, K.; Neuhausen, R.; Offermann, E.A.J.M.; Pospischil, T.; Rosner, G.; Sauer, P.; Schmieden, H.; Schardt, S.; Tamas, G.; Wagner, A.; Walcher, T.; Wolf, S.
1995-01-01
We propose to study nucleon structure by Virtual Compton Scattering using the reaction p(e,e'p)γ with the MAMI facility. We will detect the scattered electron and the recoil proton in coincidence in the high resolution spectrometers of the hall A1. Compton events will be separated from the other channels (principally π 0 production) by missing-mass reconstruction. We plan to investigate this reaction near threshold. Our goal is to measure new electromagnetic observables which generalize the usual magnetic and electric polarizabilities. (authors). 9 refs., 18 figs., 7 tabs
Conceptual design report of a compton polarimeter for CEBAF hall A
Energy Technology Data Exchange (ETDEWEB)
Bardin, G.; Cavata, C.; Neyret, D.; Frois, B.; Jorda, J.P.; Legoff, J.M.; Platchkov, S.; Steinmetz, L.; Juillard, M.; Authier, M.; Mangeot, P.; Rebourgeard, P.; Colombel, N.; Girardot, P.; Martinot, J.; Sellier, J.C.; Veyssiere, C. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d'Astrophysique, de la Physique des Particules, de la Physique Nucleaire et de l'Instrumentation Associee; Berthot, J.; Bertin, P.Y.; Breton, V.; Fonvieille, H.; Roblin, Y. [Institut National de Physique Nucleaire et de Physique des Particules (IN2P3), 75 - Paris (France); Chen, J.P. [Continuous Electron Beam Accelerator Facility, Newport News, VA (United States)
1996-12-31
This report describes the design of the Compton polarimeter for the CEBAF electron beam in End Station A. The method of Compton polarimetry is first introduced. It is shown that at CEBAF beam intensities, the use of standard visible laser light gives counting rates that are too low. An amplification scheme for the laser beam based on a high-finesse optical cavity is proposed. Expected luminosities with and without such a cavity are given. The polarimeter setup, including a 4-dipole magnet chicane, a photon detector, and an electron detector, is detailed. The various sources of systematic error on the electron beam polarization measurement are discussed. (author). 82 refs.
Study and development of a spectrometer with Compton suppression and gamma coincidence counting
International Nuclear Information System (INIS)
Masse, D.
1990-10-01
This paper presents the characteristics of a spectrometer consisting of a Ge detector surrounded by a NaI(Tl) detector that can operate in Compton-suppression and gamma-gamma coincidence modes. The criteria that led to this measurement configuration are discussed and the spectrometer performances are shown for 60 Co and 137 Cs gamma-ray sources. The results for the measurement of 189 Ir (Compton suppression) and of 101 Rh (gamma-gamma coincidence) in the presence of other radioisotopes are given. 83 Rb and 105 Ag isotopes are also measured with this spectrometer [fr
Soft X-ray production by photon scattering in pulsating binary neutron star sources
International Nuclear Information System (INIS)
Bussard, R.W.; Meszaros, P.; Alexander, S.
1985-01-01
A new mechanism is proposed as a source of soft (less than 1 keV) radiation in binary pulsating X-ray sources, in the form of photon scattering which leaves the electron in an excited Landau level. In a plasma with parameters typical of such sources, the low-energy X-ray emissivity of this mechanism far exceeds that of bremsstrahlung. This copious source of soft photons is quite adequate to provide the seed photons needed to explain the power-law hard X-ray spectrum by inverse Comptonization on the hot electrons at the base of the accretion column. 13 references
Energy Technology Data Exchange (ETDEWEB)
Osuch, S.; Popkiewicz, M.; Szeflinski, Z.; Wilhelmi, Z. [Warsaw Univ., Inst. of Experimental Physics, Warsaw (Poland)
1995-12-31
Bell's inequality has been experimentally tested using the angular correlation of Compton-scattered photons from the annihilation of positrons emitted from a {sup 22}Na source. The result shows better agreement with the predictions of quantum mechanics than with Bell's inequality. 7 refs, 5 figs, 1 tab.
Design scheme of a γ-ray ICT system using Compton back-scattering
International Nuclear Information System (INIS)
Xiao Jianmin
1998-01-01
A design scheme for a γ-ray ICT system using Compton back-scattering is put forward. The technical specifications, detector system, γ radioactive source, mechanical scanning equipment, and the data acquisition and image reconstruction principles of this ICT are described
International Nuclear Information System (INIS)
Osuch, S.; Popkiewicz, M.; Szeflinski, Z.; Wilhelmi, Z.
1995-01-01
Bell's inequality has been experimentally tested using the angular correlation of Compton-scattered photons from the annihilation of positrons emitted from a 22 Na source. The result shows better agreement with the predictions of quantum mechanics than with Bell's inequality
Energy Technology Data Exchange (ETDEWEB)
Hall, G. N., E-mail: hall98@llnl.gov; Izumi, N.; Landen, O. L.; Tommasini, R.; Holder, J. P.; Hargrove, D.; Bradley, D. K.; Lumbard, A.; Cruz, J. G.; Piston, K.; Bell, P. M.; Carpenter, A. C.; Palmer, N. E.; Felker, B.; Rekow, V.; Allen, F. V. [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California 94550 (United States); Lee, J. J.; Romano, E. [National Security Technologies LLC, 161 S Vasco Rd., Livermore, California 94551 (United States)
2016-11-15
Compton radiography provides a means to measure the integrity, ρR and symmetry of the DT fuel in an inertial confinement fusion implosion near peak compression. Upcoming experiments at the National Ignition Facility will use the ARC (Advanced Radiography Capability) laser to drive backlighter sources for Compton radiography experiments and will use the newly commissioned AXIS (ARC X-ray Imaging System) instrument as the detector. AXIS uses a dual-MCP (micro-channel plate) to provide gating and high DQE at the 40–200 keV x-ray range required for Compton radiography, but introduces many effects that contribute to the spatial resolution. Experiments were performed at energies relevant to Compton radiography to begin characterization of the spatial resolution of the AXIS diagnostic.
Fast image reconstruction for Compton camera using stochastic origin ensemble approach.
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2011-01-01
The Compton camera has been proposed as a potential imaging tool in astronomy, industry, homeland security, and medical diagnostics. Due to the inherent geometrical complexity of Compton camera data, image reconstruction of distributed sources can be ineffective and/or time-consuming when using standard techniques such as filtered backprojection or maximum likelihood-expectation maximization (ML-EM). In this article, the authors demonstrate a fast reconstruction of Compton camera data using a novel stochastic origin ensembles (SOE) approach based on Markov chains. During image reconstruction, the origins of the measured events are randomly assigned to locations on conical surfaces, which are the Compton camera analogs of the lines of response in PET. The image is therefore defined as an ensemble of the origin locations of all possible event origins. During the course of reconstruction, the origins of events are stochastically moved and the acceptance of the new event origin is determined by a predefined acceptance probability, which is proportional to the change in event density. For example, if the event density at the new location is higher than at the previous location, the new position is always accepted. After several iterations, the reconstructed distribution of origins converges to a quasistationary state which can be voxelized and displayed. Comparison with list-mode ML-EM reveals that the postfiltered SOE algorithm has similar performance in terms of image quality while clearly outperforming ML-EM in terms of reconstruction time. In this study, the authors have implemented and tested a new image reconstruction algorithm for the Compton camera based on stochastic origin ensembles with Markov chains. The algorithm uses list-mode data, is parallelizable, and can be used for any Compton camera geometry. The SOE algorithm clearly outperforms list-mode ML-EM for a simple Compton camera geometry in terms of reconstruction time. The difference in computational time
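The stochastic move-and-accept loop can be sketched in a few lines (a simplified illustration, not the authors' code: cones are reduced to precomputed lists of candidate voxels, and the acceptance ratio below is one plausible reading of "proportional to the change in event density"):

```python
import numpy as np

rng = np.random.default_rng(1)

def soe_reconstruct(candidate_voxels, n_voxels, n_sweeps=200):
    """candidate_voxels[i] lists the voxel indices on event i's Compton cone.
    Returns the voxelized counts of event origins after the Markov chain settles."""
    origins = np.array([rng.choice(c) for c in candidate_voxels])  # random start
    counts = np.bincount(origins, minlength=n_voxels)
    for _ in range(n_sweeps):
        for i, cone in enumerate(candidate_voxels):
            new, old = rng.choice(cone), origins[i]
            if new == old:
                continue
            # moves toward denser regions are always accepted; moves away are
            # accepted only with probability given by the density ratio
            if rng.random() < min(1.0, (counts[new] + 1) / counts[old]):
                counts[old] -= 1
                counts[new] += 1
                origins[i] = new
    return counts
```

On a toy problem where every event's cone contains a common source voxel plus random clutter voxels, the counts concentrate on the source after a few hundred sweeps; and because each event is updated independently given the current density, the loop over events parallelizes naturally, matching the parallelizability claimed above.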
Studies of coherent/Compton scattering method for bone mineral content measurement
International Nuclear Information System (INIS)
Sakurai, Kiyoko; Iwanami, Shigeru; Nakazawa, Keiji; Matsubayashi, Takashi; Imamura, Keiko.
1980-01-01
A measurement of bone mineral content by a coherent/Compton scattering method was described. A bone sample was irradiated by a collimated narrow beam of 59.6 keV gamma-rays emitted from a 300 mCi 241 Am source, and the scattered radiations were detected using a collimated pure-germanium detector placed at 90° to the incident beam. The ratio of the coherent to Compton peaks in a spectrum of the scattered radiations depends on the bone mineral content of the bone sample. The advantage of this method is that the bone mineral content of a small region in a bone can be accurately measured. Assuming that bone consists of two components, protein and bone mineral, and that the mass absorption coefficient for Compton scattering is independent of material, the coherent to Compton scattering ratio is linearly related to the percentage by weight of bone mineral. A calibration curve was obtained by measuring standard samples which were mixed with Ca 3 (PO 4 ) 2 and H 2 O. The error due to the assumption about the mass absorption coefficient for Compton scattering and to the difference between true bone and the standard samples was estimated to be less than 3% within the range from 10 to 60% by weight of bone mineral. The fat in bone affects an estimated value by only 1.5% when it is 20% by weight. For the clinical application of this method, the location to be analyzed should be selected before the measurement with two X-ray images viewed from the source and the detector. These views would also be used to correct the difference in absorption between coherent and Compton scattered radiations, whose energies are slightly different from each other. The absorbed dose to the analyzed region was approximately 150 mrad. The time required for one measurement in this study was about 10 minutes. (author)
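The linear calibration step can be sketched as follows (the ratio values are invented for illustration; only the linear form, peak ratio versus percent mineral by weight, comes from the abstract):

```python
import numpy as np

# hypothetical calibration measurements on standard Ca3(PO4)2 / water samples:
# coherent/Compton peak-area ratio vs. known mineral weight fraction
mineral_pct = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
ratio = np.array([0.021, 0.033, 0.045, 0.057, 0.069, 0.081])

slope, intercept = np.polyfit(mineral_pct, ratio, 1)  # linear calibration curve

def mineral_content(measured_ratio):
    """Invert the calibration line to estimate % bone mineral by weight."""
    return (measured_ratio - intercept) / slope

print(round(mineral_content(0.051), 1))  # → 35.0
```

A measured ratio on an unknown sample is then read off the fitted line, in the same way the paper's calibration curve converts the measured peak ratio into a mineral weight fraction.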
Comptonization effects in spherical accretion onto black holes
International Nuclear Information System (INIS)
Ipser, J.R.; Price, R.H.
1983-01-01
For spherical accretion of gas onto a black hole, dissipative heating (from magnetic reconnection, dissipation of turbulence, etc.) leads at high accretion rates to densities and temperatures at which Comptonization unavoidably plays an important role, both in determining the gas temperature and in forming the emergent spectrum. A careful and reliable treatment of the interaction of the gas with the radiation field is greatly complicated by the necessity of dealing with the essentially nonlocal nature of Comptonization. We limit ourselves here to finding approximate descriptions of some observational features of such astrophysical objects with a simple, yet justifiable, Ansatz that evades the complexities of nonlocality. The results for accretion spectra are of interest, e.g., in connection with galactic halo objects (1--10 5 M/sub sun/). High-mass (10 7 --10 10 M/sub sun/) cases are of interest as models for active galactic nuclei. In particular, a very natural connection between the ratio of luminosity to Eddington luminosity and the hardness of X-ray spectra emerges, suggesting that the observed X-ray hardness ratios of luminous sources are a consequence of those sources being more or less Eddington limited
A Test Case for the Source Inversion Validation: The 2014 ML 5.5 Orkney, South Africa Earthquake
Ellsworth, W. L.; Ogasawara, H.; Boettcher, M. S.
2017-12-01
The ML5.5 earthquake of August 5, 2014 occurred on a near-vertical strike slip fault below abandoned and active gold mines near Orkney, South Africa. A dense network of surface and in-mine seismometers recorded the earthquake and its aftershock sequence. In-situ stress measurements and rock samples through the damage zone and rupture surface are anticipated to be available from the "Drilling into Seismogenic Zones of M2.0-M5.5 Earthquakes in South African gold mines" project (DSeis) that is currently progressing toward the rupture zone (Science, doi: 10.1126/science.aan6905). As of 24 July, 95% of drilled core has been recovered from a 427m-section of the 1st hole from 2.9 km depth with minimal core discing and borehole breakouts. A 2nd hole is planned to intersect the fault at greater depth. Absolute differential stress will be measured along the holes and frictional characteristics of the recovered core will be determined in the lab. Surface seismic reflection data and exploration drilling from the surface down to the mining horizon at 3km depth is also available to calibrate the velocity structure above the mining horizon and image reflective geological boundaries and major faults below the mining horizon. The remarkable quality and range of geophysical data available for the Orkney earthquake makes this event an ideal test case for the Source Inversion Validation community using actual seismic data to determine the spatial and temporal evolution of earthquake rupture. We invite anyone with an interest in kinematic modeling to develop a rupture model for the Orkney earthquake. Seismic recordings of the earthquake and information on the faulting geometry can be found in Moyer et al. (2017, doi: 10.1785/0220160218). A workshop supported by the Southern California Earthquake Center will be held in the spring of 2018 to compare kinematic models. Those interested in participating in the modeling exercise and the workshop should contact the authors for additional
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, A.
2016-01-01
Roč. 9, č. 11 (2016), s. 4297-4311 ISSN 1991-959X R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Linear inverse problem * Bayesian regularization * Source-term determination * Variational Bayes method Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.458, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/tichy-0466029.pdf
Theorems of low energy in Compton scattering
International Nuclear Information System (INIS)
Chahine, J.
1984-01-01
We have obtained the low energy theorems in Compton scattering to third and fourth order in the frequency of the incident photon. Next we calculated the polarized cross section to third order and the unpolarized to fourth order in terms of partial amplitudes not covered by the low energy theorems, which will permit the experimental determination of these partial amplitudes. (Author) [pt
Compton scattering collision module for OSIRIS
Del Gaudio, Fabrizio; Grismayer, Thomas; Fonseca, Ricardo; Silva, Luís
2017-10-01
Compton scattering plays a fundamental role in a variety of different astrophysical environments, such as at the gaps of pulsars and the stagnation surface of black holes. In these scenarios, Compton scattering is coupled with self-consistent mechanisms such as pair cascades. We present the implementation of a novel module, embedded in the self-consistent framework of the PIC code OSIRIS 4.0, capable of simulating Compton scattering from first principles and fully integrated with the self-consistent plasma dynamics. The algorithm accounts for the stochastic nature of Compton scattering, reproducing without approximations the exchange of energy between photons and unbound charged species. We present benchmarks of the code against the analytical results of Blumenthal et al. and the numerical solution of the linear Kompaneets equation, and good agreement is found between the simulations and the theoretical models. This work is supported by the European Research Council Grant (ERC-2015-AdG 695088) and the Fundação para a Ciência e a Tecnologia (Bolsa de Investigação PD/BD/114323/2016).
On the Compton Twist-3 Asymmetries
International Nuclear Information System (INIS)
Korotkiyan, V.M.; Teryaev, O.V.
1994-01-01
The 'fermionic poles' contribution to the twist-3 single asymmetry in the gluon Compton process is calculated. The existence of 'gluonic poles' seems to contradict the positivity of the density matrix. Qualitative predictions for the direct photon and jet asymmetries are presented. 13 refs., 2 figs
Compton's Kinematics and Einstein - Ehrenfest's radiation theory
International Nuclear Information System (INIS)
Barranco, A.V.; Franca, H.M.
1988-09-01
The Compton kinematic relations are obtained from entirely classical arguments, that is, without the corpuscular concept of the photon. The calculations are nonrelativistic and result from Einstein and Ehrenfest's radiation theory, modified in order to introduce the effects of the classical zero-point fields characteristic of Stochastic Electrodynamics. (author) [pt
Constraints on low energy Compton scattering amplitudes
International Nuclear Information System (INIS)
Raszillier, I.
1979-04-01
We derive constraints and correlations of a fairly general type for Compton scattering amplitudes at energies below the photoproduction threshold and fixed momentum transfer, following from (an upper bound on) the corresponding differential cross section above the photoproduction threshold. The derivation involves the solution of an extremal problem in a certain space of vector-valued analytic functions. (author)
Energy Technology Data Exchange (ETDEWEB)
Julliot, C [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires
1960-07-01
This instrument, derived from the recording γ spectrograph, gives better definition of photoelectric peaks by eliminating pulses caused by γ photons incompletely absorbed in the scintillator (Compton effect). The system uses an original method devised by Peirson: the spectrum, devoid of a photoelectric peak, supplied by a detector equipped with an anthracene scintillator, is subtracted from the spectrum provided by a conventional detector equipped with a NaI(Tl) scintillator. Adjustment of the mechanical assembly, the detector support, and the source allows the detection yields to be matched. The electronic system is identical in presentation to that of the recording spectrograph. (author)
Stability analysis and time-step limits for a Monte Carlo Compton-scattering method
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.
2010-01-01
A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.
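The two-limit structure described above (a looser limit that only prevents instability, and a stricter one that also suppresses nonphysical oscillations) can be illustrated on a toy explicit relaxation equation du/dt = -k(u - v). This is an illustrative sketch under that assumption, not the paper's actual Monte Carlo time-step criteria:

```python
def relax_step(u, v, k, dt):
    """One explicit (forward Euler) step of du/dt = -k (u - v).
    The error (u - v) is multiplied each step by g = 1 - k*dt."""
    return u + dt * (-k) * (u - v)

def classify(k, dt):
    """Stable iff |1 - k*dt| < 1 (dt < 2/k); additionally monotone
    (no sign-flipping oscillation) iff 1 - k*dt >= 0 (dt <= 1/k)."""
    g = 1.0 - k * dt
    if abs(g) >= 1.0:
        return "unstable"
    return "monotone" if g >= 0.0 else "oscillatory"

k = 4.0  # relaxation rate; limits are 1/k = 0.25 and 2/k = 0.5
for dt in (0.1, 0.3, 0.6):
    print(dt, classify(k, dt))
```

The stricter limit dt ≤ 1/k plays the role of the paper's second, more restrictive bound: the scheme is still stable between 1/k and 2/k, but the solution overshoots and oscillates about equilibrium.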
Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane
2016-01-01
Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), and hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001, Bonferroni-corrected post hoc t test). The highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI), are desirable for source imaging of IEDs because this might influence surgical decisions. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.
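The minimum-norm (IID-prior) flavor of distributed inverse solution mentioned in this record can be sketched as a Tikhonov-regularized estimate. This is a toy sketch with a random lead-field matrix `L` (the actual methods operate on realistic head models and additional priors):

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=1e-2):
    """Tikhonov-regularized minimum-norm source estimate,
    x_hat = L^T (L L^T + lam*I)^{-1} y.

    L   : (n_sensors, n_sources) lead-field (forward) matrix
    y   : (n_sensors,) EEG/MEG measurement at one time sample
    lam : regularization weight expressing the minimum-norm prior
    """
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, y)

# Toy problem: 8 sensors, 50 candidate cortical sources, one active source.
rng = np.random.default_rng(0)
L = rng.standard_normal((8, 50))
x_true = np.zeros(50)
x_true[17] = 1.0
x_hat = minimum_norm_estimate(L, L @ x_true)
print("estimated source maximum at vertex", int(np.argmax(np.abs(x_hat))))
```

The "source maximum" printed here corresponds to the per-study source maxima the record compares against intracranial EEG; in an underdetermined problem like this the estimate is spatially spread, which is exactly the dispersion the study quantifies.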
Design Study for Direction Variable Compton Scattering Gamma Ray
Kii, T.; Omer, M.; Negm, H.; Choi, Y. W.; Kinjo, R.; Yoshida, K.; Konstantin, T.; Kimura, N.; Ishida, K.; Imon, H.; Shibata, M.; Shimahashi, K.; Komai, T.; Okumura, K.; Zen, H.; Masuda, K.; Hori, T.; Ohgaki, H.
2013-03-01
A monochromatic gamma-ray beam is attractive for isotope-specific material/medical imaging and non-destructive inspection. A laser Compton scattering (LCS) gamma-ray source, based on the backward Compton scattering of laser light on high-energy electrons, can generate energy-variable, quasi-monochromatic gamma rays. Owing to the kinematics of LCS, the direction of the gamma beam is limited to the direction of the high-energy electrons; the target object is therefore placed on the beam axis and is usually moved if spatial scanning is required. In this work, we proposed an electron beam transport system consisting of four bending magnets, which fixes the collision point while controlling the electron beam direction, and a laser system consisting of a spheroidal mirror and a parabolic mirror, which likewise keeps the collision point fixed. The collision point can then be placed at one focus of the spheroid, so that both the gamma-ray direction and the collision angle between the electron beam and the laser beam can be easily controlled. As a result, the travelling direction of the LCS gamma rays can be controlled within the limits of the beam transport system, the gamma-ray energy can be controlled through the incident angle of the colliding beams, and the energy spread can be controlled by changing the divergence of the laser beam.
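The energy control via collision angle described in this record follows from two-body Compton kinematics. A sketch assuming the standard exact scattered-energy formula, with angles measured from the electron direction in a coplanar geometry (the beam energies and laser wavelength below are illustrative, not the paper's parameters):

```python
import math

ME_C2 = 0.511e6  # electron rest energy [eV]

def lcs_photon_energy(E_e_eV, E_L_eV, theta_i=math.pi, theta_f=0.0):
    """Scattered photon energy in laser Compton scattering.

    E_e_eV  : electron total energy [eV]
    E_L_eV  : incident laser photon energy [eV]
    theta_i : angle between electron and incident-photon directions
              (pi = head-on collision)
    theta_f : observation angle w.r.t. the electron direction
              (0 = on-axis, where the energy is maximal)
    """
    gamma = E_e_eV / ME_C2
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    theta_if = theta_i - theta_f  # incident-to-scattered photon angle (coplanar)
    num = E_L_eV * (1.0 - beta * math.cos(theta_i))
    den = (1.0 - beta * math.cos(theta_f)
           + (E_L_eV / E_e_eV) * (1.0 - math.cos(theta_if)))
    return num / den

# 25 MeV electrons against a 1.2 eV (~1 um) laser: head-on vs 90-degree collision.
for th in (math.pi, math.pi / 2):
    e_g = lcs_photon_energy(25e6, 1.2, theta_i=th)
    print(f"collision angle {math.degrees(th):5.0f} deg -> E_gamma = {e_g/1e3:.2f} keV")
```

Head-on collision recovers the familiar on-axis estimate E_γ ≈ 4γ²E_L (recoil is negligible at these energies), and moving to a 90° collision roughly halves the gamma-ray energy, which is the angle-tuning knob the proposed transport system exploits.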
B. de Foy; C. Wiedinmyer; J. J. Schauer
2012-01-01
Gaseous elemental mercury is a global pollutant that can lead to serious health concerns via deposition to the biosphere and bio-accumulation in the food chain. Hourly measurements between June 2004 and May 2005 in an urban site (Milwaukee, WI) show elevated levels of mercury in the atmosphere with numerous short-lived peaks as well as longer-lived episodes. The measurements are analyzed with an inverse model to obtain information about mercury emissions. The model is based on high res...
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, M.; Stohl, A.
2017-01-01
Roč. 17, č. 20 (2017), s. 12677-12696 ISSN 1680-7316 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Bayesian inverse modeling * iodine-131 * consequences of the iodine release Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 5.318, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/tichy-0480506.pdf
Liu, Yikan
2015-01-01
In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...
Angle-averaged Compton cross sections
International Nuclear Information System (INIS)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV
Stochastic Electrodynamics and the Compton effect
International Nuclear Information System (INIS)
Franca, H.M.; Barranco, A.V.
1987-12-01
Some of the main qualitative features of the Compton effect are described within the realm of Classical Stochastic Electrodynamics (SED). Indications are found that the combined action of the incident wave (frequency ω), the radiation reaction force, and the zero-point fluctuating electromagnetic fields of SED is able to give a high average recoil velocity v/c = α/(1+α) to the charged particle. The estimate of the parameter α gives α ∼ ℏω/mc², where 2πℏ is Planck's constant and mc² is the rest energy of the particle. It is verified that this recoil is just that necessary to explain the frequency shift observed in the scattered radiation as due to a classical double Doppler shift. The differential cross section for the radiation scattered by the recoiling charge is also calculated using classical electromagnetism. The same expression as obtained by Compton in his fundamental work of 1923 is found. (author) [pt
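The claim that the recoil v/c = α/(1+α) reproduces the Compton shift can be checked directly for 180° backscatter: the double Doppler factor (1-β)/(1+β) with β = α/(1+α) equals the quantum result 1/(1+2α) identically. A numerical sketch of that identity:

```python
def compton_backscatter_shift(alpha):
    """Relative frequency of 180-degree Compton-scattered radiation,
    omega_s / omega = 1 / (1 + 2*alpha), electron initially at rest;
    alpha = hbar*omega / (m*c^2)."""
    return 1.0 / (1.0 + 2.0 * alpha)

def double_doppler_shift(alpha):
    """Classical double Doppler factor (1 - beta) / (1 + beta) for radiation
    reflected by a charge recoiling at beta = alpha / (1 + alpha),
    as estimated in the record above."""
    beta = alpha / (1.0 + alpha)
    return (1.0 - beta) / (1.0 + beta)

for a in (0.01, 0.1, 1.0):
    print(a, compton_backscatter_shift(a), double_doppler_shift(a))
```

Algebraically, 1-β = 1/(1+α) and 1+β = (1+2α)/(1+α), so the ratio collapses to 1/(1+2α), which is why the two columns printed above agree for every α.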
Angle-averaged Compton cross sections
Energy Technology Data Exchange (ETDEWEB)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Laser Compton polarimetry of proton beams
International Nuclear Information System (INIS)
Stillman, A.
1995-01-01
A need exists for non-destructive polarization measurements of the polarized proton beams in the AGS and, in the future, in RHIC. One way to make such measurements is to scatter photons from the polarized beams. Until now, such measurements were impossible because of the extremely low Compton scattering cross section from protons. Modern lasers can now provide enough photons per pulse not only to scatter from proton beams but also, at least in RHIC, to analyze their polarization.
Future measurements of deeply virtual Compton scattering
International Nuclear Information System (INIS)
Korotkov, V.A.; Nowak, W.D.
2001-09-01
Prospects for future measurements of Deeply Virtual Compton Scattering are studied using different simple models for parameterizations of generalized parton distributions (GPDs). Measurements of the lepton charge and lepton beam helicity asymmetry will yield important input for theoretical models towards the future extraction of GPDs. The kinematics of the HERMES experiment, complemented with a recoil detector, was adopted to arrive at realistic projected statistical uncertainties. (orig.)
Colour dipoles and virtual Compton scattering
International Nuclear Information System (INIS)
McDermott, M.
2002-01-01
An analysis of Deeply Virtual Compton Scattering (DVCS) is made within the colour dipole model. We compare and contrast two models for the dipole cross-section which have been successful in describing structure function data. Both models agree with the available cross section data on DVCS from HERA. We give predictions for various azimuthal angle asymmetries in HERA kinematics and for the DVCS cross section in the THERA region. (orig.)
Helium Compton Form Factor Measurements at CLAS
Energy Technology Data Exchange (ETDEWEB)
Voutier, Eric J.-M. [Laboratoire de Physique Subatomique et Cosmologie
2013-07-01
The distribution of the parton content of nuclei, as encoded via the generalized parton distributions (GPDs), can be accessed via the deeply virtual Compton scattering (DVCS) process contributing to the cross section for leptoproduction of real photons. Similarly to the scattering of light by a material, DVCS provides information about the dynamics and the spatial structure of hadrons. The sensitivity of this process to the lepton beam polarization allows one to single out the DVCS amplitude in terms of Compton form factors that contain GPD information. The beam spin asymmetry of the $^4$He($\vec {\mathrm e}$,e$' \gamma ^4$He) process was measured in the experimental Hall B of the Jefferson Laboratory to extract the real and imaginary parts of the twist-2 Compton form factor of the $^4$He nucleus. The experimental results reported here demonstrate the relevance of this method for such a goal, and suggest the dominance of the Bethe-Heitler amplitude in the unpolarized process in the kinematic range explored by the experiment.
Beam Size Measurement by Optical Diffraction Radiation and Laser System for Compton Polarimeter
Energy Technology Data Exchange (ETDEWEB)
Liu, Chuyu [Peking Univ., Beijing (China)
2012-12-31
Beam diagnostics is an essential constituent of any accelerator, so much so that it has been called the "organs of sense" or "eyes of the accelerator." Beam diagnostics is a rich field in which a great variety of physical effects and principles are put to use. Some devices are based on the electromagnetic influence of moving charges, such as Faraday cups, beam transformers, and pick-ups; some rely on the Coulomb interaction of charged particles with matter, such as scintillators, viewing screens, and ionization chambers; nuclear or elementary particle physics interactions occur in other devices, such as beam loss monitors, polarimeters, and luminosity monitors; some measure photons emitted by moving charges, such as transition radiation monitors, synchrotron radiation monitors, and diffraction radiation, which is the topic of the first part of this thesis; and some make use of the interaction of particles with photons, such as laser wires and Compton polarimeters, which is the second part of my thesis. Diagnostics let us perceive what properties a beam has and how it behaves in a machine, provide guidelines for commissioning and controlling the machine, and supply indispensable parameters vital to physics experiments. In the next two decades, the research highlights will be colliders (TESLA, CLIC, JLC) and fourth-generation light sources (TESLA FEL, LCLS, SPring-8 FEL) based on linear accelerators. These machines require a new generation of accelerator with smaller beams, better stability, and greater efficiency. Compared with existing linear accelerators, the performance of the next generation will be greatly improved in all aspects, such as 10 times smaller horizontal beam size, more than 10 times smaller vertical beam size, and several times higher peak power. Furthermore, some special positions in the accelerator have even more stringent requirements, such as the interaction point of colliders and the wiggler of free electron lasers. Higher performance of these accelerators increases the
Beam Size Measurement by Optical Diffraction Radiation and Laser System for Compton Polarimeter
International Nuclear Information System (INIS)
Liu, Chuyu
2012-01-01
Beam diagnostics is an essential constituent of any accelerator, so much so that it has been called the 'organs of sense' or 'eyes of the accelerator.' Beam diagnostics is a rich field in which a great variety of physical effects and principles are put to use. Some devices are based on the electromagnetic influence of moving charges, such as Faraday cups, beam transformers, and pick-ups; some rely on the Coulomb interaction of charged particles with matter, such as scintillators, viewing screens, and ionization chambers; nuclear or elementary particle physics interactions occur in other devices, such as beam loss monitors, polarimeters, and luminosity monitors; some measure photons emitted by moving charges, such as transition radiation monitors, synchrotron radiation monitors, and diffraction radiation, which is the topic of the first part of this thesis; and some make use of the interaction of particles with photons, such as laser wires and Compton polarimeters, which is the second part of my thesis. Diagnostics let us perceive what properties a beam has and how it behaves in a machine, provide guidelines for commissioning and controlling the machine, and supply indispensable parameters vital to physics experiments. In the next two decades, the research highlights will be colliders (TESLA, CLIC, JLC) and fourth-generation light sources (TESLA FEL, LCLS, SPring-8 FEL) based on linear accelerators. These machines require a new generation of accelerator with smaller beams, better stability, and greater efficiency. Compared with existing linear accelerators, the performance of the next generation will be greatly improved in all aspects, such as 10 times smaller horizontal beam size, more than 10 times smaller vertical beam size, and several times higher peak power. Furthermore, some special positions in the accelerator have even more stringent requirements, such as the interaction point of colliders and the wiggler of free electron lasers. Higher performance of these accelerators increases the
Bawden, Gerald W.
2001-01-01
Coseismic leveling and triangulation observations are used to determine the faulting geometry and slip distribution of the July 21, 1952, Mw 7.3 Kern County earthquake on the White Wolf fault. A singular value decomposition inversion is used to assess the ability of the geodetic network to resolve slip along a multisegment fault and shows that the network is sufficient to resolve slip along the surface rupture to a depth of 10 km. Below 10 km, the network can only resolve dip slip near the fa...
Compton radiography, 3. Compton scinti-tomography of the chest diseases
Energy Technology Data Exchange (ETDEWEB)
Okuyama, S; Sera, K; Shishido, F; Fukuda, H [Tohoku Univ., Sendai (Japan). Research Inst. for Tuberculosis, Leprosy and Cancer; Mishina, H
1977-10-01
Compton radiography aims to collect depth information by recording with a scinticamera those Compton rays that result from scattering of a monoenergetic gamma beam by a volume of interest. Appreciably clear clinical scinti-tomograms were obtained of the chest wall and of intrathoracic structures such as the lungs, intrapulmonary pathologies, and the mediastinum. This was achieved without any computer assistance for image reconstruction, in contrast to XCT. Suitable corrections for the attenuation of the primary monoenergetic gamma rays and of the secondary Compton rays would greatly improve the image quality, as well as imaging time and radiation exposure. This technique is simple in principle, relatively cheap, and holds promise for the development of stereoptic fluoroscopy that would be extremely helpful in guiding procedures such as visceral biopsies.
Compton suppression system at Penn State Radiation Science and Engineering Center
International Nuclear Information System (INIS)
Cetiner, N.Oe.; Uenlue, K.; Brenizer, J.S.
2008-01-01
A Compton suppression system is used to reduce the contribution of scattered gamma rays that originate within the HPGe detector to the gamma-ray spectrum. The HPGe detector is surrounded by an assembly of guard detectors, usually NaI(Tl). The HPGe and NaI(Tl) detectors are operated in anti-coincidence mode. The NaI(Tl) guard detector detects photons that Compton scatter within, and subsequently escape from, the HPGe detector. Since these photons are correlated with partial energy deposition within the detector, much of the resulting Compton continuum can be subtracted from the spectrum, reducing the unwanted background in gamma-ray spectra. A commercially available Compton suppression spectrometer (CSS) was purchased from Canberra Industries and tested at the Radiation Science and Engineering Center at Penn State University. The PSU-CSS includes a reverse-bias HPGe detector, four annular NaI(Tl) detectors, a NaI(Tl) plug detector, detector shields, data acquisition electronics, and a data processing computer. The HPGe detector is n-type with 54% relative efficiency. The guard detectors form an annulus 9 inches in diameter and 9 inches in height, with a plug detector that is raised and lowered into the annulus by a special lift apparatus. The detector assembly is placed in a shielding cave. State-of-the-art electronics and software are used. The system was tested using standard sources, a neutron-activated NIST SRM sample, and dendrochronologically dated tree ring samples. The PSU-CSS dramatically improved the peak-to-Compton ratio, up to 1000:1 for the 137Cs source. (author)
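The anti-coincidence logic at the heart of such a system can be sketched as an event-by-event veto: an HPGe event is discarded whenever a guard-detector hit falls inside a coincidence window around it. A minimal sketch with hypothetical hit times and a made-up window width (real systems implement this in hardware or fast DAQ firmware):

```python
import bisect

def suppress(hpge_events, guard_times, window_ns=100.0):
    """Anti-coincidence veto: discard an HPGe event whenever any guard
    (NaI(Tl)) hit falls within +/- window_ns of it, since such events
    likely deposited only part of their energy (Compton escape)."""
    guard = sorted(guard_times)
    kept = []
    for t, energy_keV in hpge_events:
        # First guard hit at or after the window start; veto if inside window.
        i = bisect.bisect_left(guard, t - window_ns)
        vetoed = i < len(guard) and guard[i] <= t + window_ns
        if not vetoed:
            kept.append((t, energy_keV))
    return kept

# Two HPGe hits; a guard hit at t = 100 ns vetoes only the nearby one.
events = [(50.0, 662.0), (120.0, 300.0)]
print(suppress(events, [100.0], window_ns=30.0))  # -> [(50.0, 662.0)]
```

Histogramming only the kept events is what removes much of the Compton continuum while leaving full-energy (photopeak) events untouched, improving the peak-to-Compton ratio as the record describes.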
International Nuclear Information System (INIS)
Huddleston, A.L.; Weaver, J.
1980-01-01
Several methods important in the clinical diagnosis of skeletal diseases have been proposed for the determination of bone mass, such as photon absorptiometry, computed tomography, and neutron activation. None of these methods provides for the determination of the physical density of bone. In the Radiological Physics Research Laboratory at the University of Virginia, the principles of Compton scattering are being investigated with the intent of determining the electron density and the physical density of human bone. A Compton-scatter densitometer has been constructed for the in vivo density determination of the femoral head. This technique utilizes a collimated low-energy gamma source and detector system. The method has been tested in cadavers and in samples of known density and has an accuracy of 2%. A second densitometer has been designed for the in vivo determination of the electron density of the vertebrae, based upon a new technique which employs dual-energy Compton scattering in the spinal column. These systems will be discussed, and the principles of dual-energy Compton-scatter densitometry will be presented. The importance of these isotope techniques and the feasibility of in vivo density determination in the vertebrae and femoral head will be discussed as they relate to clinical diagnosis and research. (author)
Energy Technology Data Exchange (ETDEWEB)
Lee, Taewoong; Lee, Hyounggun; Kim, Younghak; Lee, Wonho [Korea University, Seoul (Korea, Republic of)
2017-07-15
The performance of a Compton imager using a single three-dimensional position-sensitive LYSO scintillator detector was estimated using a Monte Carlo simulation. The Compton imager consisted of a single LYSO scintillator with a pixelized structure. The sizes of the scintillator and of each pixel were 1.3 × 1.3 × 1.3 cm³ and 0.3 × 0.3 × 0.3 cm³, respectively. The order of γ-ray interactions was determined based on the energies deposited in each detector. After determination of the interaction sequence, several reconstruction algorithms, including simple back-projection, filtered back-projection, and list-mode maximum-likelihood expectation maximization (LM-MLEM), were applied and compared in terms of their angular resolution and signal-to-noise ratio (SNR) for several γ-ray energies. The LM-MLEM reconstruction algorithm exhibited the best performance for Compton imaging, maintaining high angular resolution and SNR. Two 137Cs sources (662 keV) could be distinguished if they were more than 17° apart. The reconstructed Compton images showed the precise position and distribution of various radioisotopes, demonstrating the feasibility of monitoring nuclear materials in homeland security and radioactive waste management applications.
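The MLEM iteration underlying LM-MLEM can be sketched on a tiny generic system; in the list-mode Compton version the rows of the system matrix become cone-of-response weights for each detected event, but the multiplicative update is the same. A toy sketch with a hypothetical 3×3 system matrix:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Generic MLEM iteration: x <- (x / s) * A^T (y / (A x)),
    with sensitivity s = A^T 1. Nonnegativity is preserved automatically
    by the multiplicative update."""
    x = np.ones(A.shape[1])
    s = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / s
    return x

# Toy system: 3 measurement bins, 3 image voxels, noiseless data.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.9]])
x_true = np.array([2.0, 0.5, 1.0])
x_hat = mlem(A, A @ x_true)
print(np.round(x_hat, 3))
```

With noiseless, consistent data the iteration converges toward the true intensities; with real list-mode Compton data the update instead sharpens the overlap of event cones, which is why LM-MLEM outperforms simple and filtered back-projection in angular resolution and SNR.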
Fast cooling of bunches in Compton storage rings*
Bulyak, E; Zimmermann, F
2011-01-01
We propose an enhancement of laser radiative cooling by utilizing laser pulses of small spatial and temporal dimensions, which interact only with a fraction of an electron bunch circulating in a storage ring. We studied the dynamics of such an electron bunch when laser photons scatter off the electrons at a collision point placed in a section with nonzero dispersion. In this case of ‘asymmetric cooling’, the stationary energy spread is much smaller than under conditions of regular scattering, where the laser spot size is larger than the electron beam, and the synchrotron oscillations are damped faster. Coherent oscillations of large amplitude may be damped within one synchrotron period, so that this method can support the rapid successive injection of many bunches in longitudinal phase space for stacking purposes. Results of extensive simulations are presented for the performance optimization of Compton gamma-ray sources and damping rings.
Second LaBr3 Compton Telescope Prototype
International Nuclear Information System (INIS)
Llosa, Gabriela; Cabello, Jorge; Gillam, John-E.; Lacasta, Carlos; Oliver, Josep F.; Rafecas, Magdalena; Solaz, Carles; Solevi, Paola; Stankova, Vera; Torres-Espallardo, Irene; Trovato, Marco
2013-06-01
A Compton telescope for dose delivery monitoring in hadron therapy is under development at IFIC Valencia within the European project ENVISION. The telescope will consist of three detector planes, each composed of a LaBr3 continuous scintillator crystal coupled to four silicon photomultiplier (SiPM) arrays. After the development of a first prototype which served to assess the principle, a second prototype with larger crystals has been assembled and is being tested. The current version of the prototype consists of two detector layers, each composed of a 32.5 × 35 mm² crystal coupled to four SiPM arrays. The VATA64HDR16 ASIC has been employed as front-end electronics. The readout system consists of a custom-made data acquisition board. Tests with point-like sources have been carried out in the laboratory, assessing the correct functioning of the device. The system optimization is ongoing. (authors)
Compton imaging with the PorGamRays spectrometer
Energy Technology Data Exchange (ETDEWEB)
Judson, D.S., E-mail: dsj@ns.ph.liv.ac.uk [Department of Physics, University of Liverpool, Liverpool L697ZE (United Kingdom); Boston, A.J. [Department of Physics, University of Liverpool, Liverpool L697ZE (United Kingdom); Coleman-Smith, P.J. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom); Cullen, D.M. [Schuster Laboratory, University of Manchester, Manchester M13 9PL (United Kingdom); Hardie, A. [STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot OX11 0QX (United Kingdom); Harkness, L.J. [Department of Physics, University of Liverpool, Liverpool L697ZE (United Kingdom); Jones, L.L. [STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot OX11 0QX (United Kingdom); Jones, M. [Department of Physics, University of Liverpool, Liverpool L697ZE (United Kingdom); Lazarus, I. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom); Nolan, P.J. [Department of Physics, University of Liverpool, Liverpool L697ZE (United Kingdom); Pucknell, V. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom); Rigby, S.V. [Department of Physics, University of Liverpool, Liverpool L697ZE (United Kingdom); Seller, P. [STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot OX11 0QX (United Kingdom); Scraggs, D.P. [Department of Physics, University of Liverpool, Liverpool L697ZE (United Kingdom); Simpson, J. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom); Slee, M.; Sweeney, A. [Department of Physics, University of Liverpool, Liverpool L697ZE (United Kingdom)
2011-10-01
The PorGamRays project aims to develop a portable gamma-ray detection system with both spectroscopic and imaging capabilities. The system is designed around a stack of thin Cadmium Zinc Telluride (CZT) detectors. The imaging capability utilises the Compton camera principle. Each detector is segmented into 100 pixels which are read out through custom designed Application Specific Integrated Circuits (ASICs). This device has potential applications in the security, decommissioning and medical fields. This work focuses on the near-field imaging performance of a lab-based demonstrator consisting of two pixelated CZT detectors, each of which is bonded to a NUCAM II ASIC. Measurements have been made with point 133Ba and 57Co sources located approximately 35 mm from the surface of the scattering detector. Position resolution of approximately 20 mm FWHM in the x and y planes is demonstrated.
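The Compton camera principle used here reconstructs, for each two-interaction event, a cone of possible source directions from the two energy deposits. A sketch of that kinematic step (a standard textbook relation, not PorGamRays-specific code; energies are illustrative):

```python
import math

ME_C2 = 511.0  # electron rest energy [keV]

def cone_angle_deg(E1_keV, E2_keV):
    """Compton-cone half-opening angle for a two-interaction event:
    E1 deposited in the scatterer, E2 absorbed downstream, so the
    incident energy is E0 = E1 + E2 and
        cos(theta) = 1 - me*c^2 * (1/E2 - 1/E0).
    Returns None when the pair is kinematically forbidden."""
    E0 = E1_keV + E2_keV
    c = 1.0 - ME_C2 * (1.0 / E2_keV - 1.0 / E0)
    if not -1.0 <= c <= 1.0:
        return None
    return math.degrees(math.acos(c))

# A 662 keV photon that deposits 200 keV in the scatter detector:
print(cone_angle_deg(200.0, 462.0))
```

Intersecting many such cones (one per event) on an image plane is what yields the near-field source image, and the ≈20 mm position resolution quoted above reflects how sharply those cones overlap.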
Generation of laser Compton gamma-rays using Compact ERL
International Nuclear Information System (INIS)
Shizuma, Toshiyuki; Hajima, Ryoichi; Nagai, Ryoji; Hayakawa, Takehito; Mori, Michiaki; Seya, Michio
2015-01-01
A nondestructive isotope-specific assay system using nuclear resonance fluorescence has been developed at JAEA. In this system, intense, mono-energetic laser Compton scattering (LCS) gamma-rays are generated by combining an energy recovery linac (ERL) and a laser enhancement cavity. As a technical development toward such an intense gamma-ray source, we demonstrated generation of LCS gamma-rays using the Compact ERL (supported by the Ministry of Education, Culture, Sports, Science and Technology) developed in collaboration with KEK. We also measured X-ray fluorescence for elements near the iron region by using mono-energetic LCS gamma-rays. In this presentation, we show the results of the experiment and future plans. (author)
A Compton camera prototype for prompt gamma medical imaging
Directory of Open Access Journals (Sweden)
Thirolf P.G.
2016-01-01
A Compton camera prototype for position-sensitive detection of prompt γ rays from proton-induced nuclear reactions is being developed in Garching. The detector system allows tracking of the Compton-scattered electrons. The camera consists of a monolithic LaBr3:Ce scintillation absorber crystal, read out by a multi-anode PMT, preceded by a stacked array of 6 double-sided silicon strip detectors acting as scatterers. The LaBr3:Ce crystal has been characterized with radioactive sources. Online commissioning measurements were performed with a pulsed deuteron beam at the Garching Tandem accelerator and with a clinical proton beam at the OncoRay facility in Dresden. The determination of the interaction point of the photons in the monolithic crystal was investigated.
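The cone-reconstruction step common to the Compton camera records above can be illustrated with a short, hypothetical sketch (not the Garching group's code): the opening angle of the Compton cone follows from the energies deposited in the scatterer and absorber.

```python
import math

M_E_C2 = 510.999  # electron rest energy (keV)

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Opening angle (degrees) of the Compton cone from deposited energies.

    e_scatter_kev: energy given to the recoil electron in the scatter detector
    e_absorb_kev:  energy of the scattered photon, fully absorbed downstream
    Assumes full absorption, so the incident energy is the sum of the two.
    """
    e0 = e_scatter_kev + e_absorb_kev   # incident photon energy
    e1 = e_absorb_kev                   # scattered photon energy
    # Compton kinematics: cos(theta) = 1 - m_e c^2 (1/E' - 1/E)
    cos_theta = 1.0 - M_E_C2 * (1.0 / e1 - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))

# Example: a 356 keV photon (a 133Ba line) depositing 80 keV in the scatterer.
angle = compton_cone_angle(80.0, 276.0)
```

Intersecting many such cones is what yields the source image in the near-field measurements described in the record.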
Hybrid coded aperture and Compton imaging using an active mask
International Nuclear Information System (INIS)
Schultz, L.J.; Wallace, M.S.; Galassi, M.C.; Hoover, A.S.; Mocko, M.; Palmer, D.M.; Tornga, S.R.; Kippen, R.M.; Hynes, M.V.; Toolin, M.J.; Harris, B.; McElroy, J.E.; Wakeford, D.; Lanza, R.C.; Horn, B.K.P.; Wehe, D.K.
2009-01-01
The trimodal imager (TMI) images gamma-ray sources from a mobile platform using both coded aperture (CA) and Compton imaging (CI) modalities. In this paper we will discuss development and performance of image reconstruction algorithms for the TMI. In order to develop algorithms in parallel with detector hardware we are using a GEANT4 [J. Allison, K. Amako, J. Apostolakis, H. Araujo, P.A. Dubois, M. Asai, G. Barrand, R. Capra, S. Chauvie, R. Chytracek, G. Cirrone, G. Cooperman, G. Cosmo, G. Cuttone, G. Daquino, et al., IEEE Trans. Nucl. Sci. NS-53 (1) (2006) 270] based simulation package to produce realistic data sets for code development. The simulation code incorporates detailed detector modeling, contributions from natural background radiation, and validation of simulation results against measured data. Maximum likelihood algorithms for both imaging methods are discussed, as well as a hybrid imaging algorithm wherein CA and CI information is fused to generate a higher fidelity reconstruction.
Laser Compton polarimetry at JLab and MAMI. A status report
International Nuclear Information System (INIS)
Diefenbach, J.; Imai, Y.; Han Lee, J.; Maas, F.; Taylor, S.
2007-01-01
For modern parity violation experiments it is crucial to measure and monitor the electron beam polarization continuously. In recent years, different high-luminosity concepts for precision Compton backscattering polarimetry have been developed for use at modern CW electron beam accelerator facilities. As Compton backscattering polarimetry is free of intrinsic systematic uncertainties, it can be a superior alternative to other polarimetry techniques such as Moeller and Mott scattering. State-of-the-art high-luminosity Compton backscattering designs currently in use and under development at JLab and Mainz are compared to each other. The latest results from the Mainz A4 Compton polarimeter are presented. (orig.)
Induced Compton scattering effects in radiation transport approximations
International Nuclear Information System (INIS)
Gibson, D.R. Jr.
1982-01-01
In this thesis the method of characteristics is used to solve radiation transport problems with induced Compton scattering effects included. The methods used to date have only addressed problems in which either induced Compton scattering is ignored, or problems in which linear scattering is ignored. Also, problems which include both induced Compton scattering and spatial effects have not been considered previously. The introduction of induced scattering into the radiation transport equation results in a quadratic nonlinearity. Methods are developed to solve problems in which both linear and nonlinear Compton scattering are important. Solutions to scattering problems are found for a variety of initial photon energy distributions
Induced Compton-scattering effects in radiation-transport approximations
International Nuclear Information System (INIS)
Gibson, D.R. Jr.
1982-02-01
The method of characteristics is used to solve radiation transport problems with induced Compton scattering effects included. The methods used to date have only addressed problems in which either induced Compton scattering is ignored, or problems in which linear scattering is ignored. Also, problems which include both induced Compton scattering and spatial effects have not been considered previously. The introduction of induced scattering into the radiation transport equation results in a quadratic nonlinearity. Methods are developed to solve problems in which both linear and nonlinear Compton scattering are important. Solutions to scattering problems are found for a variety of initial photon energy distributions
The effect of Compton scattering on quantitative SPECT imaging
International Nuclear Information System (INIS)
Beck, J.W.; Jaszczak, R.J.; Starmer, C.F.
1982-01-01
A Monte Carlo code has been developed to simulate the response of a SPECT system. The accuracy of the code has been verified, and the code has been used in this research to study and illustrate the effects of Compton scatter on quantitative SPECT measurements. The effects of Compton scattered radiation on gamma camera response have been discussed by several authors, and are extended here to rotating gamma camera SPECT systems. The unique feature of this research is the pictorial illustration of the Compton scattered and unscattered components of the photopeak data in SPECT imaging, by simulating phantom studies with and without Compton scatter.
Virtual Compton scattering off protons at moderately large momentum transfer
International Nuclear Information System (INIS)
Kroll, P.
1996-01-01
The amplitudes for virtual Compton scattering off protons are calculated within the framework of the diquark model, in which protons are viewed as being built up of quarks and diquarks. The latter objects are treated as quasi-elementary constituents of the proton. Virtual Compton scattering, photon electroproduction off protons and the Bethe-Heitler contamination are discussed for various kinematical situations. We particularly emphasize the role of the electron asymmetry for measuring the relative phases between the virtual Compton and the Bethe-Heitler amplitudes. It is also shown that the model is able to describe very well the experimental data for real Compton scattering off protons. (orig.)
Kinematic source inversion of the 2017 Puebla-Morelos, Mexico earthquake (2017/09/19, Mw.7.1)
Iglesias, A.; Castro-Artola, O.; Hjorleifsdottir, V.; Singh, S. K.; Ji, C.; Franco-Sánchez, S. I.
2017-12-01
On September 19th 2017, an Mw 7.1 earthquake struck Central Mexico, causing severe damage in the epicentral region, especially to small and medium size houses as well as historical buildings such as churches and government offices. In Mexico City, at a distance of 100 km from the epicenter, 38 buildings collapsed. Authorities reported that 369 persons were killed by the earthquake (>60% in Mexico City). We determined the hypocentral location (18.406N, 98.706W, d=57 km) from regional data, situating this earthquake inside the subducted Cocos Plate, with a normal fault mechanism (GlobalCMT: strike = 300°, dip = 44°, rake = −82°). In this presentation we show the slip on the fault plane, determined by 1) a frequency-domain inversion using local and regional acceleration records that have been numerically integrated twice and bandpass filtered between 2 and 30 s, and 2) a wavelet-domain inversion using teleseismic body and surface waves, filtered between 1-100 s and 50-150 s respectively, as well as static offsets. In both methods the fault plane is divided into subfaults, and for each subfault we invert for the average slip and the timing of initiation of slip. In the first method the slip direction is fixed to the rake direction and we invert for the rise time. In the second method the direction of slip is estimated, with values between -90° and +90° allowed, and the time history is an asymmetric cosine time function, for which we determine the "rise" and "fall" durations. For both methods, synthetic seismograms, based on the GlobalCMT focal mechanism, are computed for each subfault-station pair and for three components (Z, N-S, E-W). Preliminary results, using local data, show some slip concentrated close to the hypocentral location and a large patch 20 km in the NW direction far from the origin. Using teleseismic data, it is difficult to distinguish between the two fault planes, as the waveforms are equally well fit using either one of them. However, both are consistent with a
Matoza, Robin S.; Chouet, Bernard A.; Dawson, Phillip B.; Shearer, Peter M.; Haney, Matthew M.; Waite, Gregory P.; Moran, Seth C.; Mikesell, T. Dylan
2015-01-01
Long-period (LP, 0.5-5 Hz) seismicity, observed at volcanoes worldwide, is a recognized signature of unrest and eruption. Cyclic LP “drumbeating” was the characteristic seismicity accompanying the sustained dome-building phase of the 2004–2008 eruption of Mount St. Helens (MSH), WA. However, together with the LP drumbeating was a near-continuous, randomly occurring series of tiny LP seismic events (LP “subevents”), which may hold important additional information on the mechanism of seismogenesis at restless volcanoes. We employ template matching, phase-weighted stacking, and full-waveform inversion to image the source mechanism of one multiplet of these LP subevents at MSH in July 2005. The signal-to-noise ratios of the individual events are too low to produce reliable waveform-inversion results, but the events are repetitive and can be stacked. We apply network-based template matching to 8 days of continuous velocity waveform data from 29 June to 7 July 2005 using a master event to detect 822 network triggers. We stack waveforms for 359 high-quality triggers at each station and component, using a combination of linear and phase-weighted stacking to produce clean stacks for use in waveform inversion. The derived source mechanism points to the volumetric oscillation (~10 m³) of a subhorizontal crack located at shallow depth (~30 m) in an area to the south of Crater Glacier in the southern portion of the breached MSH crater. A possible excitation mechanism is the sudden condensation of metastable steam from a shallow pressurized hydrothermal system as it encounters cool meteoric water in the outer parts of the edifice, perhaps supplied from snow melt.
Zheng, Ao; Wang, Mingfeng; Yu, Xiangwei; Zhang, Wenbo
2018-03-01
On 2016 November 13, an Mw 7.8 earthquake occurred in the northeast of the South Island of New Zealand near Kaikoura. The earthquake caused severe damages and great impacts on local nature and society. Referring to the tectonic environment and defined active faults, the field investigation and geodetic evidence reveal that at least 12 fault sections ruptured in the earthquake, and the focal mechanism is one of the most complicated in historical earthquakes. On account of the complexity of the source rupture, we propose a multisegment fault model based on the distribution of surface ruptures and active tectonics. We derive the source rupture process of the earthquake using the kinematic waveform inversion method with the multisegment fault model from strong-motion data of 21 stations (0.05-0.35 Hz). The inversion result suggests the rupture initiates in the epicentral area near the Humps fault, and then propagates northeastward along several faults, until the offshore Needles fault. The Mw 7.8 event is a mixture of right-lateral strike and reverse slip, and the maximum slip is approximately 19 m. The synthetic waveforms reproduce the characteristics of the observed ones well. In addition, we synthesize the coseismic offsets distribution of the ruptured region from the slips of upper subfaults in the fault model, which is roughly consistent with the surface breaks observed in the field survey.
International Nuclear Information System (INIS)
Takeuchi, Mitsuo; Wada, Shigeru; Takahashi, Hiroyuki; Hayashi, Kazuhiko; Murayama, Yoji
2000-09-01
At research reactors such as JRR-3M, operation management is carried out in order to ensure safe operation; for example, the excess reactivity is measured regularly and confirmed to satisfy the safety condition. The excess reactivity is calculated using the control rod positions at criticality and the control rod worths measured by the positive period method (P.P method), the conventional inverse kinetic method (IK method) and so on. The neutron source, however, influences the measurement results and introduces a measurement error. A new IK method accounting for the influence of steady neutron sources is proposed and applied to the JRR-3M. This report shows that the proposed IK method measures control rod worth more precisely than the conventional IK method. (author)
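The inverse kinetic method in this record can be sketched in a few lines. The following is a hedged illustration with one delayed-neutron group and illustrative parameters (not JRR-3M values): reactivity is recovered from a measured neutron-population history, and the steady external source S appears as an explicit correction term that a source-blind IK method would misattribute to rod worth.

```python
# Inverse point kinetics with a steady external source (one delayed group).
# All parameters are illustrative assumptions, NOT JRR-3M values.
BETA = 0.0065        # delayed neutron fraction
LAM = 0.0767         # precursor decay constant (1/s)
LAMBDA_GEN = 1.0e-4  # neutron generation time (s)
S = 1.0e3            # steady source strength (arbitrary units)

def inverse_kinetics(n, dt, s=S):
    """Reactivity history (in dollars) from a neutron-population history n[k]."""
    # Start the precursor concentration at equilibrium with n[0].
    c = BETA * n[0] / (LAM * LAMBDA_GEN)
    rho = []
    for k in range(1, len(n)):
        dn_dt = (n[k] - n[k - 1]) / dt
        c += (BETA / LAMBDA_GEN * n[k] - LAM * c) * dt   # precursor balance
        # rho = beta + Lambda*(dn/dt)/n - Lambda*(lambda*c + S)/n
        r = BETA + LAMBDA_GEN * dn_dt / n[k] - LAMBDA_GEN * (LAM * c + s) / n[k]
        rho.append(r / BETA)
    return rho

# At constant power the source term implies a slightly negative reactivity,
# rho = -Lambda*S/n: exactly the bias the proposed method corrects for.
steady = inverse_kinetics([1.0e6] * 50, 0.1)
```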
Karve, Pranav M.; Fathi, Arash; Poursartip, Babak; Kallivokas, Loukas F.
2016-01-01
We discuss a methodology for computing the optimal spatio-temporal characteristics of surface wave sources necessary for delivering wave energy to a targeted subsurface formation. The wave stimulation is applied to the target formation to enhance
Energy Technology Data Exchange (ETDEWEB)
Pison, I
2005-12-15
Atmospheric pollution at a regional scale is the result of various interacting processes: emissions, chemistry, transport, mixing and deposition of gaseous species. The forecast of air quality is therefore performed by models, in which the emissions are taken into account through inventories. The simulated pollutant concentrations depend strongly on the emissions that are used, yet the inventories that represent them carry large uncertainties. Since it would be difficult today to improve the methodologies by which inventories are built, there remains the possibility of adding information to existing inventories. The optimization of emissions uses the information available in measurements to obtain the inventory that minimizes the difference between simulated and measured concentrations. A method for the inversion of anthropogenic emissions at a regional scale, using network measurements and based on the CHIMERE model and its adjoint, was developed and validated. A kriging technique allows us to optimize the use of the information available in the concentration space. Repeated kriging-optimization cycles increase the quality of the results. A dynamical spatial aggregation technique makes it possible to further reduce the size of the problem. The NO{sub x} emissions from the inventory elaborated by AIRPARIF for the Paris area were inverted during the summers of 1998 and 1999, the events of the ESQUIF campaign being studied in detail. The optimization reduces large differences between simulated and measured concentrations. Generally, however, the confidence level of the results decreases with the density of the measurement network. Therefore, the results with the highest confidence level correspond to the most intense emission fluxes of the Paris area. On the whole domain, the corrections to the average emitted mass and to the matching time profiles are consistent with the estimate of 15% obtained during the ESQUIF campaign. (author)
Energy Technology Data Exchange (ETDEWEB)
Trainham, R., E-mail: trainhcp@nv.doe.gov; Tinsley, J. [Special Technologies Laboratory of National Security Technologies, LLC, 5520 Ekwill Street, Santa Barbara, California 93111 (United States)
2014-06-15
Energy asymmetry of inter-detector crosstalk from Compton scattering can be exploited to infer the direction to a gamma source. A covariance approach extracts the correlated crosstalk from data streams to estimate matched signals from Compton gammas split over two detectors. On a covariance map the signal appears as an asymmetric cross diagonal band with axes intercepts at the full photo-peak energy of the original gamma. The asymmetry of the crosstalk band can be processed to determine the direction to the radiation source. The technique does not require detector shadowing, masking, or coded apertures, thus sensitivity is not sacrificed to obtain the directional information. An angular precision of better than 1° of arc is possible, and processing of data streams can be done in real time with very modest computing hardware.
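The covariance idea in this record can be sketched with a toy simulation (illustrative assumptions, not the authors' implementation): Compton-split coincidences lie on the line E1 + E2 = E0, so the two detector energy streams are anticorrelated over crosstalk events and their covariance is negative, tracing the cross-diagonal band with axis intercepts at the photopeak.

```python
import random

# Toy model of inter-detector Compton crosstalk: a gamma of energy E0 scatters
# in one detector and the remainder is absorbed in the other, so the deposited
# energies sum to the photopeak. E0 and the split law below are assumptions.
E0 = 661.7  # keV, e.g. a 137Cs source

random.seed(1)
events = [(e1, E0 - e1)
          for e1 in (random.uniform(0.2, 0.8) * E0 for _ in range(10000))]

# Covariance of the two energy streams over crosstalk events.
m1 = sum(a for a, _ in events) / len(events)
m2 = sum(b for _, b in events) / len(events)
cov = sum((a - m1) * (b - m2) for a, b in events) / len(events)
# cov < 0: the anticorrelated band whose asymmetry (here uniform, in reality
# set by detector positions relative to the source) encodes the direction.
```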
Experimental confirmation of neoclassical Compton scattering theory
Energy Technology Data Exchange (ETDEWEB)
Aristov, V. V., E-mail: aristov@iptm.ru [Russian Academy of Sciences, Institute of Microelectronics Technology and High Purity Materials (Russian Federation); Yakunin, S. N. [National Research Centre “Kurchatov Institute” (Russian Federation); Despotuli, A. A. [Russian Academy of Sciences, Institute of Microelectronics Technology and High Purity Materials (Russian Federation)
2013-12-15
Incoherent X-ray scattering spectra of diamond and silicon crystals recorded on the BESSY-2 electron storage ring have been analyzed. All spectral features are described well in terms of the neoclassical scattering theory without consideration for the hypotheses accepted in quantum electrodynamics. It is noted that the accepted tabular data on the intensity ratio between the Compton and Rayleigh spectral components may significantly differ from the experimental values. It is concluded that the development of the general theory (considering coherent scattering, incoherent scattering, and Bragg diffraction) must be continued.
A Compton polarimeter for CEBAF Hall A
Energy Technology Data Exchange (ETDEWEB)
Bardin, G; Cavata, C; Frois, B; Juillard, M; Kerhoas, S; Languillat, J C; Legoff, J M; Mangeot, P; Martino, J; Platchkov, S; Rebourgeard, P; Vernin, P; Veyssiere, C; CEBAF Hall A Collaboration
1994-09-01
The physics program at CEBAF Hall A includes several experiments using the 4 GeV polarized electron beam: parity violation in electron elastic scattering from the proton and {sup 4}He, the electric form factor of the proton by recoil polarization, and the neutron spin structure function at low Q{sup 2}. Some of these experiments will need beam polarization measurement and monitoring with an accuracy close to 4%, for beam currents ranging from 100 nA to 100 μA. A project for a Compton polarimeter that will meet these requirements is presented. It will comprise four dipoles and a symmetric cavity consisting of two identical mirrors. 1 fig., 10 refs.
Cork quality estimation by using Compton tomography
International Nuclear Information System (INIS)
Brunetti, Antonio; Cesareo, Roberto; Golosio, Bruno; Luciano, Pietro; Ruggero, Alessandro
2002-01-01
The quality control of cork stoppers is mandatory in order to guarantee the perfect conservation of the wine. Several techniques have been developed, but until now quality control has been essentially limited to the state of the external surface, so that cracks or holes inside the stopper remain hidden. In this paper a new technique based on X-ray Compton tomography is described. It is a non-destructive technique that allows one to reconstruct and visualize the cross-section of the analyzed cork stopper, and thus to reveal internal imperfections. Some results are reported and compared with visual classification.
Transverse tomography by Compton scattering scintigraphy
International Nuclear Information System (INIS)
Askienazy, S.; Lumbroso, J.; Lacaille, J.M.; Fredy, D.; Constans, J.P.; Barritault, L.
The technique of tomography by Compton scattering was applied to the exploration of the brain. Studies were carried out on phantoms and on patients, and the first results are considered highly encouraging. On a phantom skull, holes at a depth of 7 cm are visible even on analogue documents, whatever their position with regard to the bone. On patients the ventricular cavities were revealed, and comparisons with gas encephalography showed good agreement between the two techniques. The studies on phantoms also testified to the very low dose received by the patient: about 300 mrem for 2 million counts per section.
Cork quality estimation by using Compton tomography
Brunetti, A; Golosio, B; Luciano, P; Ruggero, A
2002-01-01
The quality control of cork stoppers is mandatory in order to guarantee the perfect conservation of the wine. Several techniques have been developed, but until now quality control has been essentially limited to the state of the external surface, so that cracks or holes inside the stopper remain hidden. In this paper a new technique based on X-ray Compton tomography is described. It is a non-destructive technique that allows one to reconstruct and visualize the cross-section of the analyzed cork stopper, and thus to reveal internal imperfections. Some results are reported and compared with visual classification.
Shielded radiography with a laser-driven MeV-energy X-ray source
Energy Technology Data Exchange (ETDEWEB)
Chen, Shouyuan; Golovin, Grigory [Department of Physics and Astronomy, University of Nebraska-Lincoln, Lincoln, NE 68588 (United States); Miller, Cameron [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Haden, Daniel; Banerjee, Sudeep; Zhang, Ping; Liu, Cheng; Zhang, Jun; Zhao, Baozhen [Department of Physics and Astronomy, University of Nebraska-Lincoln, Lincoln, NE 68588 (United States); Clarke, Shaun; Pozzi, Sara [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Umstadter, Donald, E-mail: donald.umstadter@unl.edu [Department of Physics and Astronomy, University of Nebraska-Lincoln, Lincoln, NE 68588 (United States)
2016-01-01
We report the results of experimental and numerical-simulation studies of shielded radiography using narrowband MeV-energy X-rays from a compact all-laser-driven inverse-Compton-scattering X-ray light source. This recently developed X-ray light source is based on a laser-wakefield accelerator with ultra-high-field gradient (GeV/cm). We demonstrate experimentally high-quality radiographic imaging (image contrast of 0.4 and signal-to-noise ratio of 2:1) of a target composed of 8-mm thick depleted uranium shielded by 80-mm thick steel, using a 6-MeV X-ray beam with a spread of 45% (FWHM) and 10{sup 7} photons in a single shot. The corresponding dose of the X-ray pulse measured in front of the target is ∼100 nGy/pulse. Simulations performed using the Monte-Carlo code MCNPX accurately reproduce the experimental results. These simulations also demonstrate that the narrow bandwidth of the Compton X-ray source operating at 6 and 9 MeV leads to a reduction of deposited dose as compared to broadband bremsstrahlung sources with the same end-point energy. The X-ray beam’s inherently low-divergence angle (∼mrad) is advantageous and effective for interrogation at standoff distance. These results demonstrate significant benefits of all-laser driven Compton X-rays for shielded radiography.
Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Lima-Oliveira, Gabriel; Brocco, Giorgio; Guidi, Gian Cesare
2014-09-25
This study was planned to establish whether random orientation of gel tubes after centrifugation may impair sample quality. Eight gel tubes were collected from 17 volunteers: 2 Becton Dickinson (BD) serum tubes, 2 Terumo serum tubes, 2 BD lithium heparin tubes and 2 Terumo lithium heparin tubes. One patient's tube for each category was kept in a vertical, closure-up position for 90 min ("upright"), whereas paired tubes underwent bottom-up inversion every 15 min, for 90 min ("inverted"). Immediately after this period of time, 14 clinical chemistry analytes, serum indices and complete blood count were then assessed in all tubes. Significant increases were found for phosphate and lipaemic index in all inverted tubes, along with AST, calcium, cholesterol, LDH, potassium, hemolysis index, leukocytes, erythrocytes and platelets limited to lithium heparin tubes. The desirable quality specifications were exceeded for AST, LDH, and potassium in inverted lithium heparin tubes. Residual leukocytes, erythrocytes, platelets and cellular debris were also significantly increased in inverted lithium heparin tubes. Lithium heparin gel tubes should be maintained in a vertical, closure-up position after centrifugation. Copyright © 2014 Elsevier B.V. All rights reserved.
CONSTRAINTS ON COMPTON-THICK WINDS FROM BLACK HOLE ACCRETION DISKS: CAN WE SEE THE INNER DISK?
International Nuclear Information System (INIS)
Reynolds, Christopher S.
2012-01-01
Strong evidence is emerging that winds can be driven from the central regions of accretion disks in both active galactic nuclei and Galactic black hole binaries. Direct evidence for highly ionized, Compton-thin inner-disk winds comes from observations of blueshifted (v ∼ 0.05-0.1c) iron-K X-ray absorption lines. However, it has been suggested that the inner regions of black hole accretion disks can also drive Compton-thick winds—such winds would enshroud the inner disk, preventing us from seeing direct signatures of the accretion disk (i.e., the photospheric thermal emission, or the Doppler/gravitationally broadened iron Kα line). Here, we show that, provided the source is sub-Eddington, the well-established wind-driving mechanisms fail to launch a Compton-thick wind from the inner disk. For the accelerated region of the wind to be Compton-thick, the momentum carried in the wind must exceed the available photon momentum by a factor of at least 2/λ, where λ is the Eddington ratio of the source, ruling out radiative acceleration unless the source is very close to the Eddington limit. Compton-thick winds also carry large mass fluxes, and a consideration of the connections between the wind and the disk shows this to be incompatible with magneto-centrifugal driving. Finally, thermal driving of the wind is ruled out on the basis of the large Compton radii that typify black hole systems. In the absence of some new acceleration mechanisms, we conclude that the inner regions of sub-Eddington accretion disks around black holes are indeed naked.
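The quoted factor of 2/λ can be recovered by an order-of-magnitude argument consistent with the abstract (a sketch, not the paper's full derivation): require the accelerated wind to be Compton thick and compare its momentum flux with the photon momentum flux.

```latex
% Compton depth of a wind with mass-loss rate \dot{M}, speed v, launch radius r:
%   \tau \simeq \kappa\dot{M}/(4\pi r v) \ge 1
%   \Rightarrow \dot{M} \ge 4\pi r v/\kappa .
% Comparing the wind momentum flux with the photon momentum flux L/c,
% using L = \lambda L_{\rm Edd} and L_{\rm Edd} = 4\pi G M c/\kappa:
\frac{\dot{M} v}{L/c}
  \;\ge\; \frac{4\pi r v^{2}/\kappa}{\lambda\,4\pi G M/\kappa}
  \;=\; \frac{r v^{2}}{\lambda G M}
  \;\simeq\; \frac{2}{\lambda}
  \qquad \text{for } v \simeq v_{\rm esc} = \sqrt{2GM/r}.
```

For sub-Eddington sources (λ ≪ 1) the required momentum exceeds the available photon momentum by a large factor, which is the basis of the abstract's argument against radiative launching.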
Study of Compton scattering influence in cardiac SPECT images
International Nuclear Information System (INIS)
Munhoz, A.C.L.; Abe, R.; Zanardo, E.L.; Robilotta, C.C.
1992-01-01
The effect of reducing the Compton fraction on image quality is evaluated using two modes of data acquisition: one with the energy analyser window displaced over the photopeak, and the other with two windows, one over the Compton contribution and the other centered on the photopeak. (C.G.C.)
Compton scattering of photons from electrons bound in light elements
International Nuclear Information System (INIS)
Bergstrom, P.M. Jr.
1994-01-01
A brief introduction to the topic of Compton scattering from bound electrons is presented. The fundamental nature of this process in understanding quantum phenomena is reviewed. Methods for accurate theoretical evaluation of the Compton scattering cross section are presented. Examples are presented for scattering of several keV photons from helium
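The free-electron baseline against which such bound-electron calculations are compared is the Klein-Nishina cross section; a minimal sketch (free electron at rest, ignoring the binding effects this record addresses):

```python
import math

R_E = 2.8179403e-13  # classical electron radius (cm)
M_E_C2 = 510.999     # electron rest energy (keV)

def klein_nishina(e_kev, theta):
    """Klein-Nishina d(sigma)/d(Omega) in cm^2/sr for a free electron at rest."""
    k = e_kev / M_E_C2
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta)))  # E'/E, Compton formula
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta)**2)

# In the forward direction the cross section reduces to the Thomson value r_e^2;
# at keV energies and large angles it falls below it.
```

Bound-electron treatments modify this baseline through the incoherent scattering function or impulse approximation, which is the regime the record's helium examples probe.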
International Nuclear Information System (INIS)
Namatame, Hirofumi; Taniguchi, Masaki
1994-01-01
Photoelectron spectroscopy is regarded as the most powerful technique of its kind, since it can map the occupied electron states almost completely. Inverse photoelectron spectroscopy, by contrast, measures the unoccupied electron states by exploiting the inverse process of photoemission, and in principle permits experiments analogous to photoelectron spectroscopy. Experimental techniques for inverse photoelectron spectroscopy have been developed energetically by many research groups. At present, work is under way on improving the resolution of inverse photoelectron spectroscopy and on developing instruments with tunable photon energy, but no inverse photoelectron spectrometer for the vacuum ultraviolet region is commercially available. In this report, the principle of inverse photoelectron spectroscopy and the present state of the instrumentation are described, and directions for future development are explored. Electron guns, photon detectors and other experimental components are explained. As examples, inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)
Energy Technology Data Exchange (ETDEWEB)
Antoniassi, M.; Conceicao, A.L.C. [Departamento de Fisica-Faculdade de Filosofia Ciencias e Letras de Ribeirao Preto-Universidade de Sao Paulo, Ribeirao Preto, Sao Paulo (Brazil); Poletti, M.E., E-mail: poletti@ffclrp.usp.br [Departamento de Fisica-Faculdade de Filosofia Ciencias e Letras de Ribeirao Preto-Universidade de Sao Paulo, Ribeirao Preto, Sao Paulo (Brazil)
2012-07-15
Electron densities of 33 samples of normal (adipose and fibroglandular) and neoplastic (benign and malignant) human breast tissues were determined through Compton scattering data using a monochromatic synchrotron radiation source and an energy dispersive detector. The area of the Compton peaks was used to determine the electron densities of the samples. Adipose tissue exhibits the lowest values of electron density, whereas malignant tissue the highest. The relationship with their histology is discussed. Comparison with previous results showed differences smaller than 4%. - Highlights: • Electron density of normal and neoplastic breast tissues was measured using Compton scattering. • Monochromatic synchrotron radiation was used to obtain the Compton scattering data. • The area of the Compton peaks was used to determine the electron densities of samples. • Adipose tissue shows the lowest electron density values, whereas the malignant tissue the highest. • Comparison with previous results showed differences smaller than 4%.
An experimental method for the optimization of anti-Compton spectrometer
Badran, H M
1999-01-01
The reduction of the Compton continuum can be achieved using a Compton suppression shield. For the first time, an experimental method is proposed for estimating the optimum dimensions of such a shield. The method can also provide information on the effect of the air gap, source geometry, gamma-ray energy, etc., on the optimum dimensions of the active shield. The method employs measurements of the Compton suppression efficiency in two dimensions using small scintillation detectors. Two types of scintillation material, NaI(Tl) and NE-102A plastic scintillator, were examined. The effects of gamma-ray energy and source geometry were also investigated using ¹³⁷Cs and ⁶⁰Co sources with three geometries: point, cylindrical, and Marinelli shapes. The results indicate the importance of both NaI(Tl) and NE-102A guard detectors surrounding the main detector rather than at a distance above it. The ratio between the part of the guard detector above the surface of the main detector to th...
Proton Compton scattering in the resonance region
International Nuclear Information System (INIS)
Ishii, Takanobu.
1979-12-01
Differential cross sections of proton Compton scattering have been measured in the energy range between 400 and 1150 MeV at CMS angles of 130°, 100° and 70°. The recoil proton was detected with a magnetic spectrometer using multi-wire proportional chambers and wire spark chambers. In coincidence with the proton, the scattered photon was detected with a total-absorption lead glass Cerenkov counter with a lead plate converter, together with horizontal and vertical scintillation counter hodoscopes. The background due to neutral pion photoproduction was subtracted by using the kinematic relations between the scattered photon and the recoil proton. Theoretical calculations based on a two-component isobar model (resonance plus background) were performed, and the photon couplings of the second resonance region were determined for the first time from the proton Compton data. The results are that the helicity-1/2 photon couplings of P11(1470) and S11(1535), and the helicity-3/2 photon coupling of D13(1520), are consistent with those determined from single pion photoproduction data, but the helicity-1/2 photon coupling of D13(1520) has a somewhat larger value than that from the single pion photoproduction data. (author)
Virtual Compton scattering at low energy
International Nuclear Information System (INIS)
Lhuillier, D.
1997-09-01
The work described in this PhD thesis is a study of Virtual Compton Scattering (VCS) off the proton at low energy, below the pion production threshold. Our experiment was carried out at MAMI with the help of two high-resolution spectrometers. Experimentally, the VCS process is the electroproduction of photons off a liquid hydrogen target. First results of the data analysis, including radiative corrections, are presented and compared with the low energy theorem prediction. VCS is an extension of Real Compton Scattering. The virtuality of the incoming photon gives access to new observables of the nucleon's internal structure which are complementary to the elastic form factors: the generalized polarizabilities (GPs). They are functions of the squared invariant mass of the virtual photon. In the zero-mass limit these observables reduce to the usual electric and magnetic polarizabilities. Our experiment is the first measurement of the VCS process at a virtual photon squared mass of 0.33 GeV². The experimental part presents the analysis method. The high precision needed in the absolute cross-section measurement required an accurate estimate of the radiative corrections to VCS. This new calculation, performed in the dimensional regularization scheme, composes the theoretical part of this thesis. At low q', preliminary results agree with the low energy theorem prediction. At higher q', subtraction of the low energy theorem contribution to extract the GPs is discussed. (author)
Deeply virtual Compton scattering at Jefferson Laboratory
Energy Technology Data Exchange (ETDEWEB)
Biselli, Angela S. [Fairfield University - Department of Physics 1073 North Benson Road, Fairfield, CT 06430, USA; Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)
2016-08-01
The generalized parton distributions (GPDs) have emerged as a universal tool to describe hadrons in terms of their elementary constituents, the quarks and the gluons. Deeply virtual Compton scattering (DVCS) on a proton or neutron ($N$), $e N \\rightarrow e' N' \\gamma$, is the process most directly interpretable in terms of GPDs. The amplitudes of DVCS and Bethe-Heitler, the process where a photon is emitted by either the incident or scattered electron, can be accessed via cross-section measurements or by exploiting their interference, which gives rise to spin asymmetries. Spin asymmetries, cross sections and cross-section differences can be connected to different combinations of the four leading-twist GPDs (${H}$, ${E}$, ${\\tilde{H}}$, ${\\tilde{E}}$) for each quark flavor, depending on the observable and on the type of target. This paper gives an overview of recent experimental results obtained for DVCS at Jefferson Laboratory in Halls A and B. Several experiments have been performed, extracting DVCS observables over large kinematic regions. Multiple measurements with overlapping kinematic regions allow a quasi-model-independent extraction of the Compton form factors, which are integrals of the GPDs, revealing a 3D image of the nucleon.
Compton scattering and γ-quanta monochromatization
International Nuclear Information System (INIS)
Goryachev, B.I.; Shevchenko, V.G.
1979-01-01
A γ-quanta monochromatization method is proposed for studying highly excited states and mechanisms of nuclear photodisintegration. The method is based on the properties of photon Compton scattering. It permits obtaining high energy resolution without an accurate analysis of the energies of the particles taking part in the scattering process. A possible design of the Compton γ-monochromator is presented. A γ-quanta scatterer made of elements with a small nuclear charge (e.g. LiH) is placed inside a β-spectrometer of low resolution. The monochromator is expected to operate in the γ-beam of a high-current synchrotron, and it provides a rather good energy resolution ρ(W) for studying highly excited nuclear states (ρ(W) approximately 2% in the range of the giant dipole resonance). With growing γ-quanta energy, ρ(W) increases as W^0.6. The monochromator permits high statistical accuracy to be obtained in a shorter period of time (at a considerably better energy resolution) than when working with a bremsstrahlung spectrum. The yield of quasimonochromatic photons within the energy resolution interval ΔW (ΔW = ρ(W)W) increases as W^0.6. This makes it promising to use the monochromator in an energy range considerably exceeding the characteristic energy of the giant dipole resonance
Relativistic wave equations and compton scattering
International Nuclear Information System (INIS)
Sutanto, S.H.; Robson, B.A.
1998-01-01
Full text: Recently an eight-component relativistic wave equation for spin-1/2 particles was proposed. This equation was obtained from a four-component spin-1/2 wave equation (the KG1/2 equation), which contains second-order derivatives in both space and time, by a procedure involving a linearisation of the time derivative analogous to that introduced by Feshbach and Villars for the Klein-Gordon equation. This new eight-component equation gives the same bound-state energy eigenvalue spectra for hydrogenic atoms as the Dirac equation, but has been shown to predict different radiative transition probabilities for the fine structure of both the Balmer and Lyman α-lines. Since the new theory does not always give the same results as the Dirac theory, it is important to consider the validity of the new equation in the case of other physical problems. One of the early crucial tests of the Dirac theory was its application to the scattering of a photon by a free electron: the so-called Compton scattering problem. In this paper we apply the new theory to the calculation of Compton scattering to order e². It will be shown that, in spite of the considerable difference in structure between the new theory and Dirac's, the cross section is given by the Klein-Nishina formula
Thermal Comptonization in standard accretion disks
International Nuclear Information System (INIS)
Maraschi, L.; Molendi, S.
1990-01-01
The standard model of an accretion disk is considered. The temperature in the inner region is computed assuming that the radiated power derives from Comptonized photons, produced in a homogeneous single-temperature plasma supported by radiation pressure. The photon production mechanisms are purely thermal, including ion-electron bremsstrahlung, bound-free and bound-bound processes, and e-e bremsstrahlung. Pair production is not included, which limits the validity of the treatment to kT less than 60 keV. Three different approximations for the effects of Comptonization on the energy loss are used, yielding temperatures which agree within 50 percent. The maximum temperature is very sensitive to the accretion rate and viscosity parameters, ranging, for a 10⁸-solar-mass black hole, between 0.1 and 50 keV for m between 0.1 and 1 and alpha between 0.1 and 1, and, for a 10-solar-mass black hole, between 0.6 and 60 keV for m between 0.1 and 0.9 and alpha between 0.1 and 0.5. For high viscosity and accretion rates, the emission spectra show a flat component following a peak corresponding to the temperature of the innermost optically thick annulus. 28 refs
Compton effect thermally activated depolarization dosimeter
Moran, Paul R.
1978-01-01
A dosimetry technique for high-energy gamma radiation or X-radiation employs the Compton effect in conjunction with radiation-induced thermally activated depolarization phenomena. A dielectric material is disposed between two electrodes which are electrically short circuited to produce a dosimeter which is then exposed to the gamma or X radiation. The gamma or X-radiation impinging on the dosimeter interacts with the dielectric material directly or with the metal composing the electrode to produce Compton electrons which are emitted preferentially in the direction in which the radiation was traveling. A portion of these electrons becomes trapped in the dielectric material, consequently inducing a stable electrical polarization in the dielectric material. Subsequent heating of the exposed dosimeter to the point of onset of ionic conductivity with the electrodes still shorted through an ammeter causes the dielectric material to depolarize, and the depolarization signal so emitted can be measured and is proportional to the dose of radiation received by the dosimeter.
Description of the double Compton spectrometer at Mayence MPI
International Nuclear Information System (INIS)
Borchert, H.; Ziegler, B.; Gimm, H.; Zieger, A.; Hughes, R.J.; Ahrens, J.
1977-01-01
The double Compton spectrometer of the Laboratories of the Mayence Linear Accelerator consists of two identical magnetic spectrometers, in which the electrons scattered forward by photons through the Compton process are detected. The spectrometers have been built to detect 10-350 MeV photons and, as they involve thin Compton targets, their effect on the photon flux is negligible. They are placed in cascade inside a well-collimated bremsstrahlung beam. A thick absorbing target (maximum thickness 2 m) can be inserted into the beam. The facility is outlined, and some special properties of the accelerator and the bremsstrahlung beam are given. The properties of a Compton spectrometer involving eleven detectors are given by eleven response functions relating the photon flux impinging on the Compton target to the counting rates of the detectors for a given adjustment of the magnets. A Monte-Carlo method is used for the calculation, together with analytical methods neglecting multiple scattering effects [fr
Compton recoil electron tracking with silicon strip detectors
International Nuclear Information System (INIS)
O'Neill, T.J.; Ait-Ouamer, F.; Schwartz, I.; Tumer, O.T.; White, R.S.; Zych, A.D.
1992-01-01
The application of silicon strip detectors to Compton gamma ray astronomy telescopes is described in this paper. The Silicon Compton Recoil Telescope (SCRT) tracks Compton recoil electrons in silicon strip converters to provide a unique direction for Compton scattered gamma rays above 1 MeV. With strip detectors of modest positional and energy resolutions of 1 mm FWHM and 3% at 662 keV, respectively, 'true imaging' can be achieved to provide an order of magnitude improvement in sensitivity, to 1.6 × 10⁻⁶ γ/cm²·s at 2 MeV. The results of extensive Monte Carlo calculations of recoil electrons traversing multiple layers of 200-micron silicon wafers are presented. Multiple Coulomb scattering of the recoil electron in the silicon wafer of the Compton interaction and the next adjacent wafer is the basic limitation to determining the electron's initial direction
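A Compton telescope of this kind reconstructs the photon scatter angle from the two measured energy deposits using the standard Compton kinematics; the sketch below illustrates the relation (the 2.0 MeV photon and 0.5 MeV recoil deposit are illustrative values, not data from the paper).

```python
import math

ME_C2 = 0.511  # electron rest energy in MeV

def compton_scatter_angle(e_recoil, e_scattered):
    """Photon scatter angle (degrees) from the energy deposited on the
    recoil electron and the energy of the scattered photon (both in MeV)."""
    e_incident = e_recoil + e_scattered
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident)
    return math.degrees(math.acos(cos_theta))

# A 2.0 MeV photon depositing 0.5 MeV on the recoil electron
# scatters by roughly 24 degrees:
theta = compton_scatter_angle(0.5, 1.5)
```

Without the recoil-electron track, this angle only constrains the source to a cone; tracking the electron direction, as the SCRT does, breaks that degeneracy.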
Šumanovac, Franjo; Orešković, Jasna
2018-06-01
For the selected cases, Gotalovec in the Pannonian basin area and Baška in the Dinaridic karst area, which represent a common hydrogeological model in both regions of Croatia, CSAMT data together with data from other geophysical methods (electrical resistivity tomography, electrical sounding and seismic reflection) enabled the definition of a reliable prognostic geological model. The model consists of a carbonate aquifer which underlies an impermeable thick package of clastic deposits. There are great variations in the dolomitic aquifer depths in the Gotalovec area due to strong tectonic activity, while in the Baška area depth changes are caused by layer folding. The CSAMT method provides the most complete data on lithological and structural relationships in cases of hydrogeological targets deeper than 100 m. Based on the presented models we can conclude that the CSAMT method can provide a greater exploration depth than electrical resistivity tomography (ERT) and can be considered a fundamental geophysical method for the exploration of buried carbonate aquifers deeper than 100 m. However, CSAMT surveys demonstrate their advantages only with a very dense layout of CSAMT stations (25-50 m), owing to their greater sensitivity to noise relative to resistivity methods. Interpretation of CSAMT data is more complex than for resistivity methods, and a forward modelling method sometimes gives better results than an inversion, owing to the possibility of using additional data acquired by other geophysical methods (ERT, electrical sounding and seismic reflection). At greater depths, the resolution of all electrical methods, including the CSAMT method, is significantly reduced, and seismic reflection can be very useful in resolving deeper lithological interfaces.
International Nuclear Information System (INIS)
Esmaeili-sani, Vahid; Moussavi-zarandi, Ali; Boghrati, Behzad; Afarideh, Hossein
2012-01-01
Geophysical bore-hole data represent the physical properties of rocks, such as density and formation lithology, as a function of depth in a well. Properties of rocks are obtained from gamma ray transport logs. Transport of gamma rays, from a ¹³⁷Cs point gamma source situated in a bore-hole tool, through rock media to detectors, has been simulated using the GEANT4 radiation transport code. Advanced Compton scattering concepts were used to obtain better analyses of the well formation. The simulation and understanding of advanced Compton scattering depend strongly on how accurately the effects of Doppler broadening and Rayleigh scattering are taken into account. A Monte Carlo package that simulates gamma-gamma well logging tools based on GEANT4 advanced low energy Compton scattering (GALECS) has been developed.
International Nuclear Information System (INIS)
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
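The "stabilized linear inverse theory" that TREND and INVERT iterate can be illustrated with a generic damped (Tikhonov-regularized) least-squares step. The sketch below uses a synthetic forward operator and model; the operator, damping value, and dimensions are hypothetical, not taken from the code itself.

```python
import numpy as np

# Sketch of a stabilized (Tikhonov-regularized) linear inversion:
# minimise ||G m - d||^2 + mu^2 ||m||^2 over the model vector m.
rng = np.random.default_rng(0)
G = rng.normal(size=(30, 10))                  # synthetic forward operator
m_true = np.ones(10)                           # synthetic "true" model
d = G @ m_true + 0.01 * rng.normal(size=30)    # noisy synthetic data

mu = 0.1                                       # damping (stabilization) parameter
# Solve the damped normal equations; mu keeps the solve well-posed
# even when G alone is ill-conditioned.
m_est = np.linalg.solve(G.T @ G + mu**2 * np.eye(10), G.T @ d)
```

The damping term trades a small bias in the recovered model for stability against noise, which is the essence of iterating a trend removal and a stabilized solve toward convergence.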
Science Flight Program of the Nuclear Compton Telescope
Boggs, Steven
This is the lead proposal for this program. We are proposing a 5-year program to carry out the scientific flight program of the Nuclear Compton Telescope (NCT), consisting of a series of three (3) scientific balloon flights. NCT is a balloon-borne, wide-field telescope designed to survey the gamma-ray sky (0.2-5 MeV), performing high-resolution spectroscopy, wide-field imaging, and polarization measurements. NCT has been rebuilt as a ULDB payload under the current 2-year APRA grant. (In that proposal we stated our goal was to return at this point to propose the scientific flight program.) The NCT rebuild/upgrade is on budget and on schedule to achieve flight-ready status in Fall 2013. Science: NCT will map the Galactic positron annihilation emission, shedding more light on the mysterious concentration of this emission uncovered by INTEGRAL. NCT will survey Galactic nucleosynthesis and the role of supernovae and other stellar populations in the creation and evolution of the elements. NCT will map ²⁶Al and positron annihilation with unprecedented sensitivity and uniform exposure, perform the first mapping of ⁶⁰Fe, search for young, hidden supernova remnants through ⁴⁴Ti emission, and enable a host of other nuclear astrophysics studies. NCT will also study compact objects (in our Galaxy and AGN) and GRBs, providing novel measurements of polarization as well as detailed spectra and light curves. Design: NCT is an array of germanium gamma-ray detectors configured in a compact, wide-field Compton telescope configuration. The array is shielded on the sides and bottom by an active anticoincidence shield but is open to the 25% of the sky above for imaging, spectroscopy, and polarization measurements. The instrument is mounted on a zenith-pointed gondola, sweeping out ~50% of the sky each day. This instrument builds on the Compton telescope technique pioneered by COMPTEL on the Compton Gamma Ray Observatory. However, by utilizing modern germanium semiconductor strip detectors
Development of a Compton camera for prompt-gamma medical imaging
Aldawood, S.; Thirolf, P. G.; Miani, A.; Böhmer, M.; Dedes, G.; Gernhäuser, R.; Lang, C.; Liprandi, S.; Maier, L.; Marinšek, T.; Mayerhofer, M.; Schaart, D. R.; Lozano, I. Valencia; Parodi, K.
2017-11-01
A Compton camera-based detector system for photon detection from nuclear reactions induced by proton (or heavier ion) beams is under development at LMU Munich, targeting the online range verification of the particle beam in hadron therapy via prompt-gamma imaging. The detector is designed to be capable of reconstructing the photon source origin not only from the Compton scattering kinematics of the primary photon, but also to allow for tracking of the secondary Compton-scattered electrons, thus enabling a γ-source reconstruction also from incompletely absorbed photon events. The Compton camera consists of a monolithic LaBr3:Ce scintillation crystal, read out by a multi-anode PMT acting as absorber, preceded by a stacked array of 6 double-sided silicon strip detectors as scatterers. The detector components have been characterized under both offline and online conditions. The LaBr3:Ce crystal exhibits excellent time and energy resolution. Using intense collimated 137Cs and 60Co sources, the monolithic scintillator was scanned on a fine 2D grid to generate a reference library of light amplitude distributions that allows for reconstructing the photon interaction position using a k-Nearest Neighbour (k-NN) algorithm. Systematic studies were performed to investigate the performance of the reconstruction algorithm, revealing an improvement of the spatial resolution with increasing photon energy to an optimum value of 3.7(1) mm at 1.33 MeV, achieved with the Categorical Average Pattern (CAP) modification of the k-NN algorithm.
Li, Yongxing; Smith, Richard S.
2018-03-01
We present two examples of using the contrast source inversion (CSI) method to invert synthetic radio-imaging (RIM) data and field data. The synthetic model has two isolated conductors (one perfect conductor and one moderate conductor) embedded in a layered background. After inversion, we can identify the two conductors on the inverted image. The shape of the perfect conductor is better resolved than the shape of the moderate conductor. The inverted conductivity values of the two conductors are approximately the same, which demonstrates that the conductivity values cannot be correctly interpreted from the CSI results. The boundaries and the tilts of the upper and the lower conductive layers on the background can also be inferred from the results, but the centre parts of conductive layers in the inversion results are more conductive than the parts close to the boreholes. We used the straight-ray tomographic imaging method and the CSI method to invert the RIM field data collected using the FARA system between two boreholes in a mining area in Sudbury, Canada. The RIM data include the amplitude and the phase data collected using three frequencies: 312.5 kHz, 625 kHz and 1250 kHz. The data close to the ground surface have high amplitude values and complicated phase fluctuations, which are inferred to be contaminated by the reflected or refracted electromagnetic (EM) fields from the ground surface, and are removed for all frequencies. Higher-frequency EM waves attenuate more quickly in the subsurface environment, and the locations where the measurements are dominated by noise are also removed. When the data are interpreted with the straight-ray method, the images differ substantially for different frequencies. In addition, there are some unexpected features in the images, which are difficult to interpret. Compared with the straight-ray imaging results, the inversion results with the CSI method are more consistent for different frequencies. On the basis of what we learnt
Han, Shin-Chan; Riva, Riccardo; Sauber, Jeanne; Okal, Emile
2013-01-01
We quantify gravity changes after great earthquakes present within the 10-year-long time series of monthly Gravity Recovery and Climate Experiment (GRACE) gravity fields. Using a spherical harmonic normal-mode formulation, the respective source parameters of moment tensor and double-couple were estimated. For the 2004 Sumatra-Andaman earthquake, the gravity data indicate a composite moment of 1.2×10²³ N·m with a dip of 10°, in agreement with the estimate obtained at ultralong seismic periods. For the 2010 Maule earthquake, the GRACE solutions range from 2.0 to 2.7×10²² N·m for dips of 12°-24° and centroid depths within the lower crust. For the 2011 Tohoku-Oki earthquake, the estimated scalar moments range from 4.1 to 6.1×10²² N·m, with dips of 9°-19° and centroid depths within the lower crust. For the 2012 Indian Ocean strike-slip earthquakes, the gravity data delineate a composite moment of 1.9×10²² N·m regardless of the centroid depth, comparing favorably with the total moment of the main ruptures and aftershocks. The smallest event we successfully analyzed with GRACE was the 2007 Bengkulu earthquake with M₀ ≈ 5.0×10²¹ N·m. We found that the gravity data constrain the focal mechanism with the centroid only within the upper and lower crustal layers for thrust events. Deeper sources (i.e., in the upper mantle) could not reproduce the gravity observation, as the larger rigidity and bulk modulus at mantle depths inhibit the interior from changing its volume, thus reducing the negative gravity component. Focal mechanisms and seismic moments obtained in this study represent the behavior of the sources on temporal and spatial scales exceeding the seismic and geodetic spectrum.
Nayak, Avinash; Dreger, Douglas S.
2018-05-01
The formation of a large sinkhole at the Napoleonville salt dome (NSD), Assumption Parish, Louisiana, caused by the collapse of a brine cavern, was accompanied by an intense and complex sequence of seismic events. We implement a grid-search approach to compute centroid locations and point-source moment tensor (MT) solutions of these seismic events using ~0.1-0.3 Hz displacement waveforms and synthetic Green's functions computed using a 3D velocity model of the western edge of the NSD. The 3D model incorporates the currently known approximate geometry of the salt dome and the overlying anhydrite-gypsum cap rock, and features a large velocity contrast between the high-velocity salt dome and the low-velocity sediments overlying and surrounding it. For each possible location on the source grid, Green's functions (GFs) to each station were computed using source-receiver reciprocity and the finite-difference seismic wave propagation software SW4. We also establish an empirical method to rigorously assess uncertainties in the centroid location, Mw and source type of these events under an evolving network geometry, using the results of synthetic tests with hypothetical events and real seismic noise. We apply the methods to the entire duration of data (~6 months) recorded by the temporary US Geological Survey network. During an energetic phase of the sequence from 24-31 July 2012, when 4 stations were operational, the events with the best waveform fits are primarily located at the western edge of the salt dome at most probable depths of ~0.3-0.85 km, close to the horizontal positions of the cavern and the future sinkhole. The data are fit nearly equally well by opening-crack MTs in the high-velocity salt medium or by isotropic volume-increase MTs in the low-velocity sediment layers. We find that data recorded by 6 stations during 1-2 August 2012, right before the appearance of the sinkhole, indicate that some events are likely located in the lower velocity media just outside the
Boer, Marie
2017-09-01
Generalized Parton Distributions (GPDs) contain the correlation between the partons' longitudinal momentum and their transverse distribution. They are accessed through hard exclusive processes, such as Deeply Virtual Compton Scattering (DVCS). DVCS has already been measured in several experiments, and several models allow for extracting GPDs from these measurements. Timelike Compton Scattering (TCS) is, at leading order, the time-reversal equivalent of DVCS and accesses GPDs at the same kinematics. Comparing GPDs extracted from DVCS and TCS is a unique way of testing GPD universality. Combining fits from the two processes will also allow for better constraining the GPDs. We will present our method for extracting GPDs from DVCS and TCS pseudo-data. We will compare fit results from the two processes under similar conditions and present what can be expected in terms of constraints on GPDs from combined fits.
A Compton suppressed detector multiplicity trigger based digital DAQ for gamma-ray spectroscopy
Das, S.; Samanta, S.; Banik, R.; Bhattacharjee, R.; Basu, K.; Raut, R.; Ghugre, S. S.; Sinha, A. K.; Bhattacharya, S.; Imran, S.; Mukherjee, G.; Bhattacharyya, S.; Goswami, A.; Palit, R.; Tan, H.
2018-06-01
The development of a digitizer based pulse processing and data acquisition system for γ-ray spectroscopy with large detector arrays is presented. The system is based on 250 MHz 12-bit digitizers, and is triggered by a user chosen multiplicity of Compton suppressed detectors. The logic for trigger generation is similar to the one practised for analog (NIM/CAMAC) pulse processing electronics, while retaining the fast processing merits of the digitizer system. Codes for reduction of data acquired from the system have also been developed. The system has been tested with offline studies using radioactive sources as well as in the in-beam experiments with an array of Compton suppressed Clover detectors. The results obtained therefrom validate its use in spectroscopic efforts for nuclear structure investigations.
A didactic experiment showing the Compton scattering by means of a clinical gamma camera.
Amato, Ernesto; Auditore, Lucrezia; Campennì, Alfredo; Minutoli, Fabio; Cucinotta, Mariapaola; Sindoni, Alessandro; Baldari, Sergio
2017-06-01
We describe a didactic approach aimed at explaining the effect of Compton scattering in nuclear medicine imaging, by comparing a didactic experiment with a gamma camera against the outcomes of a Monte Carlo simulation of the same experimental apparatus. We employed a ⁹⁹ᵐTc source emitting 140.5 keV photons, collimated in the upper direction through two pinholes, shielded by 6 mm of lead. An aluminium cylinder was placed above the source at a distance of 50 mm. The energy of the scattered photons was measured on the spectra acquired by the gamma camera. We observed that the gamma-ray energy measured at each rotation step gradually decreased from the characteristic energy of 140.5 keV at 0° to 102.5 keV at 120°. A comparison between the obtained data and the expected results from the Compton formula and from the Monte Carlo simulation revealed full agreement within the experimental error (relative errors between -0.56% and 1.19%) given by the energy resolution of the gamma camera. The electron rest mass was also evaluated satisfactorily. The experiment was found useful in explaining to nuclear medicine residents the phenomenology of Compton scattering and its importance in nuclear medicine imaging, and it can be profitably proposed during the training of medical physics residents as well. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
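The angular trend measured in this experiment follows the standard Compton formula for the scattered photon energy, E' = E / (1 + (E/mₑc²)(1 − cos θ)); a minimal sketch with the 140.5 keV line:

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_energy(e_kev, theta_deg):
    """Scattered photon energy (keV) at scattering angle theta (degrees)."""
    one_minus_cos = 1.0 - math.cos(math.radians(theta_deg))
    return e_kev / (1.0 + (e_kev / ME_C2_KEV) * one_minus_cos)

# The 140.5 keV line softens monotonically with increasing angle:
energies = [compton_energy(140.5, a) for a in (0, 30, 60, 90, 120)]
```

At 0° the formula returns the unscattered 140.5 keV, and the predicted energy then decreases with angle, which is the trend the gamma-camera measurement traces; small offsets between the nominal formula values and the measured energies are to be expected from the camera's energy resolution and the finite angular acceptance of the setup.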
International Nuclear Information System (INIS)
Gregory, R.B.
1991-01-01
We have recently described modifications to the program CONTIN for the solution of Fredholm integral equations with convoluted kernels of the type that occur in the analysis of positron annihilation lifetime data. In this article, modifications to the program to correct for source terms in the sample and reference decay curves, and for shifts in the position of the zero-time channel of the sample and reference data, are described. Unwanted source components, expressed as a discrete sum of exponentials, may be removed from both the sample and reference data by modification of the sample data alone, without the need for direct knowledge of the instrument resolution function. Shifts in the position of the zero-time channel of up to half the channel width of the multichannel analyzer can be corrected. Analyses of computer-simulated test data indicate that the quality of the reconstructed annihilation rate probability density functions is improved by employing a reference material with a short lifetime, and indicate that reference materials which generate free positrons by quenching positronium formation (i.e. strong oxidizing agents) have lifetimes that are too long (400-450 ps) to provide reliable estimates of the lifetime parameters for the short-lived components with the methods described here. Well-annealed single crystals of metals with lifetimes less than 200 ps, such as molybdenum (123 ps) and aluminium (166 ps), do not introduce significant errors in estimates of the lifetime parameters and are to be preferred as reference materials. The performance of our modified version of CONTIN is illustrated by application to positron annihilation in polytetrafluoroethylene. (orig.)
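The idea of a lifetime spectrum as a discrete sum of exponential components, from which a known source term can be removed, can be sketched numerically. This is a heavily simplified illustration with hypothetical intensities and lifetimes, not the CONTIN correction itself (which operates on convoluted sample and reference curves).

```python
import numpy as np

# Illustrative sketch (not the CONTIN code): a lifetime spectrum modelled as a
# sum of exponential decay components on a time grid, with an unwanted
# "source" component removed by direct subtraction.
t = np.linspace(0.0, 5000.0, 500)  # time channels in ps

def decay(intensity, tau):
    """One exponential component, intensity-weighted, lifetime tau in ps."""
    return intensity / tau * np.exp(-t / tau)

sample = decay(0.8, 166.0) + decay(0.2, 400.0)  # hypothetical sample components
source = decay(0.1, 450.0)                      # hypothetical source term
measured = sample + source

corrected = measured - source  # remove the known source contribution
```

In the real analysis the spectrum is additionally convolved with the instrument resolution function, which is why CONTIN's trick of correcting the sample data alone, without knowing that resolution function, is valuable.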
Ingram, WT
2012-01-01
Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters, along with an appendix containing background material, the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen
Light Higgs production at the Compton Collider
International Nuclear Information System (INIS)
Jikia, G.; Soeldner-Rembold, S.
2000-01-01
We have studied the production of a light Higgs boson with a mass of 120 GeV in photon-photon collisions at a Compton collider. The event generator for the backgrounds to a Higgs signal due to b b-bar and c c-bar heavy quark pair production in polarized γγ collisions is based on a complete next-to-leading order (NLO) perturbative QCD calculation. For J_z = 0, the large double-logarithmic corrections up to four loops are also included. It is shown that the two-photon width of the Higgs boson can be measured with a high statistical accuracy of about 2% for an integrated γγ luminosity in the hard part of the spectrum of 40 fb^-1. As a result, the total Higgs boson width can be calculated in a model-independent way to an accuracy of about 14%.
Deuteron Compton scattering below pion photoproduction threshold
Levchuk, M. I.; L'vov, A. I.
2000-07-01
Deuteron Compton scattering below pion photoproduction threshold is considered in the framework of the nonrelativistic diagrammatic approach with the Bonn OBE potential. A complete gauge-invariant set of diagrams is taken into account which includes resonance diagrams without and with NN-rescattering and diagrams with one- and two-body seagulls. The seagull operators are analyzed in detail, and their relations with free- and bound-nucleon polarizabilities are discussed. It is found that both dipole and higher-order polarizabilities of the nucleon are required for a quantitative description of recent experimental data. An estimate of the isospin-averaged dipole electromagnetic polarizabilities of the nucleon and the polarizabilities of the neutron is obtained from the data.
Deuteron Compton scattering below pion photoproduction threshold
International Nuclear Information System (INIS)
Levchuk, M.I.; L'vov, A.I.
2000-01-01
Deuteron Compton scattering below pion photoproduction threshold is considered in the framework of the nonrelativistic diagrammatic approach with the Bonn OBE potential. A complete gauge-invariant set of diagrams is taken into account which includes resonance diagrams without and with NN-rescattering and diagrams with one- and two-body seagulls. The seagull operators are analyzed in detail, and their relations with free- and bound-nucleon polarizabilities are discussed. It is found that both dipole and higher-order polarizabilities of the nucleon are required for a quantitative description of recent experimental data. An estimate of the isospin-averaged dipole electromagnetic polarizabilities of the nucleon and the polarizabilities of the neutron is obtained from the data
Deuteron Compton scattering below pion photoproduction threshold
Energy Technology Data Exchange (ETDEWEB)
Levchuk, M.I. E-mail: levchuk@dragon.bas-net.by; L'vov, A.I. E-mail: lvov@x4u.lebedev.ru
2000-07-17
Deuteron Compton scattering below pion photoproduction threshold is considered in the framework of the nonrelativistic diagrammatic approach with the Bonn OBE potential. A complete gauge-invariant set of diagrams is taken into account which includes resonance diagrams without and with NN-rescattering and diagrams with one- and two-body seagulls. The seagull operators are analyzed in detail, and their relations with free- and bound-nucleon polarizabilities are discussed. It is found that both dipole and higher-order polarizabilities of the nucleon are required for a quantitative description of recent experimental data. An estimate of the isospin-averaged dipole electromagnetic polarizabilities of the nucleon and the polarizabilities of the neutron is obtained from the data.
Exclusive compton scattering on the proton
International Nuclear Information System (INIS)
Chen, J.P.; Chudakov, E.; DeJager, C.; Degtyarenko, P.; Ent, R.; Gomez, J.; Hansen, O.; Keppel, C.; Klein, F.; Kuss, M.
1999-01-01
An experiment is proposed to measure the cross sections for Real Compton Scattering from the proton in the energy range 3-6 GeV and over a wide angular range, and to measure the longitudinal and transverse components of the polarization transfer to the recoil proton at a single kinematic point. Together, these measurements will test models of the reaction mechanism and determine new structure functions of the proton that are related to the same non-forward parton densities that determine the elastic electron scattering form factors and the parton densities. The experiment utilizes an untagged Bremsstrahlung photon beam and the standard Hall A cryogenic targets. The scattered photon is detected in a photon spectrometer, currently under construction. The coincident recoil proton is detected in one of the Hall A magnetic spectrometers and its polarization components are measured in the existing Focal Plane Polarimeter. This proposal extends and supersedes E97-108 which was approved by PAC13. (author)
Exclusive compton scattering on the proton
Energy Technology Data Exchange (ETDEWEB)
Chen, J.P.; Chudakov, E.; DeJager, C.; Degtyarenko, P.; Ent, R.; Gomez, J.; Hansen, O.; Keppel, C.; Klein, F.; Kuss, M. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States)] [and others]
1999-07-01
An experiment is proposed to measure the cross sections for Real Compton Scattering from the proton in the energy range 3-6 GeV and over a wide angular range, and to measure the longitudinal and transverse components of the polarization transfer to the recoil proton at a single kinematic point. Together, these measurements will test models of the reaction mechanism and determine new structure functions of the proton that are related to the same non-forward parton densities that determine the elastic electron scattering form factors and the parton densities. The experiment utilizes an untagged Bremsstrahlung photon beam and the standard Hall A cryogenic targets. The scattered photon is detected in a photon spectrometer, currently under construction. The coincident recoil proton is detected in one of the Hall A magnetic spectrometers and its polarization components are measured in the existing Focal Plane Polarimeter. This proposal extends and supersedes E97-108 which was approved by PAC13. (author)
Exclusive Compton Scattering on the Proton
International Nuclear Information System (INIS)
Chen, J. P.; Chudakov, E.; DeJager, C.; Degtyarenko, P.; Ent, R.; Gomez, J.; Hansen, O.; Keppel, C.; Klein, F.; Kuss, M.; LeRose, J.; Liang, M.; Michaels, R.; Mitchell, J.; Liyanage, N.; Rutt, P.; Saha, A.; Wojtsekhowski, B.; Bouwhuis, M.; Chang, T.H.; Holt, R. J.; Nathan, A. M.; Roedelbronn, M.; Wijesooriya, K.; Williamson, S. E.; Dodge, G.; Hyde-Wright, C.; Radyushkin, A.; Sabatie, F.; Weinstein, L. B.; Ulmer, P.; Bosted, P.; Finn, J. M.; Jones, M.; Churchwell, S.; Howell, C.; Gilman, R.; Glashausser, C.; Jiang, X.; Ransome, R.; Strauch, S.; Berthot, J.; Bertin, P.; Fonvielle, H.; Roblin, Y.; Bertozzi, W.; Gilad, S.; Rowntree, D.; Zu, Z.; Brown, D.; Chang, G.; Afanasev, A.; Egiyan, K.; Hoohauneysan, E.; Ketikyan, A.; Mailyan, S.; Petrosyan, A.; Shahinyan, A.; Voskanyan, H.; Boeglin, W.; Markowitz, P.; Hines, J.; Strobel, G.; Templon, J.; Feldman, G.; Morris, C. L.; Gladyshev, V.; Lindgren, R. A.; Calarco, J.; Hersman, W.; Leuschner, M.; Gasparian, A.
1999-01-01
An experiment is proposed to measure the cross sections for Real Compton Scattering from the proton in the energy range 3-6 GeV and over a wide angular range, and to measure the longitudinal and transverse components of the polarization transfer to the recoil proton at a single kinematic point. Together, these measurements will test models of the reaction mechanism and determine new structure functions of the proton that are related to the same nonforward parton densities that determine the elastic electron scattering form factors and the parton densities. The experiment utilizes an untagged bremsstrahlung photon beam and the standard Hall A cryogenic targets. The scattered photon is detected in a photon spectrometer, currently under construction. The coincident recoil proton is detected in one of the Hall A magnetic spectrometers and its polarization components are measured in the existing Focal Plane Polarimeter. This proposal extends and supersedes E97-108 which was approved by PAC13
Chouet, Bernard A.; Dawson, Phillip B.; Arciniega-Ceballos, Alejandra
2005-01-01
The source mechanism of very long period (VLP) signals accompanying volcanic degassing bursts at Popocatépetl is analyzed in the 15–70 s band by minimizing the residual error between data and synthetics calculated for a point source embedded in a homogeneous medium. The waveforms of two eruptions (23 April and 23 May 2000) representative of mild Vulcanian activity are well reproduced by our inversion, which takes into account volcano topography. The source centroid is positioned 1500 m below the western perimeter of the summit crater, and the modeled source is composed of a shallow dipping crack (sill with easterly dip of 10°) intersecting a steeply dipping crack (northeast striking dike dipping 83° northwest), whose surface extension bisects the vent. Both cracks undergo a similar sequence of inflation, deflation, and reinflation, reflecting a cycle of pressurization, depressurization, and repressurization within a time interval of 3–5 min. The largest moment release occurs in the sill, showing a maximum volume change of 500–1000 m³, pressure drop of 3–5 MPa, and amplitude of recovered pressure equal to 1.2 times the amplitude of the pressure drop. In contrast, the maximum volume change in the dike is less (200–300 m³), with a corresponding pressure drop of 1–2 MPa and pressure recovery equal to the pressure drop. Accompanying these volumetric sources are single-force components with magnitudes of 10⁸ N, consistent with melt advection in response to pressure transients. The source time histories of the volumetric components of the source indicate that significant mass movement starts within the sill and triggers a mass movement response in the dike within a few seconds. Such source behavior is consistent with the opening of a pathway for escape of pent-up gases from slow pressurization of the sill driven by magma crystallization. The opening of this pathway and associated rapid evacuation of volcanic gases induces the pressure drop. Pressure
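The waveform inversion described above is linear: the data are a combination of source time histories weighted by Green's functions, so minimizing the residual reduces to least squares. A schematic sketch (the matrix and component names below are synthetic stand-ins, not the authors' Green's functions):

```python
# Least-squares point-source inversion: data d are linear in the source
# components m via Green's functions G, so the estimate is
# m = (G^T G)^{-1} G^T d, computed here with lstsq.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_mech = 600, 3                     # waveform samples x components
G = rng.standard_normal((n_samples, n_mech))   # stand-in Green's functions
m_true = np.array([1.0, -0.4, 0.2])            # e.g. sill, dike, single force
d = G @ m_true + 0.01 * rng.standard_normal(n_samples)  # synthetic "data"

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
residual = np.linalg.norm(d - G @ m_est) / np.linalg.norm(d)
```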
Energy Technology Data Exchange (ETDEWEB)
Martini, Elaine
1996-12-31
The basic operation, design and construction of a plastic scintillator detector are described. In order to increase the sensitivity of this detector, two blocks of plastic scintillator have been assembled to act as an anti-Compton system. The detectors were produced by polymerisation of styrene monomer with PPO (2,5-diphenyloxazole) and POPOP (1,4-bis(5-phenyl-2-oxazolyl)benzene) in proportions of 0.5 and 0.05, respectively. The transparency of this detector was evaluated by excitation with a {sup 241}Am source located directly on the back surface of the plastic coupled to a photomultiplier. The light attenuation as a function of detector thickness was fitted to a two-exponential function: relative pulse height = 0.519 e{sup -0.0016x} + 0.481 e{sup -0.02112x}. Four radioactive sources, {sup 22}Na, {sup 54}Mn, {sup 137}Cs and {sup 131}I, were used to evaluate the performance of the system. The Compton reduction factor, determined by the ratio of the energy peak values of the suppressed and unsuppressed spectra, was 1.16. The Compton suppression factor, determined by the ratio of the net photopeak area to the area of an equal spectral width in the Compton continuum, was approximately 1.208 {+-} 0.109. The sensitivity of the system, defined as the least amount of radioactivity that can be quantified in the photopeak region, was 9.44 cps. The detector was first assembled to be applied in biological studies of whole-body-counter measurements of small animals. Using a phantom (small-animal simulator) and a point {sup 137}Cs source located in the central region of the well counter, the geometrical detection efficiency was about 5%. (author) 40 refs., 28 figs., 2 tabs.
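The quoted two-exponential fit can be evaluated directly as a function of detector thickness x, assuming both exponents multiply x (the x of the first exponent appears to have been dropped in transcription):

```python
# The quoted light-attenuation fit as a function of detector thickness x.
import math

def relative_pulse_height(x_mm: float) -> float:
    """Quoted fit: 0.519*exp(-0.0016*x) + 0.481*exp(-0.02112*x)."""
    return 0.519 * math.exp(-0.0016 * x_mm) + 0.481 * math.exp(-0.02112 * x_mm)

print(round(relative_pulse_height(0.0), 3))   # 1.0 at zero thickness
print(round(relative_pulse_height(50.0), 3))  # light loss deeper in the block
```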
A filtered backprojection reconstruction algorithm for Compton camera
Energy Technology Data Exchange (ETDEWEB)
Lojacono, Xavier; Maxim, Voichita; Peyrin, Francoise; Prost, Remy [Lyon Univ., Villeurbanne (France). CNRS, Inserm, INSA-Lyon, CREATIS, UMR5220; Zoglauer, Andreas [California Univ., Berkeley, CA (United States). Space Sciences Lab.
2011-07-01
In this paper we present a filtered backprojection reconstruction algorithm for Compton camera detectors of particles. Compared to iterative methods, widely used for the reconstruction of images from Compton camera data, analytical methods are fast, easy to implement and avoid convergence issues. The method we propose is exact for an idealized Compton camera composed of two parallel plates of infinite dimension. We show that it copes well with a low number of detected photons simulated from a realistic device. Images reconstructed from both synthetic data and realistic ones obtained with Monte Carlo simulations demonstrate the efficiency of the algorithm. (orig.)
High-pressure system for Compton scattering experiments
International Nuclear Information System (INIS)
Oomi, G.; Honda, F.; Kagayama, T.; Itoh, F.; Sakurai, H.; Kawata, H.; Shimomura, O.
1998-01-01
High-pressure apparatus for Compton scattering experiments has been developed to study the momentum distribution of conduction electrons in metals and alloys at high pressure. This apparatus was applied to observe the Compton profile of metallic Li under pressure. It was found that the Compton profile at high pressure could be obtained within several hours by using this apparatus and synchrotron radiation. The result on the pressure dependence of the Fermi momentum of Li obtained here is in good agreement with that predicted from the free-electron model
Beam Diagnostics for Laser Undulator Based on Compton Backward Scattering
Kuroda, R
2005-01-01
A compact soft X-ray source is required in various research fields such as materials and biological science. A laser undulator based on Compton backward scattering has been developed as a compact soft X-ray source for biological observation at Waseda University. It operates in the water window region (250 eV - 500 eV) using the interaction between a 1047 nm Nd:YLF laser (10 ps FWHM) and an approximately 5 MeV high-quality electron beam (10 ps FWHM) generated from an rf gun system. This X-ray energy range covers the K-shell absorption edges of oxygen, carbon and nitrogen, which are the main constituents of living tissue. Since the absorption coefficient of water is much smaller than that of protein in this range, dehydration of the specimens is not necessary. To generate the soft X-ray pulse stably, electron beam diagnostics have been developed, such as emittance measurement using a double-slit scan technique and bunch length measurement using a two-frequency analysis technique. In this confere...
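As a rough consistency check of the quoted water-window energies, the textbook head-on inverse-Compton scaling E_x ≈ 4γ²E_laser can be evaluated (our own estimate, not the Waseda group's calculation):

```python
# Back-of-envelope maximum backscattered photon energy, valid for
# gamma * E_laser << m_e c^2.
M_E_C2_MEV = 0.511  # electron rest energy

def backscatter_energy_ev(kinetic_mev: float, laser_wavelength_nm: float) -> float:
    """Head-on inverse-Compton photon energy, approximately 4*gamma^2*E_laser."""
    gamma = 1.0 + kinetic_mev / M_E_C2_MEV
    e_laser_ev = 1239.84 / laser_wavelength_nm  # hc/lambda in eV
    return 4.0 * gamma**2 * e_laser_ev

# ~5 MeV electrons on a 1047 nm Nd:YLF laser give a few hundred eV,
# i.e. soft X-rays in the vicinity of the water window.
print(round(backscatter_energy_ev(5.0, 1047.0)))
```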
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
Directory of Open Access Journals (Sweden)
O. Tichý
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
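A minimal numerical sketch of the linear inverse problem described above, using a plain Tikhonov (ridge) estimate as a stand-in for the paper's variational-Bayes machinery; the SRS matrix, source term, and regularization weight below are synthetic illustrations:

```python
# Regularized source-term estimation: observations y ~ M x with an SRS
# matrix M; the regularized estimate is x = (M^T M + lam*I)^{-1} M^T y.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_src = 200, 40
M = np.abs(rng.standard_normal((n_obs, n_src)))  # stand-in SRS matrix (non-negative)
x_true = np.zeros(n_src)
x_true[5], x_true[6] = 3.0, 1.0                  # a short release episode
y = M @ x_true + 0.05 * rng.standard_normal(n_obs)

lam = 1.0  # tuning parameter set by hand here; the VB method infers it from data
x_hat = np.linalg.solve(M.T @ M + lam * np.eye(n_src), M.T @ y)
```

The manual choice of `lam` is exactly the kind of poorly quantified tuning parameter the abstract argues should instead be estimated from the observations.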
Compact X-ray source at STF (Super Conducting Accelerator Test Facility)
International Nuclear Information System (INIS)
Urakawa, J
2012-01-01
KEK-STF is a superconducting linear accelerator test facility for developing accelerator technologies for the ILC (International Linear Collider). We are supported by the Japanese Ministry (MEXT) in using STF to develop advanced accelerator technologies for a compact high-brightness X-ray source. Since we are required to demonstrate the generation of high-brightness X-rays based on inverse Compton scattering using superconducting linear accelerator and laser storage cavity technologies by October of next year (2012), the design has been fixed and the installation of accelerator components is under way. The necessary technology developments and the planned experiment are explained.
Effects of thermal plasma on self-absorbed synchrotron sources in active galactic nuclei
International Nuclear Information System (INIS)
De Kool, M.; Begelman, M.C.
1989-01-01
The observable effects of a thermal background plasma in a self-absorbed synchrotron source are reviewed, in the context of a model for the central engine of an active galactic nucleus (AGN). Considering the effects of free-free absorption and emission, Thomson and Compton scattering, and spatial stratification, it is found that the observations set an upper limit on the thermal electron scattering optical depth in the central synchrotron-emitting region of an AGN. The upper limit, tau(max) about 1, results mainly from the apparent absence of induced Compton scattering and inverse thermal Comptonization effects. The low value of tau(max) poses some problems for nonthermal models of the AGN continuum that can be partly resolved by assuming a thin disk or layer-like geometry for the source, with (h/R) less than about 0.01. A likely site for the synchrotron-producing region seems to be the surface of an accretion disk or torus. 20 refs
Compton scatter imaging: A tool for historical exploration
International Nuclear Information System (INIS)
Harding, G.; Harding, E.
2010-01-01
This review discusses the principles and technological realisation of a technique, termed Compton scatter imaging (CSI), which is based on spatially resolved detection of Compton-scattered X-rays. The focus of this review is on objects of historical interest. Following a historical survey of CSI, a description is given of the major characteristics of Compton X-ray scatter. In particular, back-scattered X-rays allow massive objects to be imaged which would otherwise be too absorbing for the conventional transmission X-ray technique. The ComScan (an acronym for Compton scatter scanner) is a commercially available backscatter imaging system, which is discussed here in some detail. ComScan images from some artefacts of historical interest, namely a fresco, an Egyptian mummy and a mediaeval clasp, are presented and their use in historical analysis is indicated. The utility of scientific and technical advance for not only exploring history, but also restoring it, is briefly discussed.
Applicability of compton imaging in nuclear decommissioning activities
International Nuclear Information System (INIS)
Ljubenov, V.Lj.; Marinkovic, P.M.
2002-01-01
During the decommissioning of nuclear facilities significant part of the activities is related to the radiological characterization, waste classification and management. For these purposes a relatively new imaging technique, based on information from the gamma radiation that undergoes Compton scattering, is applicable. Compton imaging systems have a number of advantages for nuclear waste characterization, such as identifying hot spots in mixed waste in order to reduce the volume of high-level waste requiring extensive treatment or long-term storage, imaging large contaminated areas and objects etc. Compton imaging also has potential applications for monitoring of production, transport and storage of nuclear materials and components. This paper discusses some system design requirements and performance specifications for these applications. The advantages of Compton imaging are compared to competing imaging techniques. (author)
Virtual compton scattering off protons at moderately large momentum transfer
Energy Technology Data Exchange (ETDEWEB)
Kroll, P.; Schuermann, M. [Wuppertal Univ. (Gesamthochschule) (Germany); Guichon, P.A.M. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d'Astrophysique, de la Physique des Particules, de la Physique Nucleaire et de l'Instrumentation Associee
1995-06-28
The amplitudes for virtual Compton scattering off protons are calculated within the framework of the diquark model in which protons are viewed as being built up by quarks and diquarks. The latter objects are treated as quasi-elementary constituents of the proton. Virtual Compton scattering, electroproduction of photons and the Bethe-Heitler contamination are discussed for various kinematical situations. We particularly emphasize the role of the electron asymmetry for measuring the relative phases between the virtual Compton and the Bethe-Heitler amplitudes. It is also shown that the model is able to describe very well the experimental data for real Compton scattering off protons. (authors). 35 refs., 8 figs., 1 tab.
Electronic structure of hafnium: A Compton profile study
Indian Academy of Sciences (India)
To extract the true Compton profile from the raw data, the raw data were corrected for ... For the present sample and experimental conditions, the contribution of ... are in better agreement with the simple renormalized free atom calculations for.
Virtual compton scattering off protons at moderately large momentum transfer
International Nuclear Information System (INIS)
Kroll, P.; Schuermann, M.; Guichon, P.A.M.
1995-01-01
The amplitudes for virtual Compton scattering off protons are calculated within the framework of the diquark model in which protons are viewed as being built up by quarks and diquarks. The latter objects are treated as quasi-elementary constituents of the proton. Virtual Compton scattering, electroproduction of photons and the Bethe-Heitler contamination are discussed for various kinematical situations. We particularly emphasize the role of the electron asymmetry for measuring the relative phases between the virtual Compton and the Bethe-Heitler amplitudes. It is also shown that the model is able to describe very well the experimental data for real Compton scattering off protons. (authors). 35 refs., 8 figs., 1 tab
Deconvolution of shift-variant broadening for Compton scatter imaging
International Nuclear Information System (INIS)
Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.
1999-01-01
A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals
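A toy version of the least-squares unfolding idea described above, assuming a Gaussian broadening kernel whose width varies across the spectrum (a stand-in for the shift-variant Doppler broadening; a small Tikhonov term is added for numerical stability, which the paper does not specify):

```python
# The recorded energy spectrum g is modeled as a shift-variant broadening
# matrix B applied to the angular distribution f; f is recovered by
# solving the least-squares problem for B f = g.
import numpy as np

n = 80
centers = np.arange(n, dtype=float)
widths = 1.0 + 0.02 * centers                      # broadening grows across the axis
B = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / widths[None, :]) ** 2)
B /= B.sum(axis=0, keepdims=True)                  # unit-area kernel per column

f_true = np.zeros(n)
f_true[30], f_true[55] = 1.0, 0.5                  # two sharp angular features
g = B @ f_true + 1e-4 * np.random.default_rng(2).standard_normal(n)

# Least-squares unfolding with a small ridge term for stability.
f_est = np.linalg.solve(B.T @ B + 1e-6 * np.eye(n), B.T @ g)
```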
Gamma-ray burst observations with the Compton/Ulysses/Pioneer-Venus network
International Nuclear Information System (INIS)
Cline, T.L.; Hurley, K.C.; Sommer, M.; Boer, M.; Niel, M.; Fishman, G.J.; Kouveliotou, C.; Meegan, C.A.; Paciesas, W.S.; Wilson, R.B.; Fenimore, E.E.; Laros, J.G.; Klebesadel, R.W.
1993-01-01
The third and latest interplanetary network for the precise directional analysis of gamma ray bursts consists of the Burst and Transient Source Experiment in Compton Gamma Ray Observatory and instruments on Pioneer-Venus Orbiter and the deep-space mission Ulysses. The unsurpassed resolution of the BATSE instrument, the use of refined analysis techniques, and Ulysses' distance of up to 6 AU all contribute to a potential for greater precision than had been achieved with former networks. Also, the departure of Ulysses from the ecliptic plane in 1992 avoids any positional alignment of the three instruments that would lessen the source directional accuracy
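The directional analysis rests on standard arrival-time triangulation: the burst's time delay between two spacecraft confines its direction to an annulus on the sky about the baseline vector, with cos θ = cΔt/D. A sketch of that geometry (our own, not the network's analysis pipeline):

```python
# Triangulation annulus from a burst arrival-time delay between two
# spacecraft separated by baseline D: cos(theta) = c * dt / D.
import math

C_KM_S = 299792.458  # speed of light in km/s

def annulus_angle_deg(dt_s: float, baseline_km: float) -> float:
    """Half-opening angle of the triangulation annulus about the baseline."""
    return math.degrees(math.acos(C_KM_S * dt_s / baseline_km))

# Zero delay puts the source perpendicular to the baseline; a longer
# baseline (Ulysses at up to 6 AU) turns the same timing uncertainty
# into a smaller angular error, hence the improved precision.
print(round(annulus_angle_deg(1000.0, 8.98e8), 1))
```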
International Nuclear Information System (INIS)
Ahmed, Y. A.; Ewa, I.O.B.; Funtua, I.I.; Jonah, S.A.; Landsberger, S.
2007-01-01
Compton Suppression Factors (SF) and Compton Reduction Factors (RF) of the UT Austin Compton suppression spectrometer, parameters characterizing the system performance, were measured using ¹³⁷Cs and ⁶⁰Co point sources. The system performance was evaluated as a function of energy and geometry. The (P/C), A(P/C), (P/T), Cp, and Ce were obtained for each of the parameters. The natural background reduction factors in the anticoincidence and normal modes were calculated and their effect on the detection limit of biological samples evaluated. Applicability of the spectrometer and the method for biological samples was tested in the measurement of twenty-four elements (Ba, Sr, I, Br, Cu, V, Mg, Na, Cl, Mn, Ca, Sn, In, K, Mo, Cd, Zn, As, Sb, Ni, Rb, Cs, Fe, and Co) commonly found in food, milk, tea and tobacco items. They were determined from seven National Institute of Standards and Technology (NIST) certified reference materials (rice flour, oyster tissue, non-fat powdered milk, peach leaves, tomato leaves, apple leaves, and citrus leaves). Our results show good agreement with the NIST certified values, indicating that the method developed in the present study is suitable for the determination of the aforementioned elements in biological samples without undue interference problems
Energy Technology Data Exchange (ETDEWEB)
Hall, G. N., E-mail: hall98@llnl.gov; Izumi, N.; Tommasini, R.; Carpenter, A. C.; Palmer, N. E.; Zacharias, R.; Felker, B.; Holder, J. P.; Allen, F. V.; Bell, P. M.; Bradley, D.; Montesanti, R.; Landen, O. L. [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California 94550 (United States)
2014-11-15
Compton radiography is an important diagnostic for Inertial Confinement Fusion (ICF), as it provides a means to measure the density and asymmetries of the DT fuel in an ICF capsule near the time of peak compression. The AXIS instrument (ARC (Advanced Radiography Capability) X-ray Imaging System) is a gated detector in development for the National Ignition Facility (NIF), and will initially be capable of recording two Compton radiographs during a single NIF shot. The principal reason for the development of AXIS is the requirement for significantly improved detection quantum efficiency (DQE) at high x-ray energies. AXIS will be the detector for Compton radiography driven by the ARC laser, which will be used to produce Bremsstrahlung X-ray backlighter sources over the range of 50 keV–200 keV for this purpose. It is expected that AXIS will be capable of recording these high-energy x-rays with a DQE several times greater than other X-ray cameras at NIF, as well as providing a much larger field of view of the imploded capsule. AXIS will therefore provide an image with larger signal-to-noise that will allow the density and distribution of the compressed DT fuel to be measured with significantly greater accuracy as ICF experiments are tuned for ignition.
Narayan, Ramesh; Zhu, Yucong; Psaltis, Dimitrios; Sądowski, Aleksander
2016-03-01
We describe Hybrid Evaluator for Radiative Objects Including Comptonization (HEROIC), an upgraded version of the relativistic radiative post-processor code HERO described in a previous paper, but which now Includes Comptonization. HEROIC models Comptonization via the Kompaneets equation, using a quadratic approximation for the source function in a short characteristics radiation solver. It employs a simple form of accelerated lambda iteration to handle regions of high scattering opacity. In addition to solving for the radiation field, HEROIC also solves for the gas temperature by applying the condition of radiative equilibrium. We present benchmarks and tests of the Comptonization module in HEROIC with simple 1D and 3D scattering problems. We also test the ability of the code to handle various relativistic effects using model atmospheres and accretion flows in a black hole space-time. We present two applications of HEROIC to general relativistic magnetohydrodynamics simulations of accretion discs. One application is to a thin accretion disc around a black hole. We find that the gas below the photosphere in the multidimensional HEROIC solution is nearly isothermal, quite different from previous solutions based on 1D plane parallel atmospheres. The second application is to a geometrically thick radiation-dominated accretion disc accreting at 11 times the Eddington rate. Here, the multidimensional HEROIC solution shows that, for observers who are on axis and look down the polar funnel, the isotropic equivalent luminosity could be more than 10 times the Eddington limit, even though the spectrum might still look thermal and show no signs of relativistic beaming.
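For reference, the Kompaneets equation that HEROIC solves can be written in its standard textbook form (photon occupation number n, dimensionless photon energy x, Compton y-parameter); the code's actual discretization may differ in detail:

```latex
\frac{\partial n}{\partial y} \;=\; \frac{1}{x^{2}}\,
\frac{\partial}{\partial x}\!\left[\, x^{4}\left(\frac{\partial n}{\partial x} + n + n^{2}\right)\right],
\qquad x \equiv \frac{h\nu}{k T_{e}} .
```

The n term describes Compton cooling, the ∂n/∂x term Doppler diffusion, and the n² term induced scattering.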
Bin mode estimation methods for Compton camera imaging
International Nuclear Information System (INIS)
Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.
2014-01-01
We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods
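As an illustration of the baseline these methods extend, a minimal ML-EM update for binned Poisson data is sketched below. The accelerated EM and MAP variants proposed in the record modify this iteration; the system-matrix representation and names here are assumptions.

```python
import numpy as np

def mlem(system_matrix, counts, n_iter=50):
    """Plain ML-EM for binned emission data: counts ~ Poisson(A @ lam).
    Each iteration multiplies the image by the back-projected ratio of
    measured to predicted counts, normalized by the sensitivity."""
    A = np.asarray(system_matrix, dtype=float)   # (n_bins, n_voxels)
    y = np.asarray(counts, dtype=float)          # (n_bins,)
    sens = A.sum(axis=0)                         # sensitivity image
    lam = np.ones(A.shape[1])                    # flat initial image
    for _ in range(n_iter):
        proj = A @ lam
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return lam
```

For a Compton camera, each row of A would encode the probability that a voxel contributes to a given (cone-surface) data bin.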
Deeply virtual Compton scattering off ⁴He
International Nuclear Information System (INIS)
Hattawy, M.
2015-01-01
The ⁴He nucleus is of particular interest for studying nuclear GPDs (Generalized Parton Distributions), as its partonic structure is described by only one chirally-even GPD. It is also a simple few-body system with a high density, which makes it the ideal target to investigate nuclear effects on partons. The experiment described in this thesis is JLab-E08-24, which was carried out in 2009 by the CLAS collaboration during the 'EG6' run. In this experiment, a 6 GeV longitudinally-polarized electron beam was scattered off a 6 atm ⁴He gaseous target. In addition to the CLAS detector, a Radial Time Projection Chamber (RTPC), to detect low-energy nuclear recoils, and an Inner Calorimeter (IC), to improve the detection of photons at very forward angles, were used. We carried out a full analysis of our 6 GeV dataset, showing the feasibility of measuring exclusive nuclear Deeply Virtual Compton Scattering (DVCS) reactions. The analysis included the identification of the final-state particles, the DVCS event selection, and the π⁰ background subtraction. The beam-spin asymmetry was then extracted for both DVCS channels and compared to that of the free-proton DVCS reaction and to theoretical predictions from two models. Finally, the real and imaginary parts of the ⁴He CFF (Compton Form Factor) H_A have been extracted. Different levels of agreement were found between our measurements and the theoretical calculations. This thesis is organized as follows: In chapter 1, the available theoretical tools to study hadronic structure are presented, with an emphasis on nuclear effects and GPDs. In chapter 2, the characteristics of the CLAS spectrometer are reviewed. In chapter 3, the working principle and the calibration aspects of the RTPC are discussed. In chapter 4, the identification of the final-state particles and the Monte-Carlo simulation are presented. In chapter 5, the selection of the DVCS events, the background subtraction, and uncertainty
A flexible geometry Compton camera for industrial gamma ray imaging
International Nuclear Information System (INIS)
Royle, G.J.; Speller, R.D.
1996-01-01
A design for a Compton scatter camera is proposed which is applicable to gamma ray imaging within limited access industrial sites. The camera consists of a number of single element detectors arranged in a small cluster. Coincidence circuitry enables the detectors to act as a scatter camera. Positioning the detector cluster at various locations within the site, and subsequent reconstruction of the recorded data, allows an image to be obtained. The camera design allows flexibility to cater for limited space or access simply by positioning the detectors in the optimum geometric arrangement within the space allowed. The quality of the image will be limited but imaging could still be achieved in regions which are otherwise inaccessible. Computer simulation algorithms have been written to optimize the various parameters involved, such as geometrical arrangement of the detector cluster and the positioning of the cluster within the site, and to estimate the performance of such a device. Both scintillator and semiconductor detectors have been studied. A prototype camera has been constructed which operates three small single element detectors in coincidence. It has been tested in a laboratory simulation of an industrial site. This consisted of a small room (2 m wide x 1 m deep x 2 m high) into which the only access points were two 6 cm diameter holes in a side wall. Simple images of Cs-137 sources have been produced. The work described has been done on behalf of BNFL for applications at their Sellafield reprocessing plant in the UK
DEFF Research Database (Denmark)
Gandhi, P.; Annuar, A.; Lansbury, G. B.
2017-01-01
We present NuSTAR X-ray observations of the active galactic nucleus (AGN) in NGC 7674. The source shows a flat X-ray spectrum, suggesting that it is obscured by Compton-thick gas columns. Based upon long-term flux dimming, previous work suggested the alternate possibility that the source is a rec...
Louro, Vinicius Hector Abud; Mantovani, Marta Silvia Maria
2012-05-01
The Alto do Paranaíba Igneous Province (APIP) is known for its great mineral exploration interest in phosphates, niobium, titanium, and diamonds, among others. In 2005 and 2006, the Economic Development Company of Minas Gerais (CODEMIG — http://www.comig.com.br/) performed an airborne magnetic survey over the portion of this igneous province that belongs to Minas Gerais state, denominated Area 7. This survey revealed at the coordinates (19°45'S, 46°10'W) a tripolar anomaly, referred to here as Pratinha I. The anomaly presents no evidence of outcropping or topographic remodeling, so in the absence of boreholes or direct studies of its sources, geophysical methods are the best and least expensive way to study the body in the subsurface. In addition, two ground gravimetric surveys were performed in 2009 and 2010, confirming the existence of a density contrast over the region of the magnetic anomaly. Through magnetic and gravimetric processing, 3D modeling, and inversions, it was possible to estimate the geometry, density, and magnetic susceptibility which, when analyzed together with the regional geology, enabled the proposition of an igneous intrusion of probable alkaline or kamafugitic composition to explain the gravimetric and magnetic response in the region.
Compton Composites Late in the Early Universe
Directory of Open Access Journals (Sweden)
Frederick Mayer
2014-07-01
Beginning roughly two hundred years after the big bang, a tresino phase transition generated Compton-scale composite particles and converted most of the ordinary plasma baryons into new forms of dark matter. Our model consists of ordinary electrons and protons that have been bound into mostly undetectable forms. This picture provides an explanation of the composition and history of ordinary-to-dark matter conversion starting with, and maintaining, a critical-density Universe. The tresino phase transition started the conversion of ordinary matter plasma into tresino-proton pairs prior to the recombination era. We derive the appropriate Saha–Boltzmann equilibrium to determine the plasma composition throughout the phase transition and later. The baryon population is shown to be quickly modified from ordinary matter plasma before the transition to a small amount of ordinary matter and a much larger amount of dark matter after the transition. We describe the tresino phase transition and the origin, quantity, and evolution of the dark matter as it takes place from late in the early Universe until the present.
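For readers unfamiliar with Saha–Boltzmann equilibria, the standard hydrogen Saha equation is a useful structural analogue (the paper's tresino equilibrium has the same form but different binding energy and statistical weights; the solver below uses the textbook hydrogen case, not the paper's):

```python
import math

# Physical constants (SI)
K_B = 1.380649e-23               # Boltzmann constant [J/K]
M_E = 9.1093837e-31              # electron mass [kg]
H   = 6.62607015e-34             # Planck constant [J s]
CHI_H = 13.6 * 1.602176634e-19   # hydrogen ionization energy [J]

def saha_ionization_fraction(n_tot, T):
    """Ionized fraction x from the hydrogen Saha equation,
    x^2 / (1 - x) = S(T) / n_tot, solved as a quadratic in x."""
    S = (2.0 * math.pi * M_E * K_B * T / H**2) ** 1.5 * math.exp(-CHI_H / (K_B * T))
    a = S / n_tot
    # positive root of x^2 + a x - a = 0
    return (-a + math.sqrt(a * a + 4.0 * a)) / 2.0
```

The same quadratic structure governs any two-state binding equilibrium: the bound fraction switches on sharply once kT drops well below the binding energy.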
Scaling limit of deeply virtual Compton scattering
Energy Technology Data Exchange (ETDEWEB)
A. Radyushkin
2000-07-01
The author outlines a perturbative QCD approach to the analysis of the deeply virtual Compton scattering process γ*p → γp′ in the limit of vanishing momentum transfer t = (p′ − p)². The DVCS amplitude in this limit exhibits a scaling behavior described by two-argument distributions F(x,y) which specify the fractions of the initial momentum p and the momentum transfer r ≡ p′ − p carried by the constituents of the nucleon. The kernel R(x,y;ξ,η) governing the evolution of the non-forward distributions F(x,y) has a remarkable property: it produces the GLAPD evolution kernel P(x/ξ) when integrated over y and reduces to the Brodsky-Lepage evolution kernel V(y,η) after the x-integration. This property is used to construct the solution of the one-loop evolution equation for the flavor non-singlet part of the non-forward quark distribution.
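The reduction property described above can be stated schematically (integration limits and normalizations suppressed; this is a paraphrase of the abstract, not the paper's exact equations):

```latex
\int R(x,y;\xi,\eta)\,dy \;=\; P\!\left(x/\xi\right),
\qquad
\int R(x,y;\xi,\eta)\,dx \;=\; V(y,\eta).
```

That is, one kernel interpolates between the forward (GLAPD) and exclusive (Brodsky-Lepage) evolution regimes.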
The first demonstration of the concept of “narrow-FOV Si/CdTe semiconductor Compton camera”
Energy Technology Data Exchange (ETDEWEB)
Ichinohe, Yuto, E-mail: ichinohe@astro.isas.jaxa.jp [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Uchida, Yuusuke; Watanabe, Shin [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Edahiro, Ikumi [Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Hayashi, Katsuhiro [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); Kawano, Takafumi; Ohno, Masanori [Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Ohta, Masayuki [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); Takeda, Shin' ichiro [Okinawa Institute of Science and Technology Graduate University, 1919-1 Tancha, Onna-son, Okinawa 904-0495 (Japan); Fukazawa, Yasushi [Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Katsuragawa, Miho [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Nakazawa, Kazuhiro [University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Odaka, Hirokazu [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); Tajima, Hiroyasu [Solar-Terrestrial Environment Laboratory, Nagoya University, Furo-cho, Chikusa, Nagoya, Aichi 464-8601 (Japan); Takahashi, Hiromitsu [Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, 
Hiroshima 739-8526 (Japan); and others
2016-01-11
The Soft Gamma-ray Detector (SGD), to be deployed on board the ASTRO-H satellite, has been developed to provide the highest-sensitivity observations of celestial sources in the energy band of 60–600 keV by employing a detector concept which uses a Compton camera whose field-of-view is restricted by a BGO shield to a few degrees (narrow-FOV Compton camera). In this concept, the background from outside the FOV can be heavily suppressed by constraining the incident direction of the gamma ray reconstructed by the Compton camera to be consistent with the narrow FOV. We, for the first time, demonstrate the validity of the concept using background data taken during the thermal vacuum test and the low-temperature environment test of the flight model of the SGD on the ground. We show that the measured background level is suppressed to less than 10% by combining event rejection using the anti-coincidence trigger of the active BGO shield with Compton event reconstruction techniques. More than 75% of the signals from the field-of-view are retained against the background rejection, which clearly demonstrates the improvement of the signal-to-noise ratio. The estimated effective area of 22.8 cm² meets the mission requirement even though not all of the operational parameters of the instrument have been fully optimized yet.
Virtual compton scattering at low energy; Diffusion compton virtuelle a basse energie
Energy Technology Data Exchange (ETDEWEB)
Lhuillier, D
1997-09-01
The work described in this PhD thesis is a study of Virtual Compton Scattering (VCS) off the proton at low energy, below the pion production threshold. Our experiment was carried out at MAMI using two high-resolution spectrometers. Experimentally, the VCS process is the electroproduction of photons off a liquid hydrogen target. First results of the data analysis, including radiative corrections, are presented and compared with the low-energy theorem prediction. VCS is an extension of Real Compton Scattering. The virtuality of the incoming photon gives access to new observables of the nucleon internal structure which are complementary to the elastic form factors: the generalized polarizabilities (GP). They are functions of the squared invariant mass of the virtual photon; in the limit of vanishing photon mass these observables reduce to the usual electric and magnetic polarizabilities. Our experiment is the first measurement of the VCS process at a virtual photon mass squared of 0.33 GeV². The experimental part presents the analysis method. The high precision needed in the absolute cross-section measurement required an accurate estimate of the radiative corrections to VCS. This new calculation, performed in the dimensional regularization scheme, constitutes the theoretical part of this thesis. At low q', preliminary results agree with the low-energy theorem prediction. At higher q', the subtraction of the low-energy theorem contribution to extract the GPs is discussed. (author)
Compton spectra of atoms at high x-ray intensity
Son, Sang-Kil; Geffert, Otfried; Santra, Robin
2017-03-01
Compton scattering is the nonresonant inelastic scattering of an x-ray photon by an electron and has been used to probe the electron momentum distribution in gas-phase and condensed-matter samples. In the low x-ray intensity regime, Compton scattering from atoms dominantly comes from bound electrons in neutral atoms, neglecting contributions from bound electrons in ions and free (ionized) electrons. In contrast, in the high x-ray intensity regime, the sample experiences severe ionization via x-ray multiphoton multiple ionization dynamics. Thus, it becomes necessary to take into account all the contributions to the Compton scattering signal when atoms are exposed to high-intensity x-ray pulses provided by x-ray free-electron lasers (XFELs). In this paper, we investigate the Compton spectra of atoms at high x-ray intensity, using an extension of the integrated x-ray atomic physics toolkit, xatom. As the x-ray fluence increases, there is a significant contribution from ionized electrons to the Compton spectra, which gives rise to strong deviations from the Compton spectra of neutral atoms. The present study provides not only understanding of the fundamental XFEL-matter interaction but also crucial information for single-particle imaging experiments, where Compton scattering is no longer negligible.
Development of a Compton suppression whole body counting for small animals
International Nuclear Information System (INIS)
Martini, Elaine
1995-01-01
The basic operation, design, and construction of the plastic scintillator detector are described. In order to increase the sensitivity of this detector, two blocks of plastic scintillator were assembled to act as an anti-Compton system. The detectors were produced by polymerization of styrene monomer with PPO (2,5-diphenyloxazole) and POPOP (1,4-bis(5-phenyl-2-oxazolyl)benzene) in proportions of 0.5 and 0.05, respectively. The transparency of this detector was evaluated by excitation with a ²⁴¹Am source located directly on the back surface of the plastic coupled to a photomultiplier. The light attenuation as a function of detector thickness x was fitted to a two-exponential function: relative pulse height = 0.519 e^(−0.0016x) + 0.481 e^(−0.02112x). Four radioactive sources, ²²Na, ⁵⁴Mn, ¹³⁷Cs, and ¹³¹I, were used to evaluate the performance of this system. The Compton reduction factor, determined by the ratio of the energy peak values of the suppressed and unsuppressed spectra, was 1.16. The Compton suppression factor, determined by the ratio of the net photopeak area to the area of an equal spectral width in the Compton continuum, was approximately 1.208 ± 0.109. The sensitivity of the system, defined as the least amount of radioactivity that can be quantified in the photopeak region, was 9.44 cps. The detector was first assembled to be applied in biological studies of whole-body counting measurements of small animals. Using a phantom (small-animal simulator) and a point ¹³⁷Cs source located in the central region of the well counter, the geometrical detection efficiency was about 5%. (author)
Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2012-10-01
In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of the MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, the MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the conical surface definitions ("thick" conical surfaces) which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple-coincidence approach, which limits the image resolution that can be obtained with the MCCC and the TCR algorithm.
Non-Proportionality of Electron Response and Energy Resolution of Compton Electrons in Scintillators
Swiderski, L.; Marcinkowski, R.; Szawlowski, M.; Moszynski, M.; Czarnacki, W.; Syntfeld-Kazuch, A.; Szczesniak, T.; Pausch, G.; Plettner, C.; Roemer, K.
2012-02-01
Non-proportionality of light yield and energy resolution of Compton electrons in three scintillators (LaBr3:Ce, LYSO:Ce, and CsI:Tl) were studied over a wide energy range from 10 keV up to 1 MeV. The experimental setup comprised a High Purity Germanium detector and the tested scintillators coupled to a photomultiplier. Probing the non-proportionality and energy resolution curves at different energies was achieved by changing the position of various radioactive sources with respect to both detectors. The distance between the detectors and the source was kept small to make use of the Wide Angle Compton Coincidence (WACC) technique, which allowed us to scan a large range of scattering angles simultaneously and obtain a relatively high coincidence rate of 100 cps using weak sources of about 10 μCi activity. The results are compared with those obtained by direct irradiation of the tested scintillators with gamma-ray sources and fitting the full-energy peaks.
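The angle-energy relation exploited by the WACC technique is the standard Compton formula; the sketch below (function names are our own) gives the scattered-photon and Compton-electron energies as a function of scattering angle:

```python
import math

M_E_C2 = 511.0  # electron rest energy [keV]

def scattered_photon_energy(e_kev, theta_rad):
    """Compton formula: energy of a photon of initial energy e_kev
    after scattering through angle theta_rad."""
    return e_kev / (1.0 + (e_kev / M_E_C2) * (1.0 - math.cos(theta_rad)))

def electron_energy(e_kev, theta_rad):
    """Kinetic energy transferred to the Compton electron."""
    return e_kev - scattered_photon_energy(e_kev, theta_rad)
```

Scanning θ from forward scattering to backscatter sweeps the electron energy from near zero up to the Compton edge, which is how varying the source position samples the non-proportionality curve.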
Directory of Open Access Journals (Sweden)
Joel Sereno
2010-01-01
Inverse kinematics is the process of converting a Cartesian point in space into a set of joint angles to more efficiently move the end effector of a robot to a desired orientation. This project investigates the inverse kinematics of a robotic hand with fingers under various scenarios. Assuming the parameters of a provided robot, a general equation for the end-effector point was calculated and used to plot the region of space that it can reach. Further, the benefits obtained from the addition of a prismatic joint versus an extra variable-angle joint were considered. The results confirmed that having more movable parts, such as prismatic joints and changing angles, increases the effective reach of a robotic hand.
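As a minimal concrete instance of inverse kinematics (far simpler than the multi-finger hand studied in this record, with assumed link-length parameters), the closed-form solution for a planar two-link arm:

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form inverse kinematics for a planar two-link arm:
    returns joint angles (theta1, theta2) placing the end effector
    at (x, y), or None if the target is out of reach."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return None  # target outside the annular workspace
    s2 = math.sqrt(1.0 - c2 * c2)
    if elbow_up:
        s2 = -s2     # pick one of the two elbow configurations
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2
```

The reachable region is the annulus |l1 − l2| ≤ r ≤ l1 + l2; adding a prismatic joint effectively widens that annulus, which is the intuition behind the reach result above.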
Compton scatter correction for planar scintigraphic imaging
Energy Technology Data Exchange (ETDEWEB)
Vaan Steelandt, E; Dobbeleir, A; Vanregemorter, J [Algemeen Ziekenhuis Middelheim, Antwerp (Belgium). Dept. of Nuclear Medicine and Radiotherapy
1995-12-01
A major problem in nuclear medicine is image degradation due to Compton scatter in the patient. Photons emitted by the radioactive tracer scatter in collisions with electrons of the surrounding tissue. Due to the resulting loss of energy and change in direction, the scattered photons induce an object-dependent background on the images. This results in a degradation of the contrast of warm and cold lesions. Although theoretically interesting, most of the techniques proposed in the literature, such as the use of symmetrical photopeaks, cannot be implemented on the commonly used gamma camera due to the energy/linearity/sensitivity corrections applied in the detector. A method for a single-energy isotope, based on existing methods with adjustments for daily practice and clinical situations, is proposed. It is assumed that the scatter image, recorded from photons collected within a scatter window adjacent to the photopeak, is a reasonably close approximation of the true scatter component of the image reconstructed from the photopeak window. A fraction `k` of the image from the scatter window is subtracted from the image recorded in the photopeak window to produce the compensated image. The crux of the method is the right value of the factor `k`, which was determined mathematically and confirmed by experiments. To determine `k`, different kinds of scatter media were used and positioned in different ways in order to simulate a clinical situation. For a secondary energy window from 100 to 124 keV below a photopeak window from 126 to 154 keV, a value of 0.7 was found. This value has been verified using both an anthropomorphic thyroid phantom and the Rollo contrast phantom.
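The subtraction step described above is simple to state in code; the sketch below assumes images stored as arrays and uses the k = 0.7 reported for the quoted windows (clipping negative pixels at zero is our addition, not stated in the record):

```python
import numpy as np

K = 0.7  # scatter fraction for a 100-124 keV window under a 126-154 keV photopeak

def scatter_corrected(peak_img, scatter_img, k=K):
    """Dual-energy-window scatter correction: subtract a fraction k of
    the scatter-window image from the photopeak-window image, clipping
    negative pixels to zero."""
    corrected = peak_img.astype(float) - k * scatter_img.astype(float)
    return np.clip(corrected, 0.0, None)
```

The correction improves lesion contrast at the cost of increased pixel noise, since the subtracted scatter image carries its own Poisson fluctuations.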
Polarization observables in Virtual Compton Scattering
International Nuclear Information System (INIS)
Doria, Luca
2007-10-01
Virtual Compton Scattering (VCS) is an important reaction for understanding nucleon structure at low energies. By studying this process, the generalized polarizabilities of the nucleon can be measured. These observables are a generalization of the already known polarizabilities and will permit theoretical models to be challenged on a new level. More specifically, there exist six generalized polarizabilities, and in order to disentangle them all, a double-polarization experiment must be performed. Within this work, the VCS reaction p(e,e'p)γ was measured at MAMI using the A1 Collaboration three-spectrometer setup at Q² = 0.33 (GeV/c)². Using the highly polarized MAMI beam and a recoil proton polarimeter, it was possible to measure both the VCS cross section and the double-polarization observables. Already in 2000, the unpolarized VCS cross section was measured at MAMI. In this new experiment, we could confirm the old data, and furthermore the double-polarization observables were measured for the first time. The data were taken in five periods between 2005 and 2006. In this work, the data were analyzed to extract the cross section and the proton polarization. For the analysis, a maximum-likelihood algorithm was developed together with a full simulation of all the analysis steps. The experiment is limited by low statistics, due mainly to the efficiency of the focal-plane proton polarimeter. To overcome this problem, a new determination and parameterization of the carbon analyzing power was performed. The main result of the experiment is the extraction of a new combination of the generalized polarizabilities using the double-polarization observables. (orig.)