WorldWideScience

Sample records for annihilation probability density

  1. Annihilation probability density and other applications of the Schwinger multichannel method to the positron and electron scattering; Densidade de probabilidade de aniquilacao e outras aplicacoes do metodo multicanal de Schwinger ao espalhamento de positrons e eletrons

    Energy Technology Data Exchange (ETDEWEB)

    Varella, Marcio Teixeira do Nascimento

    2001-12-15

    We have calculated annihilation probability densities (APD) for positron collisions against the He atom and the H{sub 2} molecule. It was found that direct annihilation prevails at low energies, while annihilation following virtual positronium (Ps) formation is the dominant mechanism at higher energies. In room-temperature collisions (10{sup -2} eV) the APD spread over a considerable region, being quite similar to the electronic densities of the targets. The capture of the positron in an electronic Feshbach resonance strongly enhanced the annihilation rate in e{sup +}-H{sub 2} collisions. We also discuss strategies to improve the calculation of the annihilation parameter (Z{sub eff}), after debugging the computational codes of the Schwinger Multichannel Method (SMC). Finally, we consider the inclusion of the Ps formation channel in the SMC and show that effective configurations (pseudo eigenstates of the Hamiltonian of the collision) are able to significantly reduce the computational effort in positron scattering calculations. Cross sections for electron scattering by polyatomic molecules were obtained in three different approximations: static-exchange (SE); static-exchange-plus-polarization (SEP); and multichannel coupling. The calculations for polar targets were improved through the rotational resolution of scattering amplitudes, in which the SMC was combined with the first Born approximation (FBA). In general, elastic cross sections (SE and SEP approximations) showed good agreement with available experimental data for several targets. Multichannel calculations for e{sup -}-H{sub 2}O scattering, on the other hand, presented spurious structures at the electronic excitation thresholds (author)
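
    The abstract refers to the annihilation parameter Z{sub eff} without defining it. As a hedged aside (the relation below is the convention of the positron-scattering literature, not a statement taken from this thesis), Z{sub eff} is introduced through the annihilation rate of a positron in a gas of number density n,

    $\lambda = \pi r_0^2 \, c \, n \, Z_{\mathrm{eff}}$,

    where r{sub 0} is the classical electron radius and c the speed of light, so that Z{sub eff} plays the role of an effective number of electrons per target available for annihilation.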

  2. Probability densities and Lévy densities

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler

    For positive Lévy processes (i.e. subordinators) formulae are derived that express the probability density or the distribution function in terms of power series in time t. The applicability of the results to finance and to turbulence is briefly indicated.

  3. Joint Probabilities of Photon polarization Correlations in $e^{+}e^{-}$ Annihilation

    CERN Document Server

    Manoukian, E B

    2003-01-01

    Joint probability distributions of photon polarization correlations in $e^{+}e^{-}$ annihilation in flight are computed in QED, as well as those corresponding to the case when only one of the photons' polarizations is measured. This provides a dynamical, rather than a kinematical, description of photon polarization correlations, as stemming from the ever precise and realistic theory of QED. Such computations may be relevant to recent and future experiments involved in testing Bell-like inequalities, as described.

  4. Modulation Based on Probability Density Functions

    Science.gov (United States)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
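
    As an illustration only (code not taken from the cited work), the sketch below builds the kind of amplitude histogram the abstract describes from samples of a sinusoidal carrier collected over one half cycle; the sample count and bin count are arbitrary choices.

    ```python
    import numpy as np

    # Sample one half cycle of a unit-amplitude, 1 Hz sinusoidal carrier.
    n_samples = 1000
    t = np.linspace(0.0, 0.5, n_samples, endpoint=False)
    samples = np.sin(2.0 * np.pi * t)

    # Sort the samples by frequency of occurrence, i.e. build a histogram
    # normalized so that it approximates the PDF of the waveform.
    counts, bin_edges = np.histogram(samples, bins=32, density=True)
    ```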

  5. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, and some simulations are presented. The object is to compare the performance of the various methods on small samples and their sensitivity to changes in their parameters, and to attempt to discover at what point a sample is so small that density estimation is no longer worthwhile. (RWR)

  6. SUSY-QCD corrections to (co)annihilation and their impact on the relic density

    Energy Technology Data Exchange (ETDEWEB)

    Harz, Julia [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Herrmann, Bjoern [Laboratoire d'Annecy de Physique Theorique, Annecy-le-Vieux (France); Klasen, Michael [Institute for Theoretical Physics, University of Muenster (Germany); Kovarik, Karol [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany); Le Boulc'h, Quentin [Laboratoire de Physique Subatomique et de Cosmologie, Grenoble (France)

    2013-07-01

    We computed the full O(α{sub s}) supersymmetric QCD corrections for neutralino-stop co-annihilation in the Minimal Supersymmetric Standard Model (MSSM). It is shown that these annihilation channels are phenomenologically relevant within the so-called phenomenological MSSM, in particular in the light of the observation of a Higgs-like particle with a mass of about 126 GeV at the LHC. Numerical results for the co-annihilation cross sections and the predicted neutralino relic density are presented. It will be demonstrated that the impact of including these corrections on the cosmologically preferred region of parameter space is larger than the current experimental uncertainty from WMAP data.

  7. Relic density and CMB constraints on dark matter annihilation with Sommerfeld enhancement

    Science.gov (United States)

    Zavala, Jesús; Vogelsberger, Mark; White, Simon D. M.

    2010-04-01

    We calculate how the relic density of dark matter particles is altered when their annihilation is enhanced by the Sommerfeld mechanism due to a Yukawa interaction between the annihilating particles. Maintaining a dark matter abundance consistent with current observational bounds requires the normalization of the s-wave annihilation cross section to be decreased compared to a model without enhancement. The level of suppression depends on the specific parameters of the particle model, with the kinetic decoupling temperature having the most effect. We find that the cross section can be reduced by as much as an order of magnitude for extreme cases. We also compute the μ-type distortion of the CMB energy spectrum caused by energy injection from such Sommerfeld-enhanced annihilation. Our results indicate that in the vicinity of resonances, associated with bound states, distortions can be large enough to be excluded by the upper limit |μ|≤9.0×10-5 found by the FIRAS (Far Infrared Absolute Spectrophotometer) instrument on the COBE (Cosmic Background Explorer) satellite.
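
    The enhancement factor itself is not reproduced in the abstract. For orientation only, and in the Coulomb limit of the Yukawa interaction with one common velocity convention, the s-wave Sommerfeld factor multiplying the annihilation cross section is

    $S(v) = \dfrac{\pi\alpha/v}{1 - e^{-\pi\alpha/v}}$,

    where $\alpha$ is the dark-matter coupling to the mediator and $v$ the relative velocity; for a mediator of finite mass the enhancement saturates at small $v$ and develops the resonances, associated with bound states, mentioned above.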

  8. Relic density and CMB constraints on dark matter annihilation with Sommerfeld enhancement

    CERN Document Server

    Zavala, Jesus; White, Simon D M

    2009-01-01

    We calculate how the relic density of dark matter particles is altered when their annihilation is enhanced by the Sommerfeld mechanism due to a Yukawa interaction between the annihilating particles. Maintaining a dark matter abundance consistent with current observational bounds requires the normalization of the s-wave annihilation cross section to be decreased compared to a model without enhancement. The level of suppression depends on the specific parameters of the particle model, with the kinetic decoupling temperature having the most effect. We find that the cross section can be reduced by as much as an order of magnitude for extreme cases. We also compute the mu-type distortion of the CMB energy spectrum caused by energy injection from such Sommerfeld-enhanced annihilation. Our results indicate that in the vicinity of resonances, associated with bound states, distortions can be large enough to be excluded by the upper limit |mu|<9.0x10^(-5) found by the COBE/FIRAS experiment.

  9. Annihilation Radiation Gauge for Relative Density and Multiphase Fluid Monitoring

    Directory of Open Access Journals (Sweden)

    Vidal A.

    2014-03-01

    The knowledge of multiphase flow parameters is important for the petroleum industry, specifically during transport in pipelines and in the networks connected to production wells. Crude oil flow is studied by Monte Carlo simulation and experimentally to determine the transient liquid phase in a laboratory system. The relative density and the time variation of the fluid phases are monitored employing a fast nuclear data acquisition setup that includes two large-volume BaF2 scintillator detectors coupled to an electronic chain, with data displayed in a LabView® environment. Fluid parameters are determined from the difference in the count rate of coincidence pulses. The operational characteristics of the equipment indicate that a 2 % deviation in the coincidence count rate (CCR) corresponds, on average, to a variation of 20 % in the liquid fraction of the multiphase fluid.

  10. Interplay of gaugino (co)annihilation processes in the context of a precise relic density calculation

    CERN Document Server

    Harz, Julia; Klasen, Michael; Kovařík, Karol; Steppeler, Patrick

    2015-01-01

    The latest Planck data allow one to determine the dark matter relic density with previously unparalleled precision. In order to achieve a comparable precision on the theory side, we have calculated the full $\mathcal{O}(\alpha_s)$ corrections to the most relevant annihilation and coannihilation processes for relic density calculations within the Minimal Supersymmetric Standard Model (MSSM). The interplay of these processes is discussed. The impact of the radiative corrections on the resulting relic density is found to be larger than the experimental uncertainty of the Planck data.

  11. Randomness as an Equilibrium. Potential and Probability Density

    OpenAIRE

    2002-01-01

    Randomness is viewed through an analogy between a physical quantity, the density of a gas, and a mathematical construct -- probability density. Boltzmann's deduction of the equilibrium distribution of an ideal gas placed in an external potential field then provides a way of viewing probability density from the perspective of forces/potentials hidden behind it.
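
    A one-line rendering of the analogy described above (my notation, not necessarily the paper's): for an ideal gas in an external potential $U(x)$ at temperature $T$, Boltzmann's equilibrium density is

    $p(x) \propto e^{-U(x)/k_B T}$, equivalently $U(x) = -k_B T \ln p(x) + \mathrm{const}$,

    so any probability density can be read as the equilibrium of a gas in the effective potential $-k_B T \ln p(x)$.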

  12. Hilbert Space of Probability Density Functions Based on Aitchison Geometry

    Institute of Scientific and Technical Information of China (English)

    J. J. EGOZCUE; J. L. DÍAZ-BARRERO; V. PAWLOWSKY-GLAHN

    2006-01-01

    The set of probability functions is a convex subset of L1 and it does not have a linear space structure when using ordinary sum and multiplication by real constants. Moreover, difficulties arise when dealing with distances between densities. The crucial point is that usual distances are not invariant under relevant transformations of densities. To overcome these limitations, Aitchison's ideas on compositional data analysis are used, generalizing perturbation and power transformation, as well as the Aitchison inner product, to operations on probability density functions with support on a finite interval. With these operations at hand, it is shown that the set of bounded probability density functions on finite intervals is a pre-Hilbert space. A Hilbert space of densities, whose logarithm is square-integrable, is obtained as the natural completion of the pre-Hilbert space.
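
    A minimal numerical sketch of the operations named in the abstract, under the assumption that perturbation, powering, and the inner product take their usual Aitchison forms extended to densities on a finite interval (closure by the integral); the grid and the two example densities are arbitrary.

    ```python
    import numpy as np

    a, b = 0.0, 1.0
    x = np.linspace(a, b, 2001)
    dx = x[1] - x[0]

    def integrate(f):
        """Trapezoidal quadrature on the uniform grid x."""
        return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

    def close(f):
        """Normalize a positive function on [a, b] to a probability density."""
        return f / integrate(f)

    def perturb(p, q):
        """Perturbation (the 'sum' of the structure): closure of the pointwise product."""
        return close(p * q)

    def power(alpha, p):
        """Powering (multiplication by a real scalar): closure of p**alpha."""
        return close(p ** alpha)

    def aitchison_inner(p, q):
        """Assumed form of the inner product, via centred-log representations."""
        clr_p = np.log(p) - integrate(np.log(p)) / (b - a)
        clr_q = np.log(q) - integrate(np.log(q)) / (b - a)
        return integrate(clr_p * clr_q)

    p = close(np.exp(-5.0 * (x - 0.3) ** 2))   # two example densities on [0, 1]
    q = close(x + 0.1)
    r = perturb(p, power(2.0, q))              # a combined operation in this space
    ip = aitchison_inner(p, q)
    ```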

  13. Impact of SUSY-QCD corrections on neutralino-stop co-annihilation and the neutralino relic density

    CERN Document Server

    Harz, J; Klasen, M; Kovarik, K; Boulc'h, Q Le

    2013-01-01

    We have calculated the full O(alpha_s) supersymmetric QCD corrections to neutralino-stop co-annihilation into electroweak vector and Higgs bosons within the Minimal Supersymmetric Standard Model (MSSM). We performed a parameter study within the phenomenological MSSM and demonstrated that the studied co-annihilation processes are phenomenologically relevant, especially in the context of a 126 GeV Higgs-like particle. By means of an example scenario we discuss the effect of the full next-to-leading order corrections on the co-annihilation cross section and show their impact on the predicted neutralino relic density. We demonstrate that the impact of these corrections on the cosmologically preferred region of parameter space is larger than the current experimental uncertainty of WMAP data.

  14. Failure Analysis of Wind Turbines by Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W.F.

    2013-01-01

    The aim of this study is to present an efficient and accurate method for estimation of the failure probability of wind turbine structures which work under turbulent wind load. The classical method for this is to fit one of the extreme value probability distribution functions to the extracted maxima. This is not practical due to its excessive computational load. This problem can alternatively be tackled if the evolution of the probability density function (PDF) of the response process can be realized. The evolutionary PDF can then be integrated on the boundaries of the problem. For this reason we propose to use the Probability Density Evolution Method (PDEM). PDEM can alternatively be used to obtain the distribution of the extreme values of the response process by simulation. This approach requires less computational effort than integrating the evolution of the PDF, but may be less accurate. In this paper we present...

  15. A Probability Density Function for Neutrino Masses and Mixings

    CERN Document Server

    Fortin, Jean-François; Marleau, Luc

    2016-01-01

    The anarchy principle leading to the see-saw ensemble is studied analytically with the usual tools of random matrix theory. The probability density function for the see-saw ensemble of $N\times N$ matrices is obtained in terms of a multidimensional integral. This integral involves all light neutrino masses, leading to a complicated probability density function. It is shown that the probability density function for the neutrino mixing angles and phases is the appropriate Haar measure. The decoupling of the light neutrino masses and neutrino mixings implies no correlation between the neutrino mass eigenstates and the neutrino mixing matrix, in contradiction with observations but in agreement with some of the claims found in the literature.

  16. Probability density function for neutrino masses and mixings

    Science.gov (United States)

    Fortin, Jean-François; Giasson, Nicolas; Marleau, Luc

    2016-12-01

    The anarchy principle leading to the seesaw ensemble is studied analytically with the usual tools of random matrix theory. The probability density function for the seesaw ensemble of N ×N matrices is obtained in terms of a multidimensional integral. This integral involves all light neutrino masses, leading to a complicated probability density function. It is shown that the probability density function for the neutrino mixing angles and phases is the appropriate Haar measure. The decoupling of the light neutrino masses and neutrino mixings implies no correlation between the neutrino mass eigenstates and the neutrino mixing matrix and leads to a loss of predictive power when comparing with observations. This decoupling is in agreement with some of the claims found in the literature.

  17. Probability density function modeling for sub-powered interconnects

    Science.gov (United States)

    Pater, Flavius; Amaricǎi, Alexandru

    2016-06-01

    This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit for the switching delay versus the probability of correct switching for sub-powered interconnects. We compare the three mathematical models with Monte Carlo simulations of interconnects for 45 nm CMOS technology supplied at 0.25 V.

  18. Field modulation in Na-incorporated Cu(In,Ga)Se2 (CIGS) polycrystalline films influenced by alloy-hardening and pair-annihilation probabilities.

    Science.gov (United States)

    Jeong, Yonkil; Kim, Chae-Woong; Park, Dong-Won; Jung, Seung Chul; Lee, Jongjin; Shim, Hee-Sang

    2011-11-07

    The influence of Na on Cu(In,Ga)Se2 (CIGS) solar cells was investigated. A gradient profile of the Na in the CIGS absorber layer can induce an electric field modulation and significantly strengthen the back surface field effect. This field modulation originates from a grain growth model introduced by a combination of alloy-hardening and pair-annihilation probabilities, wherein the Cu supply and Na diffusion together screen the driving force of the grain boundary motion (GBM) by alloy hardening, which indicates a specific GBM pinning by Cu and Na. The pair annihilation between the ubiquitously evolving GBMs has a coincident probability with the alloy-hardening event. PACS: 88.40.H-, 81.10.Aj, 81.40.Cd.

  19. An homotopy of isometries related to a probability density

    CERN Document Server

    Groux, Roland

    2011-01-01

    We study here a family of probability density functions, indexed by a real parameter and constructed from homographic relations between associated Stieltjes transforms. From the analysis of orthogonal polynomials we deduce a family of isometries related to the classical operators creating secondary polynomials, and we give an application to the explicit solution of specific integral equations.

  20. Visualization techniques for spatial probability density function data

    Directory of Open Access Journals (Sweden)

    Udeepta D Bordoloi

    2006-01-01

    Novel visualization methods are presented for spatial probability density function data. These are spatial datasets in which each pixel is a random variable that has multiple samples, the results of experiments on that random variable. We use clustering as a means to reduce the information contained in these datasets, and present two different ways of interpreting and clustering the data. The clustering methods are used on two datasets, and the results are discussed with the help of visualization techniques designed for spatial probability data.

  1. Estimation of probability densities using scale-free field theories.

    Science.gov (United States)

    Kinney, Justin B

    2014-07-01

    The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.

  2. Probability distribution functions in the finite density lattice QCD

    CERN Document Server

    Ejiri, S; Aoki, S; Kanaya, K; Saito, H; Hatsuda, T; Ohno, H; Umeda, T

    2012-01-01

    We study the phase structure of QCD at high temperature and density by lattice QCD simulations adopting a histogram method. We try to solve the problems which arise in the numerical study of the finite density QCD, focusing on the probability distribution function (histogram). As a first step, we investigate the quark mass dependence and the chemical potential dependence of the probability distribution function as a function of the Polyakov loop when all quark masses are sufficiently large, and study the properties of the distribution function. The effect from the complex phase of the quark determinant is estimated explicitly. The shape of the distribution function changes with the quark mass and the chemical potential. Through the shape of the distribution, the critical surface which separates the first order transition and crossover regions in the heavy quark region is determined for the 2+1-flavor case.

  3. Impact of SUSY-QCD corrections on neutralino-stop co-annihilation and the neutralino relic density

    Energy Technology Data Exchange (ETDEWEB)

    Harz, Julia [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Herrmann, Bjoern [Savoie Univ./CNRS, Annecy-le-Vieux (France). LAPTh; Klasen, Michael [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1; Kovarik, Karol [Karlsruher Institut fuer Technologie, Karlsruhe (Germany). Inst. fuer Theoretische Physik; Le Boulc'h, Quentin [Grenoble Univ./CNRS-IN2P3/INPG, Grenoble (France). Lab. de Physique Subatomique et de Cosmologie

    2013-02-15

    We have calculated the full O({alpha}{sub s}) supersymmetric QCD corrections to neutralino-stop coannihilation into electroweak vector and Higgs bosons within the Minimal Supersymmetric Standard Model (MSSM). We performed a parameter study within the phenomenological MSSM and demonstrated that the studied co-annihilation processes are phenomenologically relevant, especially in the context of a 126 GeV Higgs-like particle. By means of an example scenario we discuss the effect of the full next-to-leading order corrections on the co-annihilation cross section and show their impact on the predicted neutralino relic density. We demonstrate that the impact of these corrections on the cosmologically preferred region of parameter space is larger than the current experimental uncertainty of WMAP data.

  4. Can the relic density of self-interacting dark matter be due to annihilations into Standard Model particles?

    CERN Document Server

    Chu, Xiaoyong; Hambye, Thomas

    2016-01-01

    Motivated by the hypothesis that dark matter self-interactions provide a solution to the small-scale structure formation problems, we investigate the possibility that the relic density of a self-interacting dark matter candidate proceeds from the thermal freeze-out of annihilations into Standard Model particles. We find that scalar and Majorana dark matter in the mass range of $10-500$ MeV, coupled to a slightly heavier massive gauge boson, are the only possible candidates in agreement with multiple current experimental constraints. Here dark matter annihilations take place at a much slower rate than the self-interactions simply because the interaction connecting the Standard Model and the dark matter sectors is small. We also discuss prospects of establishing or excluding these two scenarios in future experiments.

  5. INTERACTIVE VISUALIZATION OF PROBABILITY AND CUMULATIVE DENSITY FUNCTIONS

    KAUST Repository

    Potter, Kristin

    2012-01-01

    The probability density function (PDF), and its corresponding cumulative density function (CDF), provide direct statistical insight into the characterization of a random process or field. Typically displayed as a histogram, one can infer probabilities of the occurrence of particular events. When examining a field over some two-dimensional domain in which at each point a PDF of the function values is available, it is challenging to assess the global (stochastic) features present within the field. In this paper, we present a visualization system that allows the user to examine two-dimensional data sets in which PDF (or CDF) information is available at any position within the domain. The tool provides a contour display showing the normed difference between the PDFs and an ansatz PDF selected by the user and, furthermore, allows the user to interactively examine the PDF at any particular position. Canonical examples of the tool are provided to help guide the reader into the mapping of stochastic information to visual cues along with a description of the use of the tool for examining data generated from an uncertainty quantification exercise accomplished within the field of electrophysiology.

  6. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or Probability of False Alarm P{sub FA} of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P{sub D}. This is summarized by the Receiver Operating Characteristic Curve (ROC) [10, 11], which is actually a family of curves depicting P{sub D} vs. P{sub FA} parameterized by varying levels of signal to noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P{sub FA} and develop a ROC curve (P{sub D} vs. decision threshold r{sub 0}) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
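
    A schematic illustration of the CFAR thresholding idea described above, not the LASI algorithms themselves: given matched-filter scores computed on plume-free background pixels, the decision threshold r{sub 0} for a desired P{sub FA} can be taken as the corresponding upper quantile of the estimated background score distribution. The score sample below is a placeholder.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder: matched-filter outputs on background-only hyperspectral pixels.
    background_scores = rng.standard_normal(100_000)

    def cfar_threshold(scores, p_fa):
        """Empirical CFAR threshold: exceeded by a fraction p_fa of background scores."""
        return np.quantile(scores, 1.0 - p_fa)

    r0 = cfar_threshold(background_scores, p_fa=1e-3)
    false_alarms = background_scores > r0   # exceedance fraction ~ 1e-3 by construction
    ```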

  7. Accurate photometric redshift probability density estimation - method comparison and application

    CERN Document Server

    Rau, Markus Michael; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-01-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which vastly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, that can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitudes less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular Neural Network code (ANNz). In our use case, this improvemen...

  8. Efficiency issues related to probability density function comparison

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.M.; Cannon, M.; Barros, J.E.

    1996-03-01

    The CANDID project (Comparison Algorithm for Navigating Digital Image Databases) employs probability density functions (PDFs) of localized feature information to represent the content of an image for search and retrieval purposes. A similarity measure between PDFs is used to identify database images that are similar to a user-provided query image. Unfortunately, signature comparison involving PDFs is a very time-consuming operation. In this paper, we look into some efficiency considerations when working with PDFs. Since PDFs can take on many forms, we look into tradeoffs between accurate representation and efficiency of manipulation for several data sets. In particular, we typically represent each PDF as a Gaussian mixture (e.g. as a weighted sum of Gaussian kernels) in the feature space. We find that by constraining all Gaussian kernels to have principal axes that are aligned to the natural axes of the feature space, computations involving these PDFs are simplified. We can also constrain the Gaussian kernels to be hyperspherical rather than hyperellipsoidal, simplifying computations even further, and yielding an order of magnitude speedup in signature comparison. This paper illustrates the tradeoffs encountered when using these constraints.
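
    A small sketch of the simplification the abstract describes, with made-up numbers: constraining each Gaussian kernel to an axis-aligned (diagonal) or hyperspherical (single scalar variance) covariance reduces the per-kernel evaluation to elementwise operations, with no covariance factorization required.

    ```python
    import numpy as np

    def diag_gauss_logpdf(x, mean, var):
        """Log-density of an axis-aligned kernel; var is a vector of per-axis variances."""
        d = x - mean
        return -0.5 * (np.sum(np.log(2.0 * np.pi * var)) + np.sum(d * d / var))

    def spherical_gauss_logpdf(x, mean, sigma2):
        """Log-density of a hyperspherical kernel; sigma2 is a single scalar variance."""
        d = x - mean
        return -0.5 * (x.size * np.log(2.0 * np.pi * sigma2) + np.dot(d, d) / sigma2)

    # Toy two-kernel signature in a 3-D feature space (all values invented).
    weights = np.array([0.6, 0.4])
    means = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.5]])
    sigma2 = np.array([0.5, 1.5])

    x = np.array([0.2, 0.1, -0.3])
    pdf_value = sum(w * np.exp(spherical_gauss_logpdf(x, m, s))
                    for w, m, s in zip(weights, means, sigma2))
    ```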

  9. Interpolation of probability densities in ENDF and ENDL

    Energy Technology Data Exchange (ETDEWEB)

    Hedstrom, G

    2006-01-27

    Suppose that we are given two probability densities p{sub 0}(E{prime}) and p{sub 1}(E{prime}) for the energy E{prime} of an outgoing particle, p{sub 0}(E{prime}) corresponding to energy E{sub 0} of the incident particle and p{sub 1}(E{prime}) corresponding to incident energy E{sub 1}. If E{sub 0} < E{sub 1}, the problem is how to define p{sub {alpha}}(E{prime}) for intermediate incident energies E{sub {alpha}} = (1 - {alpha})E{sub 0} + {alpha}E{sub 1} with 0 < {alpha} < 1. In this note the author considers three ways to do it. The author begins with unit-base interpolation, which is standard in ENDL and is sometimes used in ENDF, then describes the equiprobable bins used by some Monte Carlo codes, and closes with a discussion of interpolation by corresponding points, which is commonly used in ENDF.
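
    A compact sketch of unit-base interpolation as it is commonly described (my implementation, not the ENDF/ENDL processing codes): each density is rescaled onto the unit interval, the rescaled densities and the domain endpoints are interpolated linearly in incident energy, and the result is mapped back so that normalization is preserved.

    ```python
    import numpy as np

    def unit_base_interpolate(alpha, E0, p0, E1, p1, n=201):
        """Interpolate between outgoing-energy densities p0 on grid E0 and p1 on grid E1
        for a fractional incident energy alpha in [0, 1]."""
        x = np.linspace(0.0, 1.0, n)                # reduced outgoing energy

        # Rescale each density onto [0, 1]; q0 and q1 are normalized on the unit interval.
        q0 = np.interp(E0[0] + x * (E0[-1] - E0[0]), E0, p0) * (E0[-1] - E0[0])
        q1 = np.interp(E1[0] + x * (E1[-1] - E1[0]), E1, p1) * (E1[-1] - E1[0])

        # Linear interpolation of the reduced densities and of the domain endpoints.
        q = (1.0 - alpha) * q0 + alpha * q1
        lo = (1.0 - alpha) * E0[0] + alpha * E1[0]
        hi = (1.0 - alpha) * E0[-1] + alpha * E1[-1]

        # Map back to physical outgoing energy; the Jacobian keeps the result normalized.
        return lo + x * (hi - lo), q / (hi - lo)

    # Example: halfway between a uniform density on [0, 2] and one on [0, 4].
    E0 = np.linspace(0.0, 2.0, 101); p0 = np.full_like(E0, 0.5)
    E1 = np.linspace(0.0, 4.0, 101); p1 = np.full_like(E1, 0.25)
    E, p = unit_base_interpolate(0.5, E0, p0, E1, p1)   # uniform density on [0, 3]
    ```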

  10. Numerical methods for high-dimensional probability density function equations

    Energy Technology Data Exchange (ETDEWEB)

    Cho, H. [Department of Mathematics, University of Maryland College Park, College Park, MD 20742 (United States); Venturi, D. [Department of Applied Mathematics and Statistics, University of California Santa Cruz, Santa Cruz, CA 95064 (United States); Karniadakis, G.E., E-mail: gk@dam.brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)

    2016-01-15

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker–Planck and Dostupov–Pugachev equations), random wave theory (Malakhov–Saichev equations) and coarse-grained stochastic systems (Mori–Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  11. Numerical methods for high-dimensional probability density function equations

    Science.gov (United States)

    Cho, H.; Venturi, D.; Karniadakis, G. E.

    2016-01-01

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  12. Interactive design of probability density functions for shape grammars

    KAUST Repository

    Dang, Minh

    2015-11-02

    A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density function (pdf) over such a shape space and to sample models according to the designed pdf. First, we propose a user interface that enables a user to quickly provide preference scores for selected shapes and suggest sampling strategies to decide which models to present to the user to evaluate. Second, we propose a novel kernel function to encode the similarity between two procedural models. Third, we propose a framework to interpolate user preference scores by combining multiple techniques: function factorization, Gaussian process regression, autorelevance detection, and l1 regularization. Fourth, we modify the original grammars to generate models with a pdf proportional to the user preference scores. Finally, we provide evaluations of our user interface and framework parameters and a comparison to other exploratory modeling techniques using modeling tasks in five example shape spaces: furniture, low-rise buildings, skyscrapers, airplanes, and vegetation.

  13. Parameterizing deep convection using the assumed probability density function method

    Directory of Open Access Journals (Sweden)

    R. L. Storer

    2014-06-01

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  14. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions.

    Science.gov (United States)

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  15. Assumed Probability Density Functions for Shallow and Deep Convection

    Directory of Open Access Journals (Sweden)

    Steven K Krueger

    2010-10-01

    The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PDF families are based on the double Gaussian form and the remaining two are the single Gaussian and a Double Delta Function (analogous to a mass flux model). The assumed PDF method is tested for grid sizes as small as 0.4 km to as large as 204.8 km. In addition, studies are performed for PDF sensitivity to errors in the input moments and for how well the PDFs diagnose some higher-order moments. In general, the double Gaussian PDFs more accurately represent SGS cloud structure and turbulence moments in the boundary layer compared to the single Gaussian and Double Delta Function PDFs for the range of grid sizes tested. This is especially true for small SGS cloud fractions. While the most complex PDF, Lewellen-Yoh, better represents shallow convective cloud properties (cloud fraction and liquid water mixing ratio) compared to the less complex Analytic Double Gaussian 1 PDF, there appears to be no advantage in implementing Lewellen-Yoh for deep convection. However, the Analytic Double Gaussian 1 PDF better represents the liquid water flux, is less sensitive to errors in the input moments, and diagnoses higher order moments more accurately. Between the Lewellen-Yoh and Analytic Double Gaussian 1 PDFs, it appears that neither family is distinctly better at representing cloudy layers. However, due to the reduced computational cost and fairly robust results, it appears that the Analytic Double Gaussian 1 PDF could be an ideal family for SGS cloud and turbulence
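
    As a schematic illustration of how an assumed PDF is used diagnostically (this is not the Lewellen-Yoh or Analytic Double Gaussian 1 form), the sketch below takes a double-Gaussian PDF of a conserved moisture variable and diagnoses the subgrid cloud fraction as the probability mass above saturation; all parameter values are invented.

    ```python
    from scipy.stats import norm

    # Invented double-Gaussian parameters for total-water mixing ratio (g/kg).
    weights = (0.7, 0.3)
    means = (7.0, 9.5)
    stds = (0.6, 1.2)
    q_sat = 9.0                      # saturation mixing ratio (g/kg), also invented

    # Subgrid cloud fraction = probability that total water exceeds saturation.
    cloud_fraction = sum(w * norm.sf(q_sat, loc=m, scale=s)
                         for w, m, s in zip(weights, means, stds))
    ```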

  16. INVESTIGATION OF MICROSTRUCTURE AND CONDUCTIVE MECHANISM OF HIGH DENSITY POLYETHYLENE/CARBON BLACK PARTICLE COMPOSITE BY POSITRON ANNIHILATION LIFETIME SPECTROSCOPY

    Institute of Scientific and Technical Information of China (English)

    Yang-mei Fan; Xian-feng Zhang; Bang-jiao Ye; Xian-yi Zhou; Hui-min Weng; Jiang-feng Du; Rong-dian Han; Shao-jin Jia; Zhi-cheng Zhang

    2002-01-01

    The microstructure and conductive mechanism of a high density polyethylene/carbon black (HDPE/CB) composite were investigated by positron annihilation lifetime spectroscopy (PALS). PALS was measured for two series of samples, one with various CB contents in the composites and the other with various γ-irradiation doses in an HDPE/CB composite containing 20 wt% CB. It was found that CB particles distribute in the amorphous regions, the critical CB content in the HDPE/CB composite is about 16.7 wt%, and the suitable γ-irradiation dose for improving the conductive behavior of the HDPE/CB composite is about 20 Mrad. The result observed for the second set of samples suggests that γ-irradiation causes not only cross-linking in the amorphous regions but also destruction of part of the crystalline structure. Therefore, a suitable irradiation dose, about 20 Mrad, can induce sufficient cross-linking in the amorphous regions without enhancing the decomposition of the crystalline structure, so that the positive temperature coefficient (PTC) effect remains while the negative temperature coefficient (NTC) effect is suppressed. A new interpretation of the conductive mechanism, which might provide a more detailed explanation of the PTC and NTC effects, has been proposed.

  17. Density Profiles of CDM Microhalos and their Implications for Annihilation Boost Factors

    CERN Document Server

    Anderhalden, Donnino

    2013-01-01

    In a standard cold dark matter (CDM) cosmology, microhalos at the CDM cutoff scale are the first and smallest objects expected to form in the universe. Here we present results of high resolution simulations of three representative roughly Earth-mass microhalos in order to determine their inner density profile. We find that CDM microhalos in simulations without a cutoff in the power spectrum roughly follow the NFW density profile, just like the much larger CDM halos on galaxy and galaxy cluster scales. But having a cutoff in the initial power spectrum at a typical neutralino free streaming scale of $10^{-7} M_{\odot}$ makes their inner density profiles considerably steeper, i.e. $\rho \propto r^{-(1.3-1.4)}$, in good agreement with the results from Ishiyama et al. (2010). An extrapolation of the halo and subhalo mass functions down to the cutoff scale indicates that microhalos are extremely abundant throughout the present day dark matter distribution and might contribute significantly to indirect dark matter d...

  18. Multispecies pair annihilation reactions.

    Science.gov (United States)

    Deloubrière, Olivier; Hilhorst, Henk J; Täuber, Uwe C

    2002-12-16

    We consider diffusion-limited reactions A(i) + A(j) → ∅ (1 ≤ i < j ≤ q) in d dimensions. For q < ∞ and d ≥ 2, we argue that the asymptotic density decay for such mutual annihilation processes with equal rates and initial densities is the same as for single-species pair annihilation A + A → ∅. In d = 1, however, particle segregation occurs for all q < ∞. The total density decays according to a q-dependent power law, ρ(t) ∼ t^(-α(q)). Within a simplified version of the model, α(q) = (q-1)/2q can be determined exactly. Our findings are supported through Monte Carlo simulations.
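
    A quick check using only the exponent quoted above: $\alpha(q) = (q-1)/2q$ gives $\alpha(2) = 1/4$, $\alpha(3) = 1/3$, and $\alpha(q) \to 1/2$ as $q \to \infty$, so in $d = 1$ the two-species case decays as $t^{-1/4}$ while the many-species limit recovers the $t^{-1/2}$ decay of single-species pair annihilation.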

  19. Particle number and probability density functional theory and A-representability.

    Science.gov (United States)

    Pan, Xiao-Yin; Sahni, Viraht

    2010-04-28

    In Hohenberg-Kohn density functional theory, the energy E is expressed as a unique functional of the ground state density rho(r): E = E[rho] with the internal energy component F(HK)[rho] being universal. Knowledge of the functional F(HK)[rho] by itself, however, is insufficient to obtain the energy: the particle number N is primary. By emphasizing this primacy, the energy E is written as a nonuniversal functional of N and probability density p(r): E = E[N,p]. The set of functions p(r) satisfies the constraints of normalization to unity and non-negativity, exists for each N; N = 1, ..., infinity, and defines the probability density or p-space. A particle number N and probability density p(r) functional theory is constructed. Two examples for which the exact energy functionals E[N,p] are known are provided. The concept of A-representability is introduced, by which it is meant the set of functions Psi(p) that leads to probability densities p(r) obtained as the quantum-mechanical expectation of the probability density operator, and which satisfies the above constraints. We show that the set of functions p(r) of p-space is equivalent to the A-representable probability density set. We also show via the Harriman and Gilbert constructions that the A-representable and N-representable probability density p(r) sets are equivalent.

  20. Constraints on an annihilation signal from a core of constant dark matter density around the milky way center with H.E.S.S.

    Science.gov (United States)

    Abramowski, A; Aharonian, F; Ait Benkhali, F; Akhperjanian, A G; Angüner, E O; Backes, M; Balenderan, S; Balzer, A; Barnacka, A; Becherini, Y; Becker Tjus, J; Berge, D; Bernhard, S; Bernlöhr, K; Birsin, E; Biteau, J; Böttcher, M; Boisson, C; Bolmont, J; Bordas, P; Bregeon, J; Brun, F; Brun, P; Bryan, M; Bulik, T; Carrigan, S; Casanova, S; Chadwick, P M; Chakraborty, N; Chalme-Calvet, R; Chaves, R C G; Chrétien, M; Colafrancesco, S; Cologna, G; Conrad, J; Couturier, C; Cui, Y; Davids, I D; Degrange, B; Deil, C; deWilt, P; Djannati-Ataï, A; Domainko, W; Donath, A; Drury, L O'C; Dubus, G; Dutson, K; Dyks, J; Dyrda, M; Edwards, T; Egberts, K; Eger, P; Espigat, P; Farnier, C; Fegan, S; Feinstein, F; Fernandes, M V; Fernandez, D; Fiasson, A; Fontaine, G; Förster, A; Füßling, M; Gabici, S; Gajdus, M; Gallant, Y A; Garrigoux, T; Giavitto, G; Giebels, B; Glicenstein, J F; Gottschall, D; Grondin, M-H; Grudzińska, M; Hadasch, D; Häffner, S; Hahn, J; Harris, J; Heinzelmann, G; Henri, G; Hermann, G; Hervet, O; Hillert, A; Hinton, J A; Hofmann, W; Hofverberg, P; Holler, M; Horns, D; Ivascenko, A; Jacholkowska, A; Jahn, C; Jamrozy, M; Janiak, M; Jankowsky, F; Jung-Richardt, I; Kastendieck, M A; Katarzyński, K; Katz, U; Kaufmann, S; Khélifi, B; Kieffer, M; Klepser, S; Klochkov, D; Kluźniak, W; Kolitzus, D; Komin, Nu; Kosack, K; Krakau, S; Krayzel, F; Krüger, P P; Laffon, H; Lamanna, G; Lefaucheur, J; Lefranc, V; Lemière, A; Lemoine-Goumard, M; Lenain, J-P; Lohse, T; Lopatin, A; Lu, C-C; Marandon, V; Marcowith, A; Marx, R; Maurin, G; Maxted, N; Mayer, M; McComb, T J L; Méhault, J; Meintjes, P J; Menzler, U; Meyer, M; Mitchell, A M W; Moderski, R; Mohamed, M; Morå, K; Moulin, E; Murach, T; de Naurois, M; Niemiec, J; Nolan, S J; Oakes, L; Odaka, H; Ohm, S; Opitz, B; Ostrowski, M; Oya, I; Panter, M; Parsons, R D; Paz Arribas, M; Pekeur, N W; Pelletier, G; Petrucci, P-O; Peyaud, B; Pita, S; Poon, H; Pühlhofer, G; Punch, M; Quirrenbach, A; Raab, S; Reichardt, I; Reimer, A; Reimer, O; Renaud, M; de Los Reyes, R; Rieger, F; Romoli, C; Rosier-Lees, S; Rowell, G; Rudak, B; Rulten, C B; Sahakian, V; Salek, D; Sanchez, D A; Santangelo, A; Schlickeiser, R; Schüssler, F; Schulz, A; Schwanke, U; Schwarzburg, S; Schwemmer, S; Sol, H; Spanier, F; Spengler, G; Spies, F; Stawarz, Ł; Steenkamp, R; Stegmann, C; Stinzing, F; Stycz, K; Sushch, I; Tavernet, J-P; Tavernier, T; Taylor, A M; Terrier, R; Tluczykont, M; Trichard, C; Valerius, K; van Eldik, C; van Soelen, B; Vasileiadis, G; Veh, J; Venter, C; Viana, A; Vincent, P; Vink, J; Völk, H J; Volpe, F; Vorster, M; Vuillaume, T; Wagner, S J; Wagner, P; Wagner, R M; Ward, M; Weidinger, M; Weitzel, Q; White, R; Wierzcholska, A; Willmann, P; Wörnlein, A; Wouters, D; Yang, R; Zabalza, V; Zaborov, D; Zacharias, M; Zdziarski, A A; Zech, A; Zechlin, H-S

    2015-02-27

    An annihilation signal of dark matter is searched for from the central region of the Milky Way. Data acquired in dedicated on-off observations of the Galactic center region with H.E.S.S. are analyzed for this purpose. No significant signal is found in a total of ∼9  h of on-off observations. Upper limits on the velocity averaged cross section, ⟨σv⟩, for the annihilation of dark matter particles with masses in the range of ∼300  GeV to ∼10  TeV are derived. In contrast to previous constraints derived from observations of the Galactic center region, the constraints that are derived here apply also under the assumption of a central core of constant dark matter density around the center of the Galaxy. Values of ⟨σv⟩ that are larger than 3×10^{-24}  cm^{3}/s are excluded for dark matter particles with masses between ∼1 and ∼4  TeV at 95% C.L. if the radius of the central dark matter density core does not exceed 500 pc. This is the strongest constraint that is derived on ⟨σv⟩ for annihilating TeV mass dark matter without the assumption of a centrally cusped dark matter density distribution in the search region.

  1. Non-Maxwellian probability density function of fibers with lumped polarization mode dispersion elements.

    Science.gov (United States)

    Antonelli, Cristian; Mecozzi, Antonio

    2004-05-15

    We give an analytical expression for the probability density function of the differential group delay for a concatenation of Maxwellian fiber sections and an arbitrary number of lumped elements with constant and isotropically oriented birefringence. When the contribution of the average squared of the constant birefringence elements is a significant fraction of the total, we show that the outage probability can be significantly overestimated if the probability density function of the differential group delay is approximated by a Maxwellian distribution.

  2. On the discretization of probability density functions and the continuous Rényi entropy

    Indian Academy of Sciences (India)

    Diógenes Campos

    2015-12-01

    On the basis of the second mean-value theorem (SMVT) for integrals, a discretization method is proposed with the aim of representing the expectation value of a function with respect to a probability density function in terms of discrete probability theory. This approach is applied to the continuous Rényi entropy, and it is established that a discrete probability distribution can be associated to it in a very natural way. The probability density function for the linear superposition of two coherent states is used to develop a representative example.

  3. probably

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    [Usage examples] 1. He can probably tell us the truth. 2. Will it rain this afternoon? Probably. [Explanation] Used as an adverb meaning "probably, perhaps"; it indicates a strong likelihood and usually expresses a positive inference or judgment based on the present situation;

  4. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    Science.gov (United States)

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  5. Moment-independent importance measure of basic random variable and its probability density evolution solution

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    To analyze the effect of a basic variable on the failure probability in reliability analysis, a moment-independent importance measure of the basic random variable is proposed, and its properties are analyzed and verified. Based on this work, the importance measure of the basic variable on the failure probability is compared with that on the distribution density of the response. By use of the probability density evolution method, a solution is established to solve the two importance measures, which can efficiently avoid the difficulty in solving the importance measures. Some numerical examples and engineering examples are used to demonstrate the proposed importance measure on the failure probability and that on the distribution density of the response. The results show that the proposed importance measure can effectively describe the effect of the basic variable on the failure probability from the distribution density of the basic variable. Additionally, the results show that the established solution based on probability density evolution is efficient for the importance measures.

  6. The force distribution probability function for simple fluids by density functional theory.

    Science.gov (United States)

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and the probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF^2), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
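
    Under the assumption (mine, not stated explicitly above) that P(F) is the density of the three-dimensional force vector and W(F) that of its magnitude, the quoted Gaussian form normalizes as

    $P(\mathbf{F}) = (A/\pi)^{3/2} e^{-AF^2}$,  $W(F) = 4\pi F^2 (A/\pi)^{3/2} e^{-AF^2}$,

    with $\langle F^2 \rangle = 3/(2A)$, so the single constant A fixes the whole distribution.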

  7. Linearized Controller Design for the Output Probability Density Functions of Non-Gaussian Stochastic Systems

    Institute of Scientific and Technical Information of China (English)

    Pousga Kabore; Husam Baki; Hong Yue; Hong Wang

    2005-01-01

    This paper presents a linearized approach for the controller design of the shape of output probability density functions for general stochastic systems. A square root approximation to an output probability density function is realized by a set of B-spline functions. This generally produces a nonlinear state space model for the weights of the B-spline approximation. A linearized model is therefore obtained and embedded into a performance function that measures the tracking error of the output probability density function with respect to a given distribution. By using this performance function as a Lyapunov function for the closed loop system, a feedback control input has been obtained which guarantees closed loop stability and realizes perfect tracking. The algorithm described in this paper has been tested on a simulated example and desired results have been achieved.
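
    A minimal sketch of the square-root B-spline expansion mentioned above (illustration only; the linearization and control design are not reproduced): the square root of a target output PDF is fitted in a clamped cubic B-spline basis, and the squared, renormalized expansion recovers a non-negative approximation of the PDF. The grid, knots, and target shape are arbitrary.

    ```python
    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.interpolate import make_lsq_spline

    x = np.linspace(0.0, 1.0, 401)
    target = np.exp(-80.0 * (x - 0.35) ** 2) + 0.5 * np.exp(-60.0 * (x - 0.75) ** 2)
    target /= trapezoid(target, x)                 # a made-up output PDF on [0, 1]

    k = 3                                          # cubic B-splines
    interior = np.linspace(0.0, 1.0, 9)[1:-1]      # interior knots
    knots = np.r_[(x[0],) * (k + 1), interior, (x[-1],) * (k + 1)]

    sqrt_fit = make_lsq_spline(x, np.sqrt(target), knots, k)   # weights of the sqrt expansion
    approx = sqrt_fit(x) ** 2                      # squaring guarantees non-negativity
    approx /= trapezoid(approx, x)                 # renormalize to a proper PDF
    ```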

  8. Constraints on an Annihilation Signal from a Core of Constant Dark Matter Density around the Milky Way Center with H.E.S.S

    CERN Document Server

    Abramowski, A; et al.

    2015-01-01

    An annihilation signal of dark matter is searched for from the central region of the Milky Way. Data acquired in dedicated ON/OFF observations of the Galactic center region with H.E.S.S. are analyzed for this purpose. No significant signal is found in a total of $\sim 9$ h of ON/OFF observations. Upper limits on the velocity averaged cross section, $\langle \sigma v \rangle$, for the annihilation of dark matter particles with masses in the range of $\sim 300$ GeV to $\sim 10$ TeV are derived. In contrast to previous constraints derived from observations of the Galactic center region, the constraints that are derived here apply also under the assumption of a central core of constant dark matter density around the center of the Galaxy. Values of $\langle \sigma v \rangle$ that are larger than $3\cdot 10^{-24}\:\mathrm{cm^3/s}$ are excluded for dark matter particles with masses between $\sim 1$ and $\sim 4$ TeV at 95% CL if the radius of the central dark matter density core does not exceed $500$ pc. This is the strongest constraint that is derived on $\langle \sigma v \rangle$ for...

  9. Modelling the Probability Density Function of IPTV Traffic Packet Delay Variation

    Directory of Open Access Journals (Sweden)

    Michal Halas

    2012-01-01

    Full Text Available This article deals with modelling the probability density function of IPTV traffic packet delay variation. Such a model is useful for efficient de-jitter buffer estimation. When an IP packet travels across a network, it experiences delay and delay variation. This variation is caused by routing, queueing systems and other influences such as the processing delay of the network nodes. When we try to separate these (at least three) types of delay variation, we need a way to measure them separately. This work focuses on the delay variation caused by queueing systems, which has the main influence on the form of the probability density function.
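
    A rough sketch of how such a density feeds a de-jitter buffer estimate, using hypothetical gamma-distributed queueing delays rather than measured IPTV traffic: build the empirical density of the delay variation and size the buffer at a high percentile.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical one-way delays (ms): fixed propagation delay plus queueing jitter
      delays = 20.0 + rng.gamma(shape=2.0, scale=3.0, size=50000)
      pdv = delays - delays.min()            # packet delay variation relative to the fastest packet

      # Empirical probability density of the delay variation
      pdf, edges = np.histogram(pdv, bins=100, density=True)

      # A de-jitter buffer sized at the 99.9th percentile absorbs almost all of the variation
      buffer_ms = np.percentile(pdv, 99.9)
      late = np.mean(pdv > buffer_ms)
      print(f"suggested de-jitter buffer ≈ {buffer_ms:.1f} ms, late-packet ratio ≈ {late:.4f}")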

  10. Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities

    CERN Document Server

    Kinney, Justin B

    2014-01-01

    Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.

  11. The influence of phonotactic probability and neighborhood density on children's production of newly learned words.

    Science.gov (United States)

    Heisler, Lori; Goffman, Lisa

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were non-referential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was attached through fast mapping). Two methods of analysis were included: (1) kinematic variability of speech movement patterning; and (2) measures of segmental accuracy. Results showed that phonotactic frequency influenced the stability of movement patterning whereas neighborhood density influenced phoneme accuracy. Motor learning was observed in both non-referential and referential novel words. Forms with low phonotactic probability and low neighborhood density showed a word learning effect when a referent was assigned during fast mapping. These results elaborate on and specify the nature of interactivity observed across lexical, phonological, and articulatory domains.

  12. A new formulation of the probability density function in random walk models for atmospheric dispersion

    DEFF Research Database (Denmark)

    Falk, Anne Katrine Vinther; Gryning, Sven-Erik

    1997-01-01

    In this model for atmospheric dispersion, particles are simulated by the Langevin equation, which is a stochastic differential equation. It uses the probability density function (PDF) of the vertical velocity fluctuations as input. The PDF is constructed as an expansion in Hermite polynomials...
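
    A minimal sketch of the underlying Langevin step, assuming homogeneous turbulence and a plain Gaussian velocity PDF (the paper's Hermite-polynomial expansion for skewed PDFs is not reproduced; all parameter values are illustrative):

      import numpy as np

      rng = np.random.default_rng(3)

      # Homogeneous Langevin step for the vertical velocity fluctuation w:
      #   dw = -(w / T_L) dt + sqrt(2 sigma_w^2 / T_L) dW
      sigma_w, T_L, dt = 0.5, 100.0, 1.0          # m/s, s, s
      n_steps, n_particles = 3600, 5000
      w = rng.normal(0.0, sigma_w, n_particles)
      z = np.full(n_particles, 50.0)              # release height (m)

      for _ in range(n_steps):
          dW = rng.normal(0.0, np.sqrt(dt), n_particles)
          w += -(w / T_L) * dt + np.sqrt(2.0 * sigma_w**2 / T_L) * dW
          z += w * dt

      print(f"plume spread after one hour: std(z) ≈ {z.std():.0f} m")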

  13. Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers

    Science.gov (United States)

    MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara

    2013-01-01

    Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…

  14. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    Science.gov (United States)

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  15. Use of ELVIS II platform for random process modelling and analysis of its probability density function

    Science.gov (United States)

    Maslennikova, Yu. S.; Nugmanov, I. S.

    2016-08-01

    The problem of probability density function estimation for a random process is one of the most common in practice. There are several methods to solve this problem. The presented laboratory work uses methods of mathematical statistics to detect patterns in realizations of a random process. On the basis of ergodic theory, we construct an algorithm for estimating the univariate probability density function of a random process. Correlation analysis of the realizations is applied to estimate the necessary sample size and observation time. Hypothesis testing for two probability distributions (normal and Cauchy) is performed on the experimental data using the χ2 criterion. To facilitate understanding and clarity of the problem solved, we use the ELVIS II platform and the LabVIEW software package, which allow us to make the necessary calculations, display the results of the experiment and, most importantly, control the experiment. At the same time, students are introduced to the LabVIEW software package and its capabilities.
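
    A possible software counterpart of the exercise, sketched in Python instead of LabVIEW: estimate the density with a histogram and compare the normal and Cauchy hypotheses with a chi-square statistic (the sample, bin count and distributions are illustrative assumptions).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      x = rng.normal(2.0, 0.8, 4000)              # stand-in for a sampled random process

      # Histogram-based estimate of the univariate probability density
      counts, edges = np.histogram(x, bins=30)

      def chi2_stat(dist, params):
          # Expected counts per bin under the candidate distribution
          expected = len(x) * np.diff(dist.cdf(edges, *params))
          mask = expected > 5                      # usual rule of thumb for the chi-square test
          return np.sum((counts[mask] - expected[mask])**2 / expected[mask])

      for name, dist in [("normal", stats.norm), ("Cauchy", stats.cauchy)]:
          params = dist.fit(x)
          print(f"{name:7s} chi-square statistic: {chi2_stat(dist, params):.1f}")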

  16. Analysis of 2-d ultrasound cardiac strain imaging using joint probability density functions.

    Science.gov (United States)

    Ma, Chi; Varghese, Tomy

    2014-06-01

    Ultrasound frame rates play a key role in accurate cardiac deformation tracking. Insufficient frame rates lead to an increase in signal de-correlation artifacts, resulting in erroneous displacement and strain estimation. Joint probability density distributions generated from estimated axial strain and its associated signal-to-noise ratio provide a useful approach to assess the minimum frame rate requirements. Previous reports have demonstrated that bi-modal distributions in the joint probability density indicate inaccurate strain estimation over a cardiac cycle. In this study, we utilize similar analysis to evaluate a 2-D multi-level displacement tracking and strain estimation algorithm for cardiac strain imaging. The effects of different frame rates and final kernel dimensions, along with a comparison of radio frequency and envelope based processing, are evaluated using echo signals derived from a 3-D finite element cardiac model and five healthy volunteers. Cardiac simulation model analysis demonstrates that the minimum frame rate required to obtain accurate joint probability distributions for the signal-to-noise ratio and strain, for a final kernel dimension of 1 λ by 3 A-lines, was around 42 Hz for radio frequency signals. On the other hand, even a frame rate of 250 Hz with envelope signals did not replicate the ideal joint probability distribution. For the volunteer study, clinical data were acquired only at a 34 Hz frame rate, which appears to be sufficient for radio frequency analysis. We also show that an increase in the final kernel dimensions significantly affects the strain probability distribution and joint probability density function generated, with a smaller effect on the variation in the accumulated mean strain estimated over a cardiac cycle. Our results demonstrate that radio frequency frame rates currently achievable on clinical cardiac ultrasound systems are sufficient for accurate analysis of the strain probability distribution, when a multi-level 2-D

  17. Evolving Molecular Cloud Structure and the Column Density Probability Distribution Function

    CERN Document Server

    Ward, Rachel L; Sills, Alison

    2014-01-01

    The structure of molecular clouds can be characterized with the probability distribution function (PDF) of the mass surface density. In particular, the properties of the distribution can reveal the nature of the turbulence and star formation present inside the molecular cloud. In this paper, we explore how these structural characteristics evolve with time and also how they relate to various cloud properties as measured from a sample of synthetic column density maps of molecular clouds. We find that, as a cloud evolves, the peak of its column density PDF will shift to surface densities below the observational threshold for detection, resulting in an underlying lognormal distribution which has been effectively lost at late times. Our results explain why certain observations of actively star-forming, dynamically older clouds, such as the Orion molecular cloud, do not appear to have any evidence of a lognormal distribution in their column density PDFs. We also study the evolution of the slope and deviation point ...
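
    The lognormal description used here can be illustrated with synthetic column densities (all numbers below are illustrative, not taken from the paper): the PDF variable is η = ln(N/⟨N⟩), its width is read off directly, and the effect of a detection threshold on the observable part of the distribution is shown.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic column densities with a lognormal body (turbulence-dominated cloud)
      N_col = np.exp(rng.normal(np.log(1e21), 0.6, 100000))      # cm^-2, illustrative values

      # Column density PDFs are usually expressed in eta = ln(N / <N>)
      eta = np.log(N_col / N_col.mean())
      print(f"lognormal width sigma_eta ≈ {eta.std():.2f}")

      # An observational detection threshold can hide the PDF peak at late times
      threshold = 5e20                                            # hypothetical survey limit (cm^-2)
      print(f"fraction of sightlines above the threshold: {np.mean(N_col > threshold):.2f}")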

  18. Kernel density estimation and marginalized-particle based probability hypothesis density filter for multi-target tracking

    Institute of Scientific and Technical Information of China (English)

    张路平; 王鲁平; 李飚; 赵明

    2015-01-01

    In order to improve the performance of the particle-filter (PF) based probability hypothesis density (PHD) algorithm in terms of number estimation and state extraction of multiple targets, a new probability hypothesis density filter algorithm based on marginalized particles and kernel density estimation is proposed, which utilizes the idea of the marginalized particle filter to enhance the estimating performance of the PHD. The state variables are decomposed into linear and non-linear parts. The particle filter is adopted to predict and estimate the nonlinear states of the multi-target after dimensionality reduction, while the Kalman filter is applied to estimate the linear parts under the linear Gaussian condition. Embedding the information of the linear states into the estimated nonlinear states helps to reduce the estimation variance and improve the accuracy of target number estimation. Mean-shift kernel density estimation, which by nature searches for peak values via an adaptive gradient ascent iteration, is introduced to cluster particles and extract target states; it is independent of the target number and can converge to the local peak position of the PHD distribution while avoiding errors due to inaccuracy in modeling and parameter estimation. Experiments show that the proposed algorithm can obtain higher tracking accuracy when using fewer sampling particles and has lower computational complexity compared with the PF-PHD.

  19. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    Science.gov (United States)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  20. Obtaining the Probability Vector Current Density in Canonical Quantum Mechanics by Linear Superposition

    CERN Document Server

    Kauffmann, Steven Kenneth

    2013-01-01

    The quantum mechanics status of the probability vector current density has long seemed to be marginal. On one hand no systematic prescription for its construction is provided, and the special examples of it that are obtained for particular types of Hamiltonian operator could conceivably be attributed to happenstance. On the other hand this concept's key physical interpretation as local average particle flux, which flows from the equation of continuity that it is supposed to satisfy in conjunction with the probability scalar density, has been claimed to breach the uncertainty principle. Given the dispiriting impact of that claim, we straightaway point out that the subtle directional nature of the uncertainty principle makes it consistent with the measurement of local average particle flux. We next focus on the fact that the unique closed-form linear-superposition quantization of any classical Hamiltonian function yields in tandem the corresponding unique linear-superposition closed-form divergence of the proba...
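
    For orientation, the standard textbook special case behind this discussion (not the paper's generalized linear-superposition construction): for a Hamiltonian of the form H = p²/2m + V(r), the probability density and probability current density satisfy a continuity equation,

      \rho(\mathbf{r},t) = |\psi(\mathbf{r},t)|^2, \qquad
      \mathbf{j}(\mathbf{r},t) = \frac{\hbar}{m}\,\mathrm{Im}\!\left[\psi^*(\mathbf{r},t)\,\nabla\psi(\mathbf{r},t)\right], \qquad
      \frac{\partial\rho}{\partial t} + \nabla\cdot\mathbf{j} = 0 .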

  1. Analytical formulation of the single-visit completeness joint probability density function

    CERN Document Server

    Garrett, Daniel

    2016-01-01

    We derive an exact formulation of the multivariate integral representing the single-visit obscurational and photometric completeness joint probability density function for arbitrary distributions for planetary parameters. We present a derivation of the region of nonzero values of this function which extends previous work, and discuss time and computational complexity costs and benefits of the method. We present a working implementation, and demonstrate excellent agreement between this approach and Monte Carlo simulation results

  2. Steady-state probability density function in wave turbulence under large volume limit

    Institute of Scientific and Technical Information of China (English)

    Yeontaek Choi; Sang Gyu Jo

    2011-01-01

    We investigate the possibility for a two-mode probability density function (PDF) to have a non-zero-flux steady-state solution. We take the large volume limit so that the space of modes becomes continuous. It is shown that in this limit all the steady-state two- or higher-mode PDFs are products of one-mode PDFs. The flux of this steady-state solution turns out to be zero for any finite-mode PDF.

  3. Energy Quantization and Probability Density of Electron in Intense-Field-Atom Interactions

    Institute of Scientific and Technical Information of China (English)

    敖淑艳; 程太旺; 李晓峰; 吴令安; 付盘铭

    2003-01-01

    We find that, due to the quantum correlation between the electron and the field, the electronic energy also becomes quantized, manifesting the particle aspect of light in the electron-light interaction. The probability amplitude of finding the electron with a given energy is given by a generalized Bessel function, which can be represented as a coherent superposition of contributions from a few electronic quantum trajectories. This concept is illustrated by comparing the spectral density of the electron with the laser-assisted recombination spectrum.

  4. Probability Density Function for Waves Propagating in a Straight PEC Rough Wall Tunnel

    Energy Technology Data Exchange (ETDEWEB)

    Pao, H

    2004-11-08

    The probability density function for a wave propagating in a straight perfect electrical conductor (PEC) rough wall tunnel is deduced from the mathematical models of the random electromagnetic fields. The field propagating in caves or tunnels is a complex-valued Gaussian random process by the Central Limit Theorem. The probability density function for the single modal field amplitude in such a structure is Ricean. Since both the expected value and the standard deviation of this field depend only on radial position, the probability density function, which gives the power distribution, is a radially dependent function. The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels that are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. In this contribution, we present the most important statistical property, the field probability density function, of a wave propagating in a straight PEC rough wall tunnel. This work only studies the simplest case, a PEC boundary, which is not the real world, but the methods and conclusions developed herein are applicable to real-world problems in which the boundary is dielectric. The mechanisms behind electromagnetic wave propagation in caves or tunnels are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance
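
    The Ricean claim for a single modal amplitude is easy to check numerically; the sketch below uses an illustrative coherent amplitude and scatter strength (not values from the paper) and compares a simulated amplitude histogram with scipy's Ricean density.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)

      # Single-mode amplitude = |coherent component + complex Gaussian scatter|
      A, sigma, n = 1.0, 0.5, 200000
      field = A + sigma * (rng.normal(size=n) + 1j * rng.normal(size=n))
      amplitude = np.abs(field)

      # Compare the amplitude histogram with the Ricean density from scipy
      hist, edges = np.histogram(amplitude, bins=80, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      model = stats.rice.pdf(centers, b=A / sigma, scale=sigma)
      print(f"max |histogram - Ricean model| ≈ {np.max(np.abs(hist - model)):.3f}")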

  5. Unification of field theory and maximum entropy methods for learning probability densities.

    Science.gov (United States)

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  6. On the reliability of observational measurements of column density probability distribution functions

    CERN Document Server

    Ossenkopf, Volker; Schneider, Nicola; Federrath, Christoph; Klessen, Ralf S

    2016-01-01

    Probability distribution functions (PDFs) of column densities are an established tool to characterize the evolutionary state of interstellar clouds. Using simulations, we show to what degree their determination is affected by noise, line-of-sight contamination, field selection, and the incomplete sampling in interferometric measurements. We solve the integrals that describe the convolution of a cloud PDF with contaminating sources and study the impact of missing information on the measured column density PDF. The effect of observational noise can be easily estimated and corrected for if the root mean square (rms) of the noise is known. For $\\sigma_{noise}$ values below 40\\,\\% of the typical cloud column density, $N_{peak}$, this involves almost no degradation of the accuracy of the PDF parameters. For higher noise levels and narrow cloud PDFs the width of the PDF becomes increasingly uncertain. A contamination by turbulent foreground or background clouds can be removed as a constant shield if the PDF of the c...

  7. Comparative assessment of surface fluxes from different sources using probability density distributions

    Science.gov (United States)

    Gulev, Sergey; Tilinina, Natalia; Belyaev, Konstantin

    2015-04-01

    Surface turbulent heat fluxes from modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA) as well as from satellite products (SEAFLUX, IFREMER, HOAPS) were intercompared using the framework of probability distributions for sensible and latent heat fluxes. For the approximation of probability distributions and the estimation of extreme flux values, the Modified Fisher-Tippett (MFT) distribution has been used. Besides mean flux values, consideration is given to the comparative analysis of (i) parameters of the MFT probability density functions (scale and location), (ii) extreme flux values corresponding to high-order percentiles of fluxes (e.g. the 99th and higher) and (iii) the fractional contribution of extreme surface flux events to the total surface turbulent fluxes integrated over months and seasons. The latter was estimated using both the fractional distribution derived from the MFT and empirical estimates based upon occurrence histograms. The strongest differences in the parameters of the probability distributions of surface fluxes and in extreme surface flux values between different reanalyses are found in the western boundary current extension regions and at high latitudes, while the largest differences in the fractional contributions of surface fluxes may occur in mid-ocean regions, being closely associated with atmospheric synoptic dynamics. Generally, satellite surface flux products demonstrate relatively stronger extreme fluxes compared to reanalyses, even in the Northern Hemisphere midlatitudes where the data assimilation input in reanalyses is quite dense compared to the Southern Ocean regions.
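
    The "fractional contribution of extremes" diagnostic can be sketched with the empirical-histogram route mentioned in the abstract (the MFT-based estimate is not reproduced; the flux sample below is synthetic):

      import numpy as np

      rng = np.random.default_rng(11)

      # Synthetic surface flux sample (W m^-2) with a heavy upper tail
      flux = rng.gamma(shape=2.0, scale=60.0, size=10000)

      # Extreme value at the 99th percentile and the fractional contribution of
      # events above it to the integrated flux
      p99 = np.percentile(flux, 99)
      frac = flux[flux > p99].sum() / flux.sum()
      print(f"99th percentile ≈ {p99:.0f} W/m^2, extremes contribute {100 * frac:.1f}% of the total")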

  8. Probability Density Function of the Output Current of Cascaded Multiplexer/Demultiplexers in Transparent Optical Networks

    Science.gov (United States)

    Rebola, João L.; Cartaxo, Adolfo V. T.

    The influence of the concatenation of arbitrary optical multiplexers/demultiplexers (MUX/DEMUXs) on the probability density function (PDF) of the output current of a transparent optical network is assessed. All PDF results obtained analytically are compared with estimates from Monte Carlo simulation and an excellent agreement is achieved. The non-Gaussian behavior of the PDFs, previously reported by other authors for square-law detectors, is significantly enhanced with the number of nodes increase due to the noise accumulation along the cascade of MUX/DEMUXs. The increase of the MUX/DEMUXs bandwidth and detuning also enhances the PDFs non-Gaussian behavior. The PDF shape variation with the detuning depends strongly on the number of nodes. Explanations for the Gaussian approximation (GA) accuracy on the assessment of the performance of a concatenation of optical MUX/DEMUXs are also provided. For infinite extinction ratio and tuned MUX/DEMUXs, the GA error probabilities are, in general, pessimistic, due to the inaccurate estimation of the error probability for both bits. For low extinction ratio, the GA is very accurate due to a balance between the error probabilities estimated for the bits "1" and "0." With the detuning increase, the GA estimates can become optimistic.

  9. Joint probability density function modeling of velocity and scalar in turbulence with unstructured grids

    CERN Document Server

    Bakosi, J; Boybeyi, Z

    2010-01-01

    In probability density function (PDF) methods a transport equation is solved numerically to compute the time and space dependent probability distribution of several flow variables in a turbulent flow. The joint PDF of the velocity components contains information on all one-point one-time statistics of the turbulent velocity field, including the mean, the Reynolds stresses and higher-order statistics. We developed a series of numerical algorithms to model the joint PDF of turbulent velocity, frequency and scalar compositions for high-Reynolds-number incompressible flows in complex geometries using unstructured grids. Advection, viscous diffusion and chemical reaction appear in closed form in the PDF formulation, thus require no closure hypotheses. The generalized Langevin model (GLM) is combined with an elliptic relaxation technique to represent the non-local effect of walls on the pressure redistribution and anisotropic dissipation of turbulent kinetic energy. The governing system of equations is solved fully...

  10. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    Science.gov (United States)

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  11. Spline Histogram Method for Reconstruction of Probability Density Functions of Clusters of Galaxies

    Science.gov (United States)

    Docenko, Dmitrijs; Berzins, Karlis

    We describe the spline histogram algorithm which is useful for visualization of the probability density function setting up a statistical hypothesis for a test. The spline histogram is constructed from discrete data measurements using tensioned cubic spline interpolation of the cumulative distribution function which is then differentiated and smoothed using the Savitzky-Golay filter. The optimal width of the filter is determined by minimization of the Integrated Square Error function. The current distribution of the TCSplin algorithm written in f77 with IDL and Gnuplot visualization scripts is available from www.virac.lv/en/soft.html.
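
    A rough Python sketch of the same pipeline, with a plain cubic spline standing in for the tensioned spline and illustrative Gaussian data in place of cluster measurements: interpolate the empirical CDF, differentiate it, and smooth the result with a Savitzky-Golay filter.

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.signal import savgol_filter

      rng = np.random.default_rng(7)
      x = np.sort(rng.normal(0.0, 1.0, 500))          # stand-in for measured data

      # Empirical cumulative distribution function at the data points
      F = (np.arange(1, x.size + 1) - 0.5) / x.size

      # Interpolate the CDF, differentiate to get a density, then smooth it
      grid = np.linspace(x[0], x[-1], 400)
      cdf_spline = CubicSpline(x, F)
      pdf_raw = cdf_spline(grid, 1)                    # first derivative of the spline
      pdf_smooth = savgol_filter(pdf_raw, window_length=31, polyorder=3)
      pdf_smooth = np.clip(pdf_smooth, 0.0, None)      # a density cannot be negative

      print(f"integral of smoothed density ≈ {np.sum(pdf_smooth) * (grid[1] - grid[0]):.3f}")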

  12. Spline histogram method for reconstruction of probability density function of clusters of galaxies

    CERN Document Server

    Docenko, D; Docenko, Dmitrijs; Berzins, Karlis

    2003-01-01

    We describe the spline histogram algorithm which is useful for visualization of the probability density function setting up a statistical hypothesis for a test. The spline histogram is constructed from discrete data measurements using tensioned cubic spline interpolation of the cumulative distribution function which is then differentiated and smoothed using the Savitzky-Golay filter. The optimal width of the filter is determined by minimization of the Integrated Square Error function. The current distribution of the TCSplin algorithm written in f77 with IDL and Gnuplot visualization scripts is available from http://www.virac.lv/en/soft.html

  13. Spectral discrete probability density function of measured wind turbine noise in the far field.

    Science.gov (United States)

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources.

  14. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    Science.gov (United States)

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.

  15. Understanding star formation in molecular clouds I. A universal probability distribution of column densities ?

    CERN Document Server

    Schneider, N; Csengeri, T; Klessen, R; Federrath, C; Tremblin, P; Girichidis, P; Bontemps, S; Andre, Ph

    2014-01-01

    Column density maps of molecular clouds are one of the most important observables in the context of molecular cloud and star-formation (SF) studies. With Herschel it is now possible to reveal rather precisely the column density of dust, which is the best tracer of the bulk of material in molecular clouds. However, line-of-sight (LOS) contamination from fore- or background clouds can lead to an overestimation of the dust emission of molecular clouds, in particular for distant clouds. This implies overly high values for column density and mass, and a misleading interpretation of probability distribution functions (PDFs) of the column density. In this paper, we demonstrate by using observations and simulations how LOS contamination affects the PDF. We apply a first-order approximation (removing a constant level) to the molecular clouds of Auriga and Maddalena (low-mass star-forming), and Carina and NGC 3603 (both high-mass SF regions). In perfect agreement with the simulations, we find that the PDFs become broader, ...

  16. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    Science.gov (United States)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
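
    The hash-table binning idea can be sketched in a few lines of Python (a toy stand-in, not the authors' BASH-table implementation): a dictionary keyed by bin-index tuples stores only the occupied bins, so memory scales with the data rather than with the number of bins raised to the dimension.

      import numpy as np
      from collections import defaultdict

      class HashBinDensity:
          # Sparse multidimensional histogram keyed by bin-index tuples
          def __init__(self, bin_width):
              self.h = bin_width
              self.counts = defaultdict(int)
              self.n = 0

          def fit(self, X):
              for row in X:
                  self.counts[tuple(np.floor(row / self.h).astype(int))] += 1
              self.n = len(X)
              return self

          def density(self, point):
              key = tuple(np.floor(np.asarray(point) / self.h).astype(int))
              d = len(point)
              return self.counts[key] / (self.n * self.h**d)

      # Toy "colors" of sources in 5 dimensions
      rng = np.random.default_rng(8)
      X = rng.normal(0.0, 1.0, size=(200000, 5))
      est = HashBinDensity(bin_width=0.5).fit(X)
      print(f"occupied bins: {len(est.counts)}, density near the origin ≈ {est.density(np.zeros(5)):.3f}")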

  17. Dark matter density profiles of the halos embedding early-type galaxies: characterizing halo contraction and dark matter annihilation strength

    CERN Document Server

    Chae, Kyu-Hyun; Frieman, Joshua A; Bernardi, Mariangela

    2012-01-01

    Identifying dark matter and characterizing its distribution in the inner region of halos embedding galaxies are inter-related problems of broad importance. We devise a new procedure for determining the dark matter distribution in halos. We first make a self-consistent bivariate statistical match of stellar mass and velocity dispersion with halo mass, as demonstrated here for the first time. Then, selecting early-type galaxy-halo systems, we perform Jeans dynamical modeling with the aid of observed statistical properties of stellar mass profiles and velocity dispersion profiles. Dark matter density profiles derived specifically using Sloan Digital Sky Survey galaxies and halos from up-to-date cosmological dissipationless simulations deviate significantly from the dissipationless profile of Navarro-Frenk-White or Einasto in terms of inner density slope and/or concentration. From these dark matter profiles we find that dark matter density is enhanced in the inner region of most early-type galactic halos providing an ind...

  18. Firing statistics of inhibitory neuron with delayed feedback. I. Output ISI probability density.

    Science.gov (United States)

    Vidybida, A K; Kravchuk, K G

    2013-06-01

    Activity of an inhibitory neuron with delayed feedback is considered in the framework of point stochastic processes. The neuron receives excitatory input impulses from a Poisson stream, and inhibitory impulses from the feedback line with a delay. We investigate here how the presence of inhibitory feedback affects the output firing statistics. Using the binding neuron (BN) as a model, we derive analytically the exact expressions for the output interspike interval (ISI) probability density, mean output ISI and coefficient of variation as functions of the model's parameters for the case of threshold 2. Using the leaky integrate-and-fire (LIF) model, as well as the BN model with higher thresholds, these statistical quantities are found numerically. In contrast to the previously studied situation of no feedback, the ISI probability densities found here for both the BN and LIF neuron become bimodal and have a discontinuity of jump type. Nevertheless, the presence of inhibitory delayed feedback was not found to affect substantially the output ISI coefficient of variation. The ISI coefficient of variation found ranges between 0.5 and 1. It is concluded that introduction of delayed inhibitory feedback can radically change neuronal output firing statistics. These statistics are also distinct from what was found previously (Vidybida and Kravchuk, 2009) by a similar method for an excitatory neuron with delayed feedback.
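
    The qualitative setup can be reproduced with a small simulation; the sketch below uses a leaky integrate-and-fire neuron with illustrative parameter values (not those of the paper) and one delayed inhibitory impulse fed back per output spike, then reports the ISI mean and coefficient of variation.

      import numpy as np

      rng = np.random.default_rng(9)

      # Leaky integrate-and-fire neuron driven by excitatory Poisson impulses; every output
      # spike schedules one inhibitory feedback impulse after a fixed delay (toy parameters)
      dt, T = 0.1, 5000.0            # time step and total time (ms)
      tau_m, v_th = 10.0, 1.0        # membrane time constant (ms), firing threshold
      w_exc, w_inh = 0.5, 0.5        # impulse amplitudes
      rate_exc = 0.15                # excitatory impulse rate (1/ms)
      delay = 8.0                    # feedback delay (ms)

      v, spikes, feedback = 0.0, [], []
      for step in range(int(T / dt)):
          t = step * dt
          v -= (v / tau_m) * dt                      # leak
          if rng.random() < rate_exc * dt:           # excitatory Poisson impulse
              v += w_exc
          while feedback and feedback[0] <= t:       # delayed inhibitory impulse arrives
              v = max(v - w_inh, 0.0)
              feedback.pop(0)
          if v >= v_th:                              # threshold crossing: spike and reset
              spikes.append(t)
              feedback.append(t + delay)
              v = 0.0

      isi = np.diff(spikes)
      print(f"{len(spikes)} spikes, mean ISI ≈ {isi.mean():.1f} ms, ISI CV ≈ {isi.std() / isi.mean():.2f}")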

  19. Analysis of Observation Data of Earth-Rockfill Dam Based on Cloud Probability Distribution Density Algorithm

    Directory of Open Access Journals (Sweden)

    Han Liwei

    2014-07-01

    Full Text Available Monitoring data on an earth-rockfill dam constitute a form of spatial data. Such data include much uncertainty owing to the limitations of measurement information, material parameters, load, geometry size, initial conditions, boundary conditions and the calculation model. So the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertainty transition between the qualitative concept and the quantitative description. Then an improved algorithm for the cloud probability distribution density based on a backward cloud generator was proposed. This was used to effectively convert parcels of accurate data into concepts that can be described by proper qualitative linguistic values. Such a qualitative description was expressed as the cloud numerical characteristics {Ex, En, He}, which can represent the characteristics of all cloud drops. The algorithm was then applied to analyze the observation data of a piezometric tube in an earth-rockfill dam. Experimental results proved that the proposed algorithm is feasible and can reveal the changing pattern of the piezometric tube's water level, and that seepage damage in the dam body can be detected.
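
    One common formulation of the backward cloud generator, sketched here on hypothetical water-level data (this may differ in detail from the paper's improved algorithm): Ex from the sample mean, En from the first absolute central moment, He from the variance left over after removing En².

      import numpy as np

      def backward_cloud(samples):
          # A common backward cloud generator without certainty degrees; treat this as an
          # illustrative sketch, not necessarily the paper's exact algorithm
          x = np.asarray(samples, dtype=float)
          Ex = x.mean()
          En = np.sqrt(np.pi / 2.0) * np.mean(np.abs(x - Ex))
          He = np.sqrt(max(x.var(ddof=1) - En**2, 0.0))
          return Ex, En, He

      # Hypothetical piezometric-tube water levels (m)
      rng = np.random.default_rng(10)
      levels = 32.0 + rng.normal(0.0, 0.4, 365) + 0.1 * rng.normal(0.0, 1.0, 365)**2
      Ex, En, He = backward_cloud(levels)
      print(f"Ex = {Ex:.2f} m, En = {En:.2f} m, He = {He:.3f} m")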

  20. A Priori Knowledge and Probability Density Based Segmentation Method for Medical CT Image Sequences

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2014-01-01

    Full Text Available This paper briefly introduces a novel segmentation strategy for CT image sequences. As the first step of our strategy, we extract a priori intensity statistical information from the object region that is manually segmented by radiologists. Then we define a search scope for the object and calculate the probability density for each pixel in the scope using a voting mechanism. Moreover, we generate an optimal initial level set contour based on the a priori shape of the object in the previous slice. Finally, a modified distance-regularized level set method utilizes boundary features and the probability density to determine the final object. The main contributions of this paper are as follows: a priori knowledge is effectively used to guide the determination of objects, and a modified distance-regularized level set method can accurately extract the actual contour of an object in a short time. The proposed method is compared to seven other state-of-the-art medical image segmentation methods on abdominal CT image sequence datasets. The evaluation results demonstrate that our method performs better and has potential for segmentation in CT image sequences.

  1. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    Science.gov (United States)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-action market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based

  2. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    Science.gov (United States)

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
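
    For reference, the construction the abstract refers to in the one-dimensional case (a standard textbook result): the momentum probability density of the n-th eigenstate of an infinite well of width L follows from the Fourier transform of the position-space wavefunction,

      \phi_n(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_0^L \sqrt{\tfrac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right) e^{-ipx/\hbar}\,dx, \qquad P_n(p) = |\phi_n(p)|^2 ,

    which is a continuous distribution, peaked near but not confined to p = ±nπħ/L.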

  3. A probability density function of liftoff velocities in mixed-size wind sand flux

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    With the discrete element method (DEM), employing the diameter distribution of natural sands sampled from the Tengger Desert, a mixed-size sand bed was produced and the particle-bed collision was simulated for the mixed-size wind-blown sand movement. In the simulation, the shear wind velocity, particle diameter, incident velocity and incident angle of the impacting sand particle were given the same values as in the experimental results. After the particle-bed collision, we collected all the initial velocities of the rising sand particles, including the liftoff angular velocities, liftoff linear velocities and their horizontal and vertical components. By statistical analysis of the velocity sample for each velocity component, its probability density functions were obtained; they are functions of the shear wind velocity. The liftoff velocities and their horizontal and vertical components are distributed as an exponential density function, while the angular velocities are distributed as a normal density function.

  4. Particle filters for probability hypothesis density filter with the presence of unknown measurement noise covariance

    Institute of Scientific and Technical Information of China (English)

    Wu Xinhui; Huang Gaoming; Gao Jun

    2013-01-01

    In Bayesian multi-target filtering, knowledge of the measurement noise variance is very important. Significant mismatches in noise parameters will result in biased estimates. In this paper, a new particle filter for a probability hypothesis density (PHD) filter handling unknown measurement noise variances is proposed. The approach is based on marginalizing the unknown parameters out of the posterior distribution by using variational Bayesian (VB) methods. Moreover, the sequential Monte Carlo method is used to approximate the posterior intensity considering non-linear and non-Gaussian conditions. Unlike other particle filters for this challenging class of PHD filters, the proposed method can adaptively learn the unknown and time-varying noise variances while filtering. Simulation results show that the proposed method improves estimation accuracy in terms of both the number of targets and their states.

  5. The use of the compound probability density function in ultrasonic tissue characterization

    Energy Technology Data Exchange (ETDEWEB)

    Shankar, P M [Department of Electrical and Computer Engineering, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104 (United States)

    2004-03-21

    Recently, a compound probability density function (pdf) was proposed to model the envelope of the ultrasonic backscattered echo from tissues. This pdf allows for local and global variations in scattering cross sections and even multiple scattering in tissue. It approximates the Nakagami, K or Rayleigh distributions under different limiting conditions, thus making it very versatile. In this work, a new parameter associated with the compound pdf, the speckle factor, has been introduced to characterize the scattering conditions. The usefulness of this parameter for tissue characterization has been explored through computer simulation of ultrasonic A scans and analyses of the data collected from tissue-mimicking phantoms. Results suggest potential applications of the compound pdf and its parameters in ultrasonic tissue characterization.

  6. Evaluate the Word Error Rate of Binary Block Codes with Square Radius Probability Density Function

    CERN Document Server

    Chen, Xiaogang; Gu, Jian; Yang, Hongkui

    2007-01-01

    The word error rate (WER) of soft-decision-decoded binary block codes rarely has a closed form. Bounding techniques are widely used to evaluate the performance of the maximum-likelihood decoding algorithm, but the existing bounds are not tight enough, especially for low signal-to-noise ratios, and become looser when a suboptimum decoding algorithm is used. This paper proposes a new concept, named the square radius probability density function (SR-PDF) of the decision region, to evaluate the WER. Based on the SR-PDF, the WER of binary block codes can be calculated precisely for ML and suboptimum decoders. Furthermore, for a long binary block code, the SR-PDF can be approximated by a Gamma distribution with only two parameters that can be measured easily. Using this property, two closed-form approximate expressions are proposed which are very close to the simulated WER of the codes of interest.

  7. Analytical computation of the magnetization probability density function for the harmonic 2D XY model

    CERN Document Server

    Palma, G

    2009-01-01

    The probability density function (PDF) of some global average quantity plays a fundamental role in critical and highly correlated systems. We explicitly compute this quantity as a function of the magnetization for the two-dimensional XY model in its harmonic approximation. Numerical simulations and perturbative results have shown a Gumbel-like shape of the PDF, in spite of the fact that the average magnetization is not an extreme variable. Our analytical result allows us to test both perturbative analytical expansions and previous numerical computations. Perfect agreement is found for the first moments of the PDF. Also, for large volume and in the high-temperature limit, the distribution becomes Gaussian, as it should be. In the low-temperature regime its numerical evaluation is compatible with a Gumbel distribution.

  8. METAPHOR: A machine learning based method for the probability density estimation of photometric redshifts

    CERN Document Server

    Cavuoti, Stefano; Brescia, Massimo; Vellucci, Civita; Tortora, Crescenzo; Longo, Giuseppe

    2016-01-01

    A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z's). A wide variety of methods has been developed, based either on template-model fitting or on empirical explorations of the photometric parameter space. Machine learning based techniques are not explicitly dependent on the physical priors and are able to produce accurate photo-z estimations within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z Probability Density Function (PDF), due to the fact that the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use...

  9. ANNz2 - Photometric redshift and probability density function estimation using machine learning methods

    CERN Document Server

    Sadeh, Iftach; Lahav, Ofer

    2015-01-01

    We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister and Lahav (2004). Large photometric galaxy surveys are important for cosmological studies, and in particular for characterizing the nature of dark energy. The success of such surveys greatly depends on the ability to measure photo-zs, based on limited spectral data. ANNz2 utilizes multiple machine learning methods, such as artificial neural networks, boosted decision/regression trees and k-nearest neighbours. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions (PDFs) in two different ways. In addition, estimators are incorporated to mitigate possible problems of spectroscopic training samples which are not representative or are incomplete. ANNz2 is also adapted to provide optimized solution...

  10. Probability Density Components Analysis: A New Approach to Treatment and Classification of SAR Images

    Directory of Open Access Journals (Sweden)

    Osmar Abílio de Carvalho Júnior

    2014-04-01

    Full Text Available Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR), which causes a usual noise-like granular aspect and complicates the image classification. In SAR image analysis, the spatial information might be a particular benefit for denoising and mapping classes characterized by a statistical distribution of the pixel intensities from a complex and heterogeneous spectral response. This paper proposes the Probability Density Components Analysis (PDCA), a new alternative that combines filtering and frequency histograms to improve the classification procedure for single-channel synthetic aperture radar (SAR) images. This method was tested on L-band SAR data from the Advanced Land Observation System (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) sensor. The study area is located in the Brazilian Amazon rainforest, northern Rondônia State (municipality of Candeias do Jamari), containing forest and land use patterns. The proposed algorithm uses a moving window over the image, estimating the probability density curve in different image components. Therefore, a single input image generates an output with multiple components. Initially the multi-component data should be treated by noise-reduction methods, such as maximum noise fraction (MNF) or noise-adjusted principal components (NAPCs). Both methods enable reducing noise as well as ordering the multi-component data in terms of image quality. In this paper, the NAPC applied to the multi-components provided large reductions in the noise levels, and the color composites considering the first NAPC enhance the classification of different surface features. In the spectral classification, the Spectral Correlation Mapper and Minimum Distance were used. The results obtained are similar to the visual interpretation of optical images from TM-Landsat and Google Maps.

  11. Fading probability density function of free-space optical communication channels with pointing error

    Science.gov (United States)

    Zhao, Zhijun; Liao, Rui

    2011-06-01

    The turbulent atmosphere causes wavefront distortion, beam wander, and beam broadening of a laser beam. These effects result in average power loss and instantaneous power fading at the receiver aperture and thus degrade the performance of a free-space optical (FSO) communication system. In addition to atmospheric turbulence, an FSO communication system may also suffer from laser beam pointing error. The pointing error causes excessive power loss and power fading. This paper proposes and studies an analytical method for calculating the FSO channel fading probability density function (pdf) induced by both atmospheric turbulence and pointing error. This method is based on the fast-tracked laser beam fading profile and the joint effects of beam wander and pointing error. In order to evaluate the proposed analytical method, large-scale numerical wave-optics simulations are conducted. Three types of pointing errors are studied, namely, the Gaussian random pointing error, the residual tracking error, and the sinusoidal sway pointing error. The FSO system employs a collimated Gaussian laser beam propagating along a horizontal path. The propagation distances range from 0.25 miles to 2.5 miles. The refractive index structure parameter is chosen to be Cn^2 = 5×10^-15 m^-2/3 and Cn^2 = 5×10^-13 m^-2/3. The studied cases cover weak to strong fluctuations. The fading pdf curves of channels with pointing error calculated using the analytical method match accurately the corresponding pdf curves obtained directly from large-scale wave-optics simulations. They also give accurate average bit-error-rate (BER) curves and outage probabilities. Both the lognormal and the best-fit gamma-gamma fading pdf curves deviate from the corresponding simulation curves, and they produce overoptimistic average BER curves and outage probabilities.

  12. Probability Density Function Characterization for Aggregated Large-Scale Wind Power Based on Weibull Mixtures

    Directory of Open Access Journals (Sweden)

    Emilio Gómez-Lázaro

    2016-02-01

    Full Text Available The Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values and wind power curtailment.

  13. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs for calculating the encounter probability. Previous work on fault population statistics is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.

  14. Development and evaluation of probability density functions for a set of human exposure factors

    Energy Technology Data Exchange (ETDEWEB)

    Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.

    1999-06-01

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999, will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information, including probability density functions (PDFs) and the methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  15. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    Science.gov (United States)

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters must be estimated from small sample sizes, which induces errors in the estimated HOS parameters and hinders real-time, precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics appear more robust than HOS parameters to the small-sample-size effect and more accurate in sEMG PDF shape screening applications.
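
    A minimal sketch of the underlying idea, under simplifying assumptions (synthetic log-normal data standing in for sEMG amplitudes, and a plain Gaussian kernel density estimate rather than the CSM shape distances): compare direct HOS estimates of skewness and kurtosis with moments computed from the smoothed density for a small sample.

    ```python
    # Sketch: sample HOS vs. KDE-based moments for a small, skewed sample.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.lognormal(mean=0.0, sigma=0.5, size=100)   # small synthetic sample

    # Direct higher-order-statistics (HOS) estimates
    print("sample skewness:", stats.skew(x), " sample excess kurtosis:", stats.kurtosis(x))

    # Kernel density estimate, then moments of the smoothed density on a grid
    kde = stats.gaussian_kde(x)
    g = np.linspace(x.min() - 1.0, x.max() + 1.0, 2000)
    p = kde(g)
    p /= np.trapz(p, g)
    m = np.trapz(g * p, g)
    var = np.trapz((g - m) ** 2 * p, g)
    skew_kde = np.trapz((g - m) ** 3 * p, g) / var ** 1.5
    kurt_kde = np.trapz((g - m) ** 4 * p, g) / var ** 2 - 3.0
    print("KDE-based skewness:", skew_kde, " KDE-based excess kurtosis:", kurt_kde)
    ```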

  16. An empirical method for estimating probability density functions of gridded daily minimum and maximum temperature

    Science.gov (United States)

    Lussana, C.

    2013-04-01

    The presented work investigates the probability density functions (PDFs) of gridded daily minimum (TN) and maximum (TX) temperature, with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure uses the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density, since otherwise the elaboration could be influenced by the unsettled station distribution. The PDF used in this study is based on the Gaussian distribution, but it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events is properly defined, the information can be delivered to users concisely on a local scale, for example: TX extremely cold/hot or TN extremely cold/hot.
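
    As an illustration of the general approach (not the ARPA Lombardia implementation), a skewed Gaussian family such as SciPy's skew-normal can be fitted to daily TX values at a grid point and its tail quantiles used as "extremely cold/hot" thresholds. The synthetic data, the 1%/99% quantiles, and the skew-normal choice below are assumptions.

    ```python
    # Sketch: fit a skew-normal to synthetic daily TX values and derive
    # cold/hot extreme thresholds from its tail quantiles (assumed setup).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    tx = stats.skewnorm.rvs(a=-3.0, loc=30.0, scale=5.0, size=3000, random_state=rng)  # degC

    a_hat, loc_hat, scale_hat = stats.skewnorm.fit(tx)
    cold, hot = stats.skewnorm.ppf([0.01, 0.99], a_hat, loc_hat, scale_hat)
    print(f"extremely cold TX below {cold:.1f} degC, extremely hot TX above {hot:.1f} degC")
    ```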

  17. Entrainment Rate in Shallow Cumuli: Dependence on Entrained Dry Air Sources and Probability Density Functions

    Science.gov (United States)

    Lu, C.; Liu, Y.; Niu, S.; Vogelmann, A. M.

    2012-12-01

    In situ aircraft cumulus observations from the RACORO field campaign are used to estimate entrainment rate for individual clouds using a recently developed mixing fraction approach. The entrainment rate is computed based on the observed state of the cloud core and the state of the air that is laterally mixed into the cloud at its edge. The computed entrainment rate decreases when the air is entrained from increasing distance from the cloud core edge; this is because the air farther away from cloud edge is drier than the neighboring air that is within the humid shells around cumulus clouds. Probability density functions of entrainment rate are well fitted by lognormal distributions at different heights above cloud base for different dry air sources (i.e., different source distances from the cloud core edge). Such lognormal distribution functions are appropriate for inclusion into future entrainment rate parameterization in large scale models. To the authors' knowledge, this is the first time that probability density functions of entrainment rate have been obtained in shallow cumulus clouds based on in situ observations. The reason for the wide spread of entrainment rate is that the observed clouds are affected by entrainment mixing processes to different extents, which is verified by the relationships between the entrainment rate and cloud microphysics/dynamics. The entrainment rate is negatively correlated with liquid water content and cloud droplet number concentration due to the dilution and evaporation in entrainment mixing processes. The entrainment rate is positively correlated with relative dispersion (i.e., ratio of standard deviation to mean value) of liquid water content and droplet size distributions, consistent with the theoretical expectation that entrainment mixing processes are responsible for microphysics fluctuations and spectral broadening. The entrainment rate is negatively correlated with vertical velocity and dissipation rate because entrainment

  18. Representation of layer-counted proxy records as probability densities on error-free time axes

    Science.gov (United States)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2016-04-01

    Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, such as ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties increasingly affect a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the

  19. Probability density function and estimation for error of digitized map coordinates in GIS

    Institute of Scientific and Technical Information of China (English)

    童小华; 刘大杰

    2004-01-01

    Traditionally, it is widely accepted that measurement error obeys the normal distribution. In this paper, however, a new idea is proposed: the error in digitized data, which is a major derived data source in GIS, does not obey the normal distribution but rather the p-norm distribution with a determinate parameter. Assuming that the error is random and has the same statistical properties, the probability density functions of the normal distribution, the Laplace distribution and the p-norm distribution are derived based on the arithmetic-mean axiom, the median axiom and the p-median axiom, respectively, which shows that the normal distribution is only one of these possible distributions rather than the only one. Based on this idea, distribution fitness tests such as the skewness and kurtosis coefficient tests, the Pearson chi-square test and the Kolmogorov test are conducted for digitized data. The results show that the error in map digitization obeys the p-norm distribution with a parameter close to 1.60. A least p-norm estimation and the least-squares estimation of digitized data are further analyzed, showing that the least p-norm adjustment is better than the least-squares adjustment for digitized data processing in GIS.
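
    The p-norm family discussed above corresponds to the generalized normal distribution, available in SciPy as gennorm, whose shape parameter beta plays the role of p (beta = 2 gives the normal, beta = 1 the Laplace distribution). The sketch below uses synthetic errors generated with p = 1.6, mimicking the reported value; it is an illustration, not the paper's test procedure.

    ```python
    # Sketch: model digitization error with the p-norm (generalized normal)
    # family and estimate p by maximum likelihood (synthetic data).
    from scipy import stats
    import numpy as np

    rng = np.random.default_rng(2)
    err = stats.gennorm.rvs(beta=1.6, scale=0.5, size=5000, random_state=rng)

    p_hat, loc_hat, scale_hat = stats.gennorm.fit(err)
    print(f"estimated p = {p_hat:.2f}, location = {loc_hat:.3f}, scale = {scale_hat:.3f}")

    # Goodness-of-fit comparison: fitted normal vs. fitted p-norm (Kolmogorov test)
    print("KS vs fitted normal :", stats.kstest(err, "norm", args=stats.norm.fit(err)))
    print("KS vs fitted gennorm:", stats.kstest(err, "gennorm", args=(p_hat, loc_hat, scale_hat)))
    ```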

  20. Scaling of maximum probability density functions of velocity and temperature increments in turbulent systems

    CERN Document Server

    Huang, Y X; Zhou, Q; Qiu, X; Shang, X D; Lu, Z M; Liu, Y L

    2014-01-01

    In this paper, we introduce a new way to estimate the scaling parameter of a self-similar process by considering the maximum probability density function (pdf) of its increments. We prove this for $H$-self-similar processes in general and experimentally investigate it for turbulent velocity and temperature increments. We consider a turbulent velocity database from an experimental homogeneous and nearly isotropic turbulent channel flow, and a temperature data set obtained near the sidewall of a Rayleigh-B\'{e}nard convection cell, where the turbulent flow is driven by buoyancy. For the former database, it is found that the maximum value of the increment pdf $p_{\max}(\tau)$ is in good agreement with a lognormal distribution. We also obtain a scaling exponent $\alpha\simeq 0.37$, which is consistent with the scaling exponent for the first-order structure function reported in other studies. For the latter, we obtain a scaling exponent $\alpha_{\theta}\simeq0.33$. This index value is consistent with the Kolmogorov-Ob...

  1. Vertical overlap of probability density functions of cloud and precipitation hydrometeors

    Energy Technology Data Exchange (ETDEWEB)

    Ovchinnikov, Mikhail [Pacific Northwest National Laboratory, Richland Washington USA]; Lim, Kyo-Sun Sunny [Pacific Northwest National Laboratory, Richland Washington USA; Korea Atomic Energy Research Institute, Daejeon Republic of Korea]; Larson, Vincent E. [Department of Mathematical Sciences, University of Wisconsin-Milwaukee, Milwaukee Wisconsin USA]; Wong, May [Pacific Northwest National Laboratory, Richland Washington USA; National Center for Atmospheric Research, Boulder Colorado USA]; Thayer-Calder, Katherine [National Center for Atmospheric Research, Boulder Colorado USA]; Ghan, Steven J. [Pacific Northwest National Laboratory, Richland Washington USA]

    2016-11-05

    Coarse-resolution climate models increasingly rely on probability density functions (PDFs) to represent subgrid-scale variability of prognostic variables. While PDFs characterize the horizontal variability, a separate treatment is needed to account for the vertical structure of clouds and precipitation. When sub-columns are drawn from these PDFs for microphysics or radiation parameterizations, appropriate vertical correlations must be enforced via PDF overlap specifications. This study evaluates the representation of PDF overlap in the Subgrid Importance Latin Hypercube Sampler (SILHS) employed in the assumed-PDF turbulence and cloud scheme called Cloud Layers Unified By Binormals (CLUBB). PDF overlap in CLUBB-SILHS simulations of continental and tropical oceanic deep convection is compared with the overlap of PDFs of various microphysics variables in cloud-resolving model (CRM) simulations of the same cases that explicitly predict the 3D structure of cloud and precipitation fields. CRM results show that PDF overlap varies significantly between different hydrometeor types, as well as between PDFs of mass and number mixing ratios for each species, a distinction that the current SILHS implementation does not make. In CRM simulations that explicitly resolve cloud and precipitation structures, faster-falling species, such as rain and graupel, exhibit significantly higher coherence in their vertical distributions than slower-falling cloud liquid and ice. These results suggest that, to improve the overlap treatment in the sub-column generator, the PDF correlations need to depend on hydrometeor properties, such as fall speeds, in addition to the currently implemented dependence on the turbulent convective length scale.

  2. On the method of logarithmic cumulants for parametric probability density function estimation.

    Science.gov (United States)

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
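
    For intuition, here is a minimal MoLC sketch for the plain gamma family (a simpler relative of the generalized gamma and K families studied in the paper): the first two log-cumulants of a gamma(shape k, scale theta) sample satisfy c1 = psi(k) + ln(theta) and c2 = psi'(k), which can be inverted numerically. The synthetic sample and the bracketing interval are assumptions.

    ```python
    # Sketch: method of logarithmic cumulants (MoLC) for the gamma family.
    import numpy as np
    from scipy import stats, special, optimize

    rng = np.random.default_rng(3)
    x = stats.gamma.rvs(a=3.0, scale=2.0, size=10000, random_state=rng)

    # First two log-cumulants from the sample
    logx = np.log(x)
    c1, c2 = logx.mean(), logx.var()

    # MoLC equations for gamma(shape k, scale theta):
    #   c1 = digamma(k) + ln(theta),   c2 = trigamma(k)
    k_hat = optimize.brentq(lambda k: special.polygamma(1, k) - c2, 1e-3, 1e3)
    theta_hat = np.exp(c1 - special.digamma(k_hat))
    print(f"MoLC estimates: shape = {k_hat:.3f}, scale = {theta_hat:.3f}")
    ```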

  3. Incorporating Photometric Redshift Probability Density Information into Real-Space Clustering Measurements

    CERN Document Server

    Myers, Adam D; Ball, Nicholas M

    2009-01-01

    The use of photometric redshifts in cosmology is increasing. Often, however, these photo-zs are treated like spectroscopic observations, in that the peak of the photometric redshift, rather than the full probability density function (PDF), is used. This overlooks useful information inherent in the full PDF. We introduce a new real-space estimator for one of the most used cosmological statistics, the 2-point correlation function, that weights by the PDF of individual photometric objects in a manner that is optimal when Poisson statistics dominate. As our estimator does not bin based on the PDF peak, it substantially enhances the clustering signal by usefully incorporating information from all photometric objects that overlap the redshift bin of interest. As a real-world application, we measure QSO clustering in the Sloan Digital Sky Survey (SDSS) and find that our estimator improves the clustering signal by a factor equivalent to increasing the survey size by a factor of 2 to 3. Our technique uses spectroscopic ...
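
    The weighting idea can be sketched as follows (toy Gaussian photo-z PDFs, not the authors' SDSS pipeline): each object contributes to pair counts in a redshift bin with a weight equal to the PDF mass it places in that bin, rather than being kept or discarded based on its PDF peak.

    ```python
    # Sketch: per-object weights from the fraction of each photo-z PDF
    # falling inside the target redshift bin (toy Gaussian PDFs).
    import numpy as np

    z_grid = np.linspace(0.0, 2.0, 401)
    z_lo, z_hi = 0.4, 0.6                      # redshift bin of interest (assumed)

    def gaussian_pdf(z, mu, sig):
        return np.exp(-0.5 * ((z - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

    # Toy catalogue: (photo-z mean, photo-z width) per object
    objects = [(0.45, 0.05), (0.55, 0.15), (0.90, 0.10), (0.50, 0.30)]

    in_bin = (z_grid >= z_lo) & (z_grid <= z_hi)
    for mu, sig in objects:
        pdf = gaussian_pdf(z_grid, mu, sig)
        pdf /= np.trapz(pdf, z_grid)                   # normalize on the grid
        w = np.trapz(pdf[in_bin], z_grid[in_bin])      # PDF mass inside the bin
        print(f"photo-z {mu:.2f} +/- {sig:.2f}: weight = {w:.3f}")
    # These weights would then multiply the pair counts in the correlation estimator.
    ```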

  4. Probability density functions of the average and difference intensities of Friedel opposites.

    Science.gov (United States)

    Shmueli, U; Flack, H D

    2010-11-01

    Trigonometric series for the average (A) and difference (D) intensities of Friedel opposites were carefully rederived and were normalized to minimize their dependence on sin(theta)/lambda. Probability density functions (hereafter p.d.f.s) of these series were then derived by the Fourier method [Shmueli, Weiss, Kiefer & Wilson (1984). Acta Cryst. A40, 651-660] and their expressions, which admit any chemical composition of the unit-cell contents, were obtained for the space group P1. Histograms of A and D were then calculated for an assumed random-structure model and for 3135 Friedel pairs of a published solved crystal structure, and were compared with the p.d.f.s after the latter were scaled up to the histograms. Good agreement was obtained for the random-structure model and a qualitative one for the published solved structure. The results indicate that the residual discrepancy is mainly due to the presumed statistical independence of the p.d.f.'s characteristic function on the contributions of the interatomic vectors.

  5. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    Science.gov (United States)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher-order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy depends on the number of particles or samples used. For this method to be applicable to real-case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining the higher-order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second-order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, one involving a highly eccentric orbit with a lower a priori uncertainty covariance and one a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
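
    A toy sketch of the PCA/ICA comparison on non-Gaussian samples (random mixtures of one heavy-tailed and two Gaussian sources, not the orbit-determination particle clouds): excess kurtosis of the recovered components is used here as a rough proxy for how much higher-order structure each decomposition retains.

    ```python
    # Sketch: PCA vs. ICA components of a mixed non-Gaussian sample, compared
    # by the excess kurtosis of the recovered components (toy data).
    import numpy as np
    from scipy import stats
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(6)
    sources = np.column_stack([rng.laplace(size=5000),     # heavy-tailed source
                               rng.normal(size=5000),
                               rng.normal(size=5000)])
    mixing = rng.normal(size=(3, 3))                       # random mixing matrix
    X = sources @ mixing.T

    pca_scores = PCA(whiten=True).fit_transform(X)
    ica_scores = FastICA(random_state=0).fit_transform(X)

    print("excess kurtosis of PCA components:", np.round(stats.kurtosis(pca_scores), 2))
    print("excess kurtosis of ICA components:", np.round(stats.kurtosis(ica_scores), 2))
    ```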

  6. A measurement-driven adaptive probability hypothesis density filter for multitarget tracking

    Institute of Scientific and Technical Information of China (English)

    Si Weijian; Wang Liwei; Qu Zhiyu

    2015-01-01

    This paper studies the dynamic estimation problem for multitarget tracking. A novel gating strategy that is based on the measurement likelihood of the target state space is proposed to improve the overall effectiveness of the probability hypothesis density (PHD) filter. Firstly, a measurement-driven mechanism based on this gating technique is designed to classify the measurements. In this mechanism, only the measurements for the existing targets are considered in the update step of the existing targets, while the measurements of newborn targets are used for exploring newborn targets. Secondly, the gating strategy enables the development of a heuristic state estimation algorithm when the sequential Monte Carlo (SMC) implementation of the PHD filter is investigated, where the measurements are used to drive the particle clustering within the space gate. The resulting PHD filter can achieve a more robust and accurate estimation of the existing targets by reducing the interference from clutter. Moreover, the target birth intensity can be adaptive to detect newborn targets, which is in accordance with the birth measurements. Simulation results demonstrate the computational efficiency and tracking performance of the proposed algorithm. © 2015 The Authors. Production and hosting by Elsevier Ltd. on behalf of CSAA&BUAA. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

  7. Comparison of Anger camera and BGO mosaic position-sensitive detectors for 'Super ACAR'. Precision electron momentum densities via angular correlation of annihilation radiation

    Energy Technology Data Exchange (ETDEWEB)

    Mills, A.P. Jr. [Bell Labs. Murray Hill, NJ (United States); West, R.N.; Hyodo, Toshio

    1997-03-01

    We discuss the relative merits of Anger cameras and Bismuth Germanate mosaic counters for measuring the angular correlation of positron annihilation radiation at a facility such as the proposed Positron Factory at Takasaki. The two possibilities appear equally cost effective at this time. (author)

  8. The role of presumed probability density functions in the simulation of nonpremixed turbulent combustion

    Science.gov (United States)

    Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.

    2016-07-01

    Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not take into account the interaction between turbulence and chemical kinetics, nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed-PDF closure models. The rationale behind the choice of the three PDFs is described in some detail and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames, and H2-air supersonic combustion.
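
    The presumed-PDF averaging step for the standard model's mixture-fraction marginal can be sketched as follows: match a beta distribution to the mean and variance of Z and integrate a flamelet quantity against it. The function phi and the moment values are placeholders for illustration, not quantities from the paper.

    ```python
    # Sketch: presumed beta-PDF average of a flamelet quantity phi(Z),
    # given the mean and variance of the mixture fraction Z (assumed values).
    import numpy as np
    from scipy import stats

    def presumed_beta_average(phi, z_mean, z_var):
        """Integrate phi(Z) against a beta PDF matched to the first two moments of Z."""
        s = z_mean * (1.0 - z_mean) / z_var - 1.0      # moment matching: a + b = s
        a, b = z_mean * s, (1.0 - z_mean) * s
        z = np.linspace(1e-6, 1.0 - 1e-6, 2000)
        pdf = stats.beta.pdf(z, a, b)
        return np.trapz(phi(z) * pdf, z) / np.trapz(pdf, z)

    phi = lambda z: 4.0 * z * (1.0 - z)                # placeholder for a flamelet table lookup
    print(presumed_beta_average(phi, z_mean=0.3, z_var=0.02))
    ```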

  9. METAPHOR: a machine-learning-based method for the probability density estimation of photometric redshifts

    Science.gov (United States)

    Cavuoti, S.; Amaro, V.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-02-01

    A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z). A wide variety of methods have been developed, based either on template-model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimations within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z probability density function (PDF), due to the fact that the analytical relation mapping the photometric parameters on to the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use of the MLPQNA neural network (Multi Layer Perceptron with Quasi Newton learning rule), with the possibility to easily replace the specific machine-learning model chosen to predict photo-z. We present a summary of results on SDSS-DR9 galaxy data, used also to perform a direct comparison with PDFs obtained by the LE PHARE spectral energy distribution template fitting. We show that METAPHOR is capable of estimating the precision and reliability of photometric redshifts obtained with three different self-adaptive techniques, i.e. MLPQNA, Random Forest and the standard K-Nearest Neighbors models.

  10. Supernovae and Positron Annihilation

    CERN Document Server

    Milne, P A; Kinzer, R L; Leising, M D

    2002-01-01

    Radioactive nuclei, especially those created in SN explosions, have long been suggested to be important contributors to galactic positrons. In this paper we describe the findings of three independent OSSE/SMM/TGRS studies of positron annihilation radiation, demonstrating that the three studies are largely in agreement as to the distribution of galactic annihilation radiation. We then assess the predicted yields and distributions of SN-synthesized radionuclei, determining that they are marginally compatible with the findings of the annihilation radiation studies.

  11. Two-particle anomalous diffusion: probability density functions and self-similar stochastic processes.

    Science.gov (United States)

    Pagnini, Gianni; Mura, Antonio; Mainardi, Francesco

    2013-05-13

    Two-particle dispersion is investigated in the context of anomalous diffusion. Two different modelling approaches related to time subordination are considered and unified in the framework of self-similar stochastic processes. By assuming a single-particle fractional Brownian motion and that the two-particle correlation function decreases in time with a power law, the particle relative separation density is computed for the cases with time subordination directed by a unilateral M-Wright density and by an extremal Lévy stable density. Looking for desirable mathematical properties (for instance, the stationarity of the increments), the corresponding self-similar stochastic processes are represented in terms of fractional Brownian motions with stochastic variance, whose profile is modelled by using the M-Wright density or the Lévy stable density.

  12. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    Science.gov (United States)

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations fail to provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, which are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide
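
    The practical use of such a model in risk analysis can be sketched by matching a Beta pdf to the first two moments of the (normalized) concentration and evaluating an exceedance probability; the moment and threshold values below are illustrative assumptions, not results from the paper.

    ```python
    # Sketch: Beta pdf matched to the first two concentration moments, then
    # P(C > threshold) via the survival function (assumed numbers).
    from scipy import stats

    def beta_exceedance(c_mean, c_var, threshold):
        """P(C > threshold) for a Beta pdf with the given mean and variance."""
        s = c_mean * (1.0 - c_mean) / c_var - 1.0
        a, b = c_mean * s, (1.0 - c_mean) * s
        return stats.beta.sf(threshold, a, b)

    print(beta_exceedance(c_mean=0.2, c_var=0.01, threshold=0.5))
    ```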

  13. Evolution characteristics of the precipitation in the Yangtze River delta based on the probability density

    Institute of Scientific and Technical Information of China (English)

    Zeng Hong-Ling; Gao Xin-Quan; Zhang Wen

    2005-01-01

    In this paper the dryness/wetness (DW) grade data of the Yangtze River delta are transformed into the temporal evolution of precipitation probability (PP), and its hierarchically distributive characters are revealed. Research results show that the precipitation of the Yangtze River delta displays interannual and interdecadal quasiperiodic changes, as well as quasiperiods longer than a century, and all the periods have a confidence level of more than 0.05. In the DW grade series of 530 years, although the frequency of the small probability events (SPEs) of drought/flood differs among the areas of the Yangtze River delta, the frequency of the SPEs triggered by the climatic background state is the same. This result shows the significant impact of the climatic evolution as a background state upon the occurrence of SPEs, which will be instructive in climatic prediction theory and in raising the accuracy of climatic predictions.

  14. Protein distance constraints predicted by neural networks and probability density functions

    DEFF Research Database (Denmark)

    Lund, Ole; Frimand, Kenneth; Gorodkin, Jan;

    1997-01-01

    We predict interatomic C-α distances by two independent data driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the latter method consists of a neural network prediction approach equipped with windows taki...... method based on the predicted distances is presented. A homepage with software, predictions and data related to this paper is available at http://www.cbs.dtu.dk/services/CPHmodels/...

  15. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    Science.gov (United States)

    Angraini, Lily Maysari; Suparmi, Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. Lowering and raising operators can be obtained using the relationship between the original Hamiltonian equation and the (super)potential equation. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the Modified Poschl Teller potential. The wave function and probability density are plotted using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanical operator can be calculated analytically using the integral form or the probability density graph produced by the program.

  16. Complex Probability Distributions: A Solution for the Long-Standing Problem of QCD at Finite Density

    CERN Document Server

    Azcoiti, V

    1996-01-01

    We show how the prescription of taking the absolute value of the fermion determinant in the integration measure of QCD at finite density, forgetting its phase, reproduces the correct thermodynamical limit. This prescription, which applies also to other gauge theories with non-positive-definite integration measure, also has the advantage of killing finite size effects due to extremely small mean values of the cosine of the phase of the fermion determinant. We also give an explanation for the pathological behaviour of quenched QCD at finite density.

  17. Criticality of the net-baryon number probability distribution at finite density

    Directory of Open Access Journals (Sweden)

    Kenji Morita

    2015-02-01

    Full Text Available We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T<1, the model exhibits the chiral crossover transition which belongs to the universality class of the O(4) spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net-baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.
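
    The Skellam comparison can be sketched as follows with synthetic event-by-event data (a plain Poisson difference, so no criticality is present and the ratio stays near unity); the Skellam parameters are fixed by matching the sample mean and variance, as in the ratios described above.

    ```python
    # Sketch: ratio of an observed (here, synthetic) net-baryon distribution
    # P(N) to a Skellam distribution with the same mean and variance.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    N = rng.poisson(12.0, 200000) - rng.poisson(10.0, 200000)   # toy net-baryon numbers

    mean, var = N.mean(), N.var()
    mu1, mu2 = 0.5 * (var + mean), 0.5 * (var - mean)           # Skellam parameters

    values, counts = np.unique(N, return_counts=True)
    p_obs = counts / counts.sum()
    p_skellam = stats.skellam.pmf(values, mu1, mu2)
    for v, r in zip(values[::5], (p_obs / p_skellam)[::5]):
        print(f"N = {int(v):+d}: P(N)/Skellam = {r:.3f}")
    ```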

  18. Probability Density Estimation for Non-flat Functions

    Institute of Scientific and Technical Information of China (English)

    汪洪桥; 蔡艳宁; 付光远; 王仕成

    2016-01-01

    Aiming at the probability density estimation problem for non-flat functions, this paper constructs a single-slack-factor multi-scale kernel support vector machine (SVM) probability density estimation model by improving the form of the constraint condition of the traditional SVM model and introducing the multi-scale kernel method. In the model, a single slack factor instead of two types of slack factors is used to control the learning error of the SVM, which reduces the computational complexity of the model. At the same time, by introducing the multi-scale kernel method, the model can fit well both regions where the function changes sharply and regions where it changes smoothly. Probability density estimation experiments with several typical non-flat functions show that the single-slack-factor model learns faster than the common SVM probability density estimation model, and that, compared with the single-kernel method, the multi-scale kernel SVM probability density estimation model achieves better estimation precision.

  19. Probability density functions for the variable solar wind near the solar cycle minimum

    CERN Document Server

    Vörös; Leitner, M; Narita, Y; Consolini, G; Kovács, P; Tóth, A; Lichtenberger, J

    2015-01-01

    Unconditional and conditional statistics are used to study the histograms of magnetic field multi-scale fluctuations in the solar wind near the solar cycle minimum in 2008. The unconditional statistics involve the magnetic data for the whole year 2008. The conditional statistics involve the magnetic field time series split into concatenated subsets of data according to a threshold in dynamic pressure. The threshold separates fast-stream leading-edge compressional fluctuations from trailing-edge uncompressional fluctuations. The histograms obtained from these data sets are associated with both large-scale (B) and small-scale ({\delta}B) magnetic fluctuations, the latter corresponding to time-delayed differences. It is shown here that, by keeping flexibility but avoiding unnecessary redundancy in modeling, the histograms can be effectively described by a limited set of theoretical probability distribution functions (PDFs), such as the normal, log-normal, kappa and log-kappa functions. In a statistical sense the...

  20. Blue duiker Philantomba monticola densities in the Tsitsikamma National Park and probable factors limiting these populations

    Directory of Open Access Journals (Sweden)

    N. Hanekom

    1991-09-01

    Full Text Available Numbers of blue duikers recorded on 157 and 28 variable-width transect counts, done over a two-year period in the Tsitsikamma Coastal National Park (TCNP) and Tsitsikamma Forest National Park (TFNP) respectively, did not differ significantly (P > 0,10) with seasons (summer v. winter). Population density estimates from transects were similar to those from game drives (0,18 v. 0,19 duikers/ha in the TCNP and 0,13 v. 0,17 duikers/ha in the TFNP; P > 0,10), higher than those from faecal pellet counts (P < 0,10), and at least three times lower than estimates from the Kenneth Stainbank Nature Reserve and Umdoni Park in Natal. Factors contributing to the low population densities in the Tsitsikamma national parks were investigated. Twenty-seven and seven percent of the leopard (25) and caracal (12) scats analyzed, respectively, contained blue duiker remains, but predator numbers appear to be low. Forest characteristics were investigated, and results from this and other studies suggest that undergrowth cover does not markedly influence blue duiker densities in the southern Cape forests. Field and stomach analyses indicate that blue duikers feed primarily on freshly fallen leaves and fruit, and are selective foragers. In the Tsitsikamma national parks (TNPs) the frequency of occurrence of trees known to be palatable to duikers is low, while less than 45 percent of the dominant tree species fruit fully annually. This apparent scarcity of food, the low numbers of antelope species and individuals in these forests, and results from duiker research in Zaire suggest that habitat rather than predation is limiting duiker numbers in the Tsitsikamma national parks.

  1. Fusing probability density function into Dempster-Shafer theory of evidence for the evaluation of water treatment plant.

    Science.gov (United States)

    Chowdhury, Shakhawat

    2013-05-01

    The evaluation of the status of a municipal drinking water treatment plant (WTP) is important. The evaluation depends on several factors, including, human health risks from disinfection by-products (R), disinfection performance (D), and cost (C) of water production and distribution. The Dempster-Shafer theory (DST) of evidence can combine the individual status with respect to R, D, and C to generate a new indicator, from which the overall status of a WTP can be evaluated. In the DST, the ranges of different factors affecting the overall status are divided into several segments. The basic probability assignments (BPA) for each segment of these factors are provided by multiple experts, which are then combined to obtain the overall status. In assigning the BPA, the experts use their individual judgments, which can impart subjective biases in the overall evaluation. In this research, an approach has been introduced to avoid the assignment of subjective BPA. The factors contributing to the overall status were characterized using the probability density functions (PDF). The cumulative probabilities for different segments of these factors were determined from the cumulative density function, which were then assigned as the BPA for these factors. A case study is presented to demonstrate the application of PDF in DST to evaluate a WTP, leading to the selection of the required level of upgradation for the WTP.
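
    The core idea of replacing expert-assigned BPAs with cumulative probabilities can be sketched in a few lines; the lognormal model for the risk factor and the segment boundaries below are assumptions for illustration, not values from the case study.

    ```python
    # Sketch: basic probability assignments (BPAs) for factor segments taken
    # from a fitted PDF/CDF instead of expert judgment (assumed lognormal model).
    from scipy import stats

    # Hypothetical factor "R" (health risk) modelled by a lognormal distribution
    R = stats.lognorm(s=0.8, scale=1e-5)

    # Segment boundaries defining, e.g., "low", "medium", "high" risk (assumed)
    edges = [0.0, 5e-6, 2e-5, float("inf")]
    labels = ["low", "medium", "high"]

    bpa = {lab: R.cdf(hi) - R.cdf(lo)
           for lab, lo, hi in zip(labels, edges[:-1], edges[1:])}
    print(bpa)   # masses sum to 1 and can feed Dempster's combination rule
    ```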

  2. An extended SMLD approach for presumed probability density function in flamelet combustion model

    CERN Document Server

    Coclite, Alessandro; De Palma, Pietro; Cutrone, Luigi

    2013-01-01

    This paper provides an extension of the standard flamelet progress variable (FPV) approach for turbulent combustion, applying the statistically most likely distribution (SMLD) framework to the joint PDF of the mixture fraction, Z, and the progress variable, C. In this way one does not need to make any assumption about the statistical correlation between Z and C or about the behaviour of the mixture fraction, as required in previous FPV models. In fact, in state-of-the-art models with the assumption of very fast chemistry, Z is widely accepted to behave as a passive scalar characterized by a $\beta$-distribution function. Instead, the model proposed here evaluates the most probable joint distribution of Z and C without any assumption on their behaviour and provides an effective tool to verify the adequateness of widely used hypotheses, such as their statistical independence. The model is validated against three well-known test cases, namely, the Sandia flames. The results are compared with those obtained by ...

  3. Non-stationary random vibration analysis of a 3D train-bridge system using the probability density evolution method

    Science.gov (United States)

    Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei

    2016-03-01

    Rail irregularity is one of the main sources causing train-bridge random vibration. A new random vibration theory for the coupled train-bridge systems is proposed in this paper. First, number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of rail irregularity power spectrum density was adopted to determine the representative points of spatial frequencies and phases to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with the slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Eventually, the Newmark-β integration method and double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge with excitation of random rail irregularity. The results showed that compared to the Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.

  4. Stochastic Geomorphology: Indexing Climate Change Through Shifts in Probability Densities of Erosion, Sediment Flux, Storage and Habitats

    Science.gov (United States)

    Benda, L. E.

    2009-12-01

    Stochastic geomorphology refers to the interaction of the stochastic field of sediment supply with hierarchically branching river networks, where erosion, sediment flux and storage are described by their probability densities. The conceptual and numerical framework can generate a series of general principles (hypotheses) on how basin-scale erosion and sedimentation regimes, viewed through the lens of probability distributions, change with variations in climate, topography, geology, vegetation, basin scale, and network topology; for more detail on the general principles see AGU session EP02. The conceptual and numerical framework of stochastic geomorphology is well suited for forecasting and interpreting the effects of climate change on geomorphological systems, including the habitats associated with them. Climate change involves shifts in probability distributions of precipitation (rain and snow), fires, and wind. Consequently, shifts in distributions of precipitation frequency and magnitude or wildfire frequency, intensity and size should lead to shifts in erosion, sediment flux and sedimentation distributions. Shifts could include either a greater or lesser skew of their attendant probability densities. For example, increasing the frequency of fires in a stochastic simulation model of erosion and sedimentation will lead to altered frequency and magnitude of hillslope erosion in the form of pulses of sediment through the river network. This will be reflected in shifts in the probability densities of erosion and sedimentation and also in how sediment flux and storage distributions evolve downstream in river networks. Heightened erosion frequency and magnitude due to climate change can increase Hurst effects in time series of sediment flux, and thus an increase in depletion of hillslope stores of sediment can result in temporally lingering sedimentation effects throughout river networks, even if the climate relaxes to pre-change conditions. Similarly, heightened hillslope

  5. Contribution from S and P waves in pp annihilation at rest

    CERN Document Server

    Bendiscioli, G; Fontana, A; Montagna, P; Rotondi, A; Salvini, P; Bertin, A; Bruschi, M; Capponi, M; De Castro, S; Donà, R; Galli, D; Giacobbe, B; Marconi, U; Massa, I; Piccinini, M; Cesari, N S; Spighi, R; Vecchi, S; Vagnoni, V M; Villa, M; Vitale, A; Zoccoli, A; Bianconi, A; Bonomi, G; Lodi-Rizzini, E; Venturelli, L; Zenoni, A; Cicalò, C; De Falco, A; Masoni, A; Puddu, G; Serci, S; Usai, G L; Gorchakov, O E; Prakhov, S N; Rozhdestvensky, A M; Tretyak, V I; Poli, M; Gianotti, P; Guaraldo, C; Lanaro, A; Lucherini, V; Petrascu, C; Kudryavtsev, A E; Balestra, F; Bussa, M P; Busso, L; Cerello, P G; Denisov, O Yu; Ferrero, L; Grasso, A; Maggiora, A; Panzarasa, A; Panzieri, D; Tosello, F; Botta, E; Bressani, Tullio; Calvo, D; Costa, S; D'Isep, D; Feliciello, A; Filippi, A; Marcello, S; Mirfakhraee, N; Agnello, M; Iazzi, F; Minetti, B; Tessaro, S

    2001-01-01

    The annihilation frequencies of 19 pp annihilation reactions at rest obtained at different target densities are analysed in order to determine the values of the P-wave annihilation percentage at each target density and the average hadronic branching ratios from P- and S-states. Both the assumption of linear dependence of the annihilation frequencies on the P-wave annihilation percentage of the protonium state and the approach with the enhancement factors of Batty (1989) are considered. Furthermore, the cases of incompatible measurements are discussed. (55 refs).

  6. Application of maximum likelihood to direct methods: the probability density function of the triple-phase sums. XI.

    Science.gov (United States)

    Rius, Jordi

    2006-09-01

    The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].

  7. Finite-size scaling of the magnetization probability density for the critical Ising model in slab geometry

    Science.gov (United States)

    Lopes Cardozo, David; Holdsworth, Peter C. W.

    2016-04-01

    The magnetization probability density in d = 2 and 3 dimensional Ising models in slab geometry of volume $L_\parallel^{d-1} \times L_\perp$ is computed through Monte-Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect ratio $\rho = L_\perp / L_\parallel$ and boundary conditions are discussed. In the limiting case $\rho \to 0$ of a macroscopically large slab ($L_\parallel \gg L_\perp$) the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.

  8. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    Science.gov (United States)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  9. Radiative corrections to neutralino annihilation. Recent developments

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Bjoern

    2010-11-15

    Evaluating the relic density of dark matter is an interesting possibility to constrain the parameter space of new physics models. However, this calculation is affected by several sources of uncertainty. On the particle physics side, considerable progress has been made in recent years concerning the calculation of the annihilation cross-section of dark matter, which is needed in this context. In particular, within the Minimal Supersymmetric Standard Model, the theoretical uncertainty has been reduced through the calculation of loop corrections. The present contribution gives an overview of the achievements that have been made in QCD corrections to neutralino pair annihilation. The numerical impact is illustrated for a few examples. (orig.)

  10. Estimation of probability density functions of damage parameter for valve leakage detection in reciprocating pump used in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo [Div. of Mechanical Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.

  11. Unit-Sphere Anisotropic Multiaxial Stochastic-Strength Model Probability Density Distribution for the Orientation of Critical Flaws

    Science.gov (United States)

    Nemeth, Noel

    2013-01-01

    Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.

  12. Antineutron-nucleus annihilation

    CERN Document Server

    Botta, E

    2001-01-01

    The n-nucleus annihilation process has been studied by the OBELIX experiment at the CERN Low Energy Antiproton Ring (LEAR) in the (50-400) MeV/c projectile momentum range on C, Al, Cu, Ag, Sn, and Pb nuclear targets. A systematic survey of the annihilation cross-section, σ_α(A, p_n), has been performed, obtaining information on its dependence on the target mass number and on the incoming n momentum. For the first time the mass number dependence of the (inclusive) final state composition of the process has been analyzed. Production of the ρ vector meson has also been examined. (13 refs).

  13. Antiproton Annihilation Propulsion

    Science.gov (United States)

    1985-09-01

    propulsion system, a nuclear thermal hydrogen propulsion system, and an antiproton annihilation propulsion system. Since hauling chemical fuel into low...greater. Section 8.4 and Appendix B contain a comparative cost study of a storable chemical fuel propulsion system, a liquid oxygen/liquid hydrogen

  14. Positron annihilation microprobe

    Energy Technology Data Exchange (ETDEWEB)

    Canter, K.F. [Brandeis Univ., Waltham, MA (United States)]

    1997-03-01

    Advances in positron annihilation microprobe development are reviewed. The present resolution achievable is 3 μm. The ultimate resolution is expected to be 0.1 μm, which will enable the positron microprobe to be a valuable tool in the development of 0.1 μm scale electronic devices in the future. (author)

  15. Flavored Co-annihilations

    CERN Document Server

    Choudhury, Debtosh; Vempati, Sudhir K

    2011-01-01

    In minimal supergravity (mSUGRA) or the CMSSM, one of the main co-annihilating partners of the neutralino is the right-handed stau, $\tilde{\tau}_R$. In the presence of flavor violation in the right-handed sector, the co-annihilating partner would be a flavor-mixed state. The flavor effect is twofold: (a) it changes the mass of the $\tilde{\tau}_{1}$, thus modifying the parameter space of the co-annihilation, and (b) flavor-violating scatterings could now contribute to the cross-sections in the early universe. In fact, it is shown that for large enough $\delta \sim 0.2$, these processes would constitute the dominant channels in co-annihilation regions. The amount of flavor mixing permissible is constrained by flavor-violating $\tau \to \mu$ or $\tau \to e$ processes. For $\Delta_{RR}$ mass insertions, the constraints from flavor violation are not strong enough in some regions of the parameter space due to partial cancellations in the amplitudes. In mSUGRA, the regions with cancellations within LFV amplitudes do no...

  16. Efficient simulation of density and probability of large deviations of sum of random vectors using saddle point representations

    CERN Document Server

    Dey, Santanu

    2012-01-01

    We consider the problem of efficient simulation estimation of the density function at the tails, and the probability of large deviations for a sum of independent, identically distributed, light-tailed and non-lattice random vectors. The latter problem besides being of independent interest, also forms a building block for more complex rare event problems that arise, for instance, in queuing and financial credit risk modeling. It has been extensively studied in literature where state independent exponential twisting based importance sampling has been shown to be asymptotically efficient and a more nuanced state dependent exponential twisting has been shown to have a stronger bounded relative error property. We exploit the saddle-point based representations that exist for these rare quantities, which rely on inverting the characteristic functions of the underlying random vectors. We note that these representations reduce the rare event estimation problem to evaluating certain integrals, which may via importance ...

  17. Lie symmetry analysis of the Lundgren–Monin–Novikov equations for multi-point probability density functions of turbulent flow

    Science.gov (United States)

    Wacławczyk, M.; Grebenev, V. N.; Oberlack, M.

    2017-04-01

    The problem of turbulence statistics described by the Lundgren–Monin–Novikov (LMN) hierarchy of integro-differential equations is studied in terms of its group properties. For this we perform a Lie group analysis of a truncated LMN chain which presents the first two equations in an infinite set of integro-differential equations for the multi-point probability density functions (pdf’s) of velocity. A complete set of point transformations is derived for the one-point pdf’s and the independent variables: sample space of velocity, space and time. For this purpose we use a direct method based on the canonical Lie–Bäcklund operator. Due to the one-way coupling of correlation equations, the present results are complete in the sense that no additional symmetries exist for the first leading equation, even if the full infinite hierarchy is considered.

  18. Charged-Particle Thermonuclear Reaction Rates: II. Tables and Graphs of Reaction Rates and Probability Density Functions

    CERN Document Server

    Iliadis, Christian; Champagne, Art; Coc, Alain; Fitzgerald, Ryan

    2010-01-01

    Numerical values of charged-particle thermonuclear reaction rates for nuclei in the A=14 to 40 region are tabulated. The results are obtained using a method, based on Monte Carlo techniques, that has been described in the preceding paper of this series (Paper I). We present a low rate, median rate and high rate which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. The meaning of these quantities is in general different from the commonly reported, but statistically meaningless expressions, "lower limit", "nominal value" and "upper limit" of the total reaction rate. In addition, we approximate the Monte Carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the lognormal parameters μ and σ at each temperature. We also provide a quantitative measure (Anderson-Darling test statistic) for the reliability of the lognormal approximation. The user can implement the approximate lognormal reaction rat...
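
    A hedged sketch of the kind of post-processing the record describes: given a Monte Carlo sample of a reaction rate at one temperature, report the 0.16/0.50/0.84 quantiles, the lognormal parameters μ and σ, and an Anderson-Darling statistic for the normality of ln(rate). The synthetic sample stands in for the tabulated rates; this is not the authors' code.

```python
# Illustrative sketch (not the paper's code): approximate a Monte Carlo
# reaction-rate sample by a lognormal distribution and report the
# 0.16 / 0.50 / 0.84 quantiles together with an Anderson-Darling
# statistic measuring how well ln(rate) follows a normal distribution.
import numpy as np
from scipy import stats

def summarize_rate_samples(rates):
    rates = np.asarray(rates)
    log_r = np.log(rates)
    mu, sigma = log_r.mean(), log_r.std(ddof=1)   # lognormal parameters
    low, med, high = np.quantile(rates, [0.16, 0.50, 0.84])
    ad = stats.anderson(log_r, dist="norm")       # normality of ln(rate)
    return {"mu": mu, "sigma": sigma,
            "low": low, "median": med, "high": high,
            "AD_statistic": ad.statistic}

# Fake Monte Carlo sample standing in for a tabulated rate at one temperature.
rng = np.random.default_rng(1)
sample = rng.lognormal(mean=-20.0, sigma=0.3, size=10000)
print(summarize_rate_samples(sample))
```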

  19. Classical probability density distributions with uncertainty relations for ground states of simple non-relativistic quantum-mechanical systems

    Science.gov (United States)

    Radożycki, Tomasz

    2016-11-01

    The probability density distributions for the ground states of certain model systems in quantum mechanics and for their classical counterparts are considered. It is shown that classical distributions are remarkably improved by incorporating into them the Heisenberg uncertainty relation between position and momentum. Even the crude form of this incorporation makes the agreement between classical and quantum distributions unexpectedly good, except for the small region where classical momenta are large. It is demonstrated that a slight refinement of this form makes the classical distribution very similar to the quantum one in the whole space. The obtained results are much better than those from the WKB method. The paper is devoted to ground states, but the method applies to excited states too.

  20. The probability density function tail of the Kardar-Parisi-Zhang equation in the strongly non-linear regime

    Science.gov (United States)

    Anderson, Johan; Johansson, Jonas

    2016-12-01

    An analytical derivation of the probability density function (PDF) tail describing the strongly correlated interface growth governed by the nonlinear Kardar-Parisi-Zhang equation is provided. The PDF tail exactly coincides with a Tracy-Widom distribution, i.e. a PDF tail proportional to $\exp\left(-c\,w_2^{3/2}\right)$, where $w_2$ is the width of the interface. The PDF tail is computed by the instanton method in the strongly non-linear regime within the Martin-Siggia-Rose framework using a careful treatment of the non-linear interactions. In addition, the effect of spatial dimensions on the PDF tail scaling is discussed. This gives a novel approach to understand the rightmost PDF tail of the interface width distribution and the analysis suggests that there is no upper critical dimension.

  1. Large-eddy simulation/probability density function modeling of local extinction and re-ignition in Sandia Flame E

    Science.gov (United States)

    Wang, Haifeng; Popov, Pavel; Hiremath, Varun; Lantz, Steven; Viswanathan, Sharadha; Pope, Stephen

    2010-11-01

    A large-eddy simulation (LES)/probability density function (PDF) code is developed and applied to the study of local extinction and re-ignition in Sandia Flame E. The modified Curl mixing model is used to account for the sub-filter scalar mixing; the ARM1 mechanism is used for the chemical reaction; and the in-situ adaptive tabulation (ISAT) algorithm is used to accelerate the chemistry calculations. Calculations are performed on different grids to study the resolution requirement for this flame. Then, with sufficient grid resolution, full-scale LES/PDF calculations are performed to study the flame characteristics and the turbulence-chemistry interactions. Sensitivity to the mixing frequency model is explored in order to understand the behavior of sub-filter scalar mixing in the context of LES. The simulation results are compared to the experimental data to demonstrate the capability of the code. Comparison is also made to previous RANS/PDF simulations.

  2. On fading probability density functions of fast-tracked and untracked free-space optical communication channels

    Science.gov (United States)

    Zhao, Zhijun; Liao, Rui

    2011-03-01

    Free-space optical (FSO) communication systems suffer from average power loss and instantaneous power fading due to the atmospheric turbulence. The channel fading probability density function (pdf) is of critical importance for FSO communication system design and evaluation. The performance and reliability of FSO communication systems can be greatly enhanced if fast-tracking devices are employed at the transmitter in order to compensate for laser beam wander at the receiver aperture. The fast-tracking method is especially effective when communication distance is long. This paper studies the fading probability density functions of both fast-tracked and untracked FSO communication channels. Large-scale wave-optics simulations are conducted for both tracked and untracked lasers. In the simulations, the Kolmogorov spectrum is adopted, and it is assumed that the outer scale is infinitely large and the inner scale is negligibly small. The fading pdfs of both fast-tracked and untracked FSO channels are obtained from the simulations. Results show that the fast-tracked channel fading can be accurately modeled as gamma-distributed if receiver aperture size is smaller than the coherence radius. An analytical method is given for calculating the untracked fading pdfs of both point-like and finite-size receiver apertures from the fast-tracked fading pdf. For point-like apertures, the analytical method gives pdfs close to the well-known gamma-gamma pdfs if off-axis effects are omitted in the formulation. When off-axis effects are taken into consideration, the untracked pdfs obtained using the analytical method fit the simulation pdfs better than gamma-gamma distributions for point-like apertures, and closely fit the simulation pdfs for finite-size apertures where gamma-gamma pdfs deviate from those of the simulations significantly.
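
    For reference, the gamma-gamma irradiance pdf that the record compares its untracked fading pdfs against can be evaluated directly; the sketch below (unit-mean irradiance, illustrative α and β) is a generic textbook implementation, not the authors' wave-optics simulation.

```python
# Sketch of the standard gamma-gamma irradiance pdf (unit mean irradiance;
# alpha and beta are the large- and small-scale scintillation parameters).
import numpy as np
from scipy.special import kv, gammaln

def gamma_gamma_pdf(I, alpha, beta):
    I = np.asarray(I, dtype=float)
    log_c = (0.5 * (alpha + beta) * np.log(alpha * beta)
             + np.log(2.0) - gammaln(alpha) - gammaln(beta))
    return np.exp(log_c + (0.5 * (alpha + beta) - 1.0) * np.log(I)) \
        * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))

I = np.linspace(0.01, 4.0, 400)
pdf = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)   # illustrative parameters
print("normalization check:", np.trapz(pdf, I))  # should be close to 1
```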

  3. Ignition Probability

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...

  4. The influence of part-word phonotactic probability/neighborhood density on word learning by preschool children varying in expressive vocabulary.

    Science.gov (United States)

    Storkel, Holly L; Hoover, Jill R

    2011-06-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (ages 2;11 to 6;0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV (i.e. body) and VC (i.e. rhyme). Learning was measured via picture naming. Children with the lowest expressive vocabulary scores showed no effect of either CV or VC probability/density, although floor effects could not be ruled out. In contrast, children with low or high expressive vocabulary scores demonstrated sensitivity to part-word probability/density with the nature of the effect varying by group. Children with the highest expressive vocabulary scores displayed yet a third pattern of part-word probability/density effects. Taken together, word learning by preschool children was influenced by part-word probability/density but the nature of this influence appeared to depend on the size of the lexicon.

  5. Constraints on dark matter annihilation to fermions and a photon

    CERN Document Server

    Chowdhury, Debtosh; Laha, Ranjan

    2016-01-01

    We consider Majorana dark matter annihilation to a fermion-antifermion pair and a photon in the effective field theory paradigm, by introducing dimension 6 and dimension 8 operators in the Lagrangian. For a given value of the cut-off scale, the latter dominates the annihilation process for heavier dark matter masses. We find a cancellation in the dark matter annihilation to a fermion-antifermion pair when considering the interference of the dimension 6 and the dimension 8 operators. Constraints on the effective cut-off scale are derived from indirect detection experiments and the relic density requirement, and are then compared to the bounds coming from collider experiments.

  6. Black Hole Window into p-Wave Dark Matter Annihilation.

    Science.gov (United States)

    Shelton, Jessie; Shapiro, Stuart L; Fields, Brian D

    2015-12-01

    We present a new method to measure or constrain p-wave-suppressed cross sections for dark matter (DM) annihilations inside the steep density spikes induced by supermassive black holes. We demonstrate that the high DM densities, together with the increased velocity dispersion, within such spikes combine to make thermal p-wave annihilation cross sections potentially visible in γ-ray observations of the Galactic center (GC). The resulting DM signal is a bright central point source with emission originating from DM annihilations in the absence of a detectable spatially extended signal from the halo. We define two simple reference theories of DM with a thermal p-wave annihilation cross section and establish new limits on the combined particle and astrophysical parameter space of these models, demonstrating that the Fermi Large Area Telescope is currently sensitive to thermal p-wave DM over a wide range of possible scenarios for the DM distribution in the GC.

  7. Magnetic Enhancements to Dark Matter Annihilation

    Science.gov (United States)

    Gardner, William G.; Tinsley, Todd

    2017-01-01

    The rate of dark matter annihilation should be greatest where the dark matter density is maximal. This is typically in the gravity wells of large stars where it also happens to be true that magnetic fields can be very large. In this poster we present an examination of how these intense magnetic fields can alter the cross section for dark matter annihilation into electron-positron pairs. We work within the framework of the minimally supersymmetric extension to the Standard Model (MSSM), and we choose its lightest neutralino as our dark matter candidate. Within this theory, dark matter can annihilate into many different final-state particles through several channels. We restrict our analysis to an electron-positron pair final state because of the low mass and reasonable detection signature. Since strong magnetic fields change how momentum is conserved for charged particles, this calculation investigates the relationship between the annihilation cross section and the electron's and positron's Landau levels. This work is supported by the NASA/Arkansas Space Grant Consortium and the Hendrix College Odyssey Program.

  8. Bubble chamber: antiproton annihilation

    CERN Multimedia

    1971-01-01

    These images show real particle tracks from the annihilation of an antiproton in the 80 cm Saclay liquid hydrogen bubble chamber. A negative kaon and a neutral kaon are produced in this process, as well as a positive pion. The invention of bubble chambers in 1952 revolutionized the field of particle physics, allowing real tracks left by particles to be seen and photographed by expanding liquid that had been heated to boiling point.

  9. Fidelity and breeding probability related to population density and individual quality in black brent geese Branta bernicla nigricans

    Science.gov (United States)

    Sedinger, J.S.; Chelgren, N.D.; Ward, D.H.; Lindberg, M.S.

    2008-01-01

    1. Patterns of temporary emigration (associated with non-breeding) are important components of variation in individual quality. Permanent emigration from the natal area has important implications for both individual fitness and local population dynamics. 2. We estimated both permanent and temporary emigration of black brent geese (Branta bernicla nigricans Lawrence) from the Tutakoke River colony, using observations of marked brent geese on breeding and wintering areas, and recoveries of ringed individuals by hunters. We used the likelihood developed by Lindberg, Kendall, Hines & Anderson 2001 (Combining band recovery data and Pollock's robust design to model temporary and permanent emigration. Biometrics, 57, 273-281) to assess hypotheses and estimate parameters. 3. Temporary emigration (the converse of breeding) varied among age classes up to age 5, and differed between individuals that bred in the previous years vs. those that did not. Consistent with the hypothesis of variation in individual quality, individuals with a higher probability of breeding in one year also had a higher probability of breeding the next year. 4. Natal fidelity of females ranged from 0.70 ± 0.07 to 0.96 ± 0.18 and averaged 0.83. In contrast to Lindberg et al. (1998), we did not detect a relationship between fidelity and local population density. Natal fidelity was negatively correlated with first-year survival, suggesting that competition among individuals of the same age for breeding territories influenced dispersal. Once females nested at the Tutakoke River, colony breeding fidelity was 1.0. 5. Our analyses show substantial variation in individual quality associated with fitness, which other analyses suggest is strongly influenced by early environment. Our analyses also suggest substantial interchange among breeding colonies of brent geese, as first shown by Lindberg et al. (1998).

  10. Assessment of probability density function based on POD reduced-order model for ensemble-based data assimilation

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru, E-mail: rkikuchi@edge.ifs.tohoku.ac.jp [Institute of Fluid Science, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai, Miyagi 980-8577 (Japan)

    2015-10-15

    An integrated method of a proper orthogonal decomposition based reduced-order model (ROM) and data assimilation is proposed for the real-time prediction of an unsteady flow field. In this paper, a particle filter (PF) and an ensemble Kalman filter (EnKF) are compared for data assimilation and the difference in the predicted flow fields is evaluated focusing on the probability density function (PDF) of the model variables. The proposed method is demonstrated using identical twin experiments of an unsteady flow field around a circular cylinder at the Reynolds number of 1000. The PF and EnKF are employed to estimate temporal coefficients of the ROM based on the observed velocity components in the wake of the circular cylinder. The prediction accuracy of ROM-PF is significantly better than that of ROM-EnKF due to the flexibility of PF for representing a PDF compared to EnKF. Furthermore, the proposed method reproduces the unsteady flow field several orders faster than the reference numerical simulation based on the Navier–Stokes equations. (paper)
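
    A minimal sketch of the ROM-PF idea described above: a bootstrap particle filter that propagates an ensemble of reduced-order coefficients, reweights them by a Gaussian observation likelihood and resamples. The `step` dynamics and observation operator `H` below are placeholders, not the paper's POD-based model.

```python
# Bootstrap particle-filter step for estimating temporal ROM coefficients
# from observed velocities. The ROM dynamics and observation operator are
# stand-ins; only the assimilation logic is illustrated.
import numpy as np

def particle_filter_step(particles, obs, step, H, obs_std, rng):
    # Propagate each particle through the (reduced-order) dynamics.
    particles = np.array([step(p) + rng.normal(0, 0.05, p.shape)
                          for p in particles])
    # Weight by the Gaussian likelihood of the observation.
    resid = obs - particles @ H.T
    logw = -0.5 * np.sum(resid**2, axis=1) / obs_std**2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample to keep the ensemble from degenerating.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

rng = np.random.default_rng(0)
step = lambda a: 0.99 * a                 # placeholder ROM dynamics
H = rng.normal(size=(5, 3))               # placeholder observation operator
particles = rng.normal(size=(200, 3))     # ensemble of POD coefficients
obs = rng.normal(size=5)
particles = particle_filter_step(particles, obs, step, H, 0.1, rng)
print("posterior mean coefficients:", particles.mean(axis=0))
```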

  11. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    Science.gov (United States)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
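
    A hedged sketch of the Weibull "maximum potential" envelope that anchors the model described above: a Weibull pdf over day-of-season, scaled by a seasonal total. The shape, scale and total below are illustrative choices, not the paper's fitted values.

```python
# Weibull pdf over days since season onset gives a relative seasonal
# envelope; multiplying by a (hypothetical) seasonal total gives a daily
# maximum pollen potential.
import numpy as np
from scipy.stats import weibull_min

def max_pollen_potential(days, shape=2.2, scale=25.0, season_total=5000.0):
    return season_total * weibull_min.pdf(days, c=shape, scale=scale)

days = np.arange(0, 60)
potential = max_pollen_potential(days)
print("peak day:", days[np.argmax(potential)],
      "peak potential (grains/m^3):", round(potential.max(), 1))
```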

  12. Assessment of probability density function based on POD reduced-order model for ensemble-based data assimilation

    Science.gov (United States)

    Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru

    2015-10-01

    An integrated method of a proper orthogonal decomposition based reduced-order model (ROM) and data assimilation is proposed for the real-time prediction of an unsteady flow field. In this paper, a particle filter (PF) and an ensemble Kalman filter (EnKF) are compared for data assimilation and the difference in the predicted flow fields is evaluated focusing on the probability density function (PDF) of the model variables. The proposed method is demonstrated using identical twin experiments of an unsteady flow field around a circular cylinder at the Reynolds number of 1000. The PF and EnKF are employed to estimate temporal coefficients of the ROM based on the observed velocity components in the wake of the circular cylinder. The prediction accuracy of ROM-PF is significantly better than that of ROM-EnKF due to the flexibility of PF for representing a PDF compared to EnKF. Furthermore, the proposed method reproduces the unsteady flow field several orders faster than the reference numerical simulation based on the Navier-Stokes equations.

  13. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    Science.gov (United States)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2016-07-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.

  14. Annihilators of nilpotent elements

    Directory of Open Access Journals (Sweden)

    Abraham A. Klein

    2005-01-01

    Full Text Available Let x be a nilpotent element of an infinite ring R (not necessarily with 1). We prove that A(x), the two-sided annihilator of x, has a large intersection with any infinite ideal I of R in the sense that card(A(x) ∩ I) = card I. In particular, card A(x) = card R; and this is applied to prove that if N is the set of nilpotent elements of R and R ≠ N, then card(R \ N) ≥ card N.

  15. Semi-Annihilating Wino-Like Dark Matter

    CERN Document Server

    Spray, Andrew P

    2015-01-01

    Semi-annihilation is a generic feature of dark matter theories with symmetries larger than Z_2. We explore a model based on a Z_4-symmetric dark sector comprised of a scalar singlet and a "wino"-like fermion SU(2)_L triplet. This is the minimal example of semi-annihilation with a gauge-charged fermion. We study the interplay of the Sommerfeld effect in both annihilation and semi-annihilation channels. The modifications to the relic density allow otherwise-forbidden regions of parameter space and can substantially weaken indirect detection constraints. We perform a parameter scan and find that the entire region where the model comprises all the observed dark matter is accessible to current and planned direct and indirect searches.

  16. Model assembly for estimating cell surviving fraction for both targeted and nontargeted effects based on microdosimetric probability densities.

    Directory of Open Access Journals (Sweden)

    Tatsuhiko Sato

    Full Text Available We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities.

  17. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction.

    Science.gov (United States)

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2015-01-07

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners: the HRRT and Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage in performing crystal-layer-dependent resolution modeling. The contrast improvement by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [(11)C]AFM rats imaged on the HRRT and [(11)C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstruction than in other methods.

  18. Model assembly for estimating cell surviving fraction for both targeted and nontargeted effects based on microdosimetric probability densities.

    Science.gov (United States)

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities.

  19. Annihilation of Antiprotons in Light Nuclei

    Institute of Scientific and Technical Information of China (English)

    M. A. Rana; E. U. Khan; M. I. Shahzad; I. E. Qureshi; F. Malik; G. Sher; S. Manzoor; H. A. Khan

    2006-01-01

    CR-39 detectors have been exposed to a 5.9-MeV antiproton beam using the low energy antiproton ring (LEAR) facility at CERN. At this energy, tracks of antiprotons appear in a CR-39 detector after 135 min of etching in 6 M NaOH at 70℃. The fluence of the antiproton beam has been determined using track density. We have also found tracks in the etched CR-39 detector at different depths (250-500μm). These tracks have resulted from the annihilation of antiprotons with the constituents (H, C and O) of the CR-39 detector. The goal of the experiment is to develop a simple and low-cost method to study properties of antiparticles and those formed after annihilation of these particles with the target matter.

  20. Biological Effectiveness of Antiproton Annihilation

    DEFF Research Database (Denmark)

    Maggiore, C.; Agazaryan, N.; Bassler, N.;

    2004-01-01

    We describe an experiment designed to determine whether or not the densely ionizing particles emanating from the annihilation of antiprotons produce an increase in ‘‘biological dose’’ in the vicinity of the narrow Bragg peak for antiprotons compared to protons. This experiment is the first direct measurement of the biological effects of antiproton annihilation. The background, description, and status of the experiment are given.

  1. Effect of Phonotactic Probability and Neighborhood Density on Word-Learning Configuration by Preschoolers with Typical Development and Specific Language Impairment

    Science.gov (United States)

    Gray, Shelley; Pittman, Andrea; Weinhold, Juliet

    2014-01-01

    Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…

  2. Positron Annihilation Studies of VVER Type Reactor Steels

    OpenAIRE

    Brauer, G.

    1995-01-01

    A summary of recent positron annihilation work on Russian VVER type reactor steels is presented. Thereby, special attention is paid to the outline of basic processes that might help to understand the positron behaviour in this class of industrial material. The idea of positron trapping by irradiation-induced precipitates, which are probably carbides, is discussed in detail.

  3. Monomer Migration and Annihilation Processes

    Institute of Scientific and Technical Information of China (English)

    KE Jian-Hong; LIN Zhen-Quan; ZHUANG You-Yi

    2005-01-01

    We propose a two-species monomer migration-annihilation model, in which monomer migration reactions occur between any two aggregates of the same species and monomer annihilation reactions occur between two different species. Based on the mean-field rate equations, we investigate the evolution behaviors of the processes. For the case with an annihilation rate kernel proportional to the sizes of the reactants, the aggregation size distribution of either species approaches the modified scaling form in the symmetrical initial case, while for the asymmetrical initial case the heavy species with large initial data scales according to the conventional form and the light one does not scale. Moreover, at most one species can survive in the end. For the case with a constant annihilation rate kernel, both species may scale according to the conventional scaling law in the symmetrical case and survive together at the end.

  4. Dark Matter Annihilation at the Galactic Center

    Energy Technology Data Exchange (ETDEWEB)

    Linden, Timothy Ryan [Univ. of California, Santa Cruz, CA (United States)

    2013-06-01

    Observations by the WMAP and PLANCK satellites have provided extraordinarily accurate observations on the densities of baryonic matter, dark matter, and dark energy in the universe. These observations indicate that our universe is composed of approximately five times as much dark matter as baryonic matter. However, efforts to detect a particle responsible for the energy density of dark matter have been unsuccessful. Theoretical models have indicated that a leading candidate for the dark matter is the lightest supersymmetric particle, which may be stable due to a conserved R-parity. This dark matter particle would still be capable of interacting with baryons via weak-force interactions in the early universe, a process which was found to naturally explain the observed relic abundance of dark matter today. These residual annihilations can persist, albeit at a much lower rate, in the present universe, providing a detectable signal from dark matter annihilation events which occur throughout the universe. Simulations calculating the distribution of dark matter in our galaxy almost universally predict the galactic center of the Milky Way Galaxy (GC) to provide the brightest signal from dark matter annihilation due to its relative proximity and large simulated dark matter density. Recent advances in telescope technology have allowed for the first multiwavelength analysis of the GC, with suitable effective exposure, angular resolution, and energy resolution in order to detect dark matter particles with properties similar to those predicted by the WIMP miracle. In this work, I describe ongoing efforts which have successfully detected an excess in γ-ray emission from the region immediately surrounding the GC, which is difficult to describe in terms of standard diffuse emission predicted in the GC region. While the jury is still out on any dark matter interpretation of this excess, I describe several related observations which may indicate a dark matter origin. Finally, I discuss the

  5. ATHENA: an actual antihydrogen annihilation

    CERN Multimedia

    2002-01-01

    This is an image of an actual matter-antimatter annihilation due to an atom of antihydrogen in the ATHENA experiment, located on the Antiproton Decelerator (AD) at CERN since 2001. The antiproton produces four charged pions (yellow) whose positions are given by silicon microstrips (pink) before depositing energy in CsI crystals (yellow cubes). The positron also annihilates to produce back-to-back gamma rays (red).

  6. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    Science.gov (United States)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
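
    Not the authors' Volterra-equation solver: as a rough cross-check of what a first-passage density looks like, the sketch below estimates the first-passage-time density of ordinary (non-time-changed) Brownian motion to a barrier b by Monte Carlo and compares it with the known closed form f(t) = b/sqrt(2*pi*t^3) * exp(-b^2/(2t)).

```python
# Crude Monte Carlo estimate of a first-passage-time density for standard
# Brownian motion hitting a barrier b, compared with the exact Levy density.
import numpy as np

def mc_first_passage_times(b=1.0, dt=1e-3, t_max=5.0, n_paths=10000, seed=0):
    rng = np.random.default_rng(seed)
    hit = np.full(n_paths, np.inf)          # inf = not yet absorbed
    x = np.zeros(n_paths)
    for k in range(1, int(t_max / dt) + 1):
        x += np.sqrt(dt) * rng.normal(size=n_paths)
        newly = np.isinf(hit) & (x >= b)
        hit[newly] = k * dt
    return hit

b, t, half = 1.0, 0.5, 0.05
hit = mc_first_passage_times(b)
mc_density = np.mean((hit > t - half) & (hit <= t + half)) / (2 * half)
exact = b / np.sqrt(2 * np.pi * t**3) * np.exp(-b**2 / (2 * t))
print("exact density at t=0.5:", round(exact, 3), " MC estimate:", round(mc_density, 3))
```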

  7. The Sherrington-Kirkpatrick spin glass model in the presence of a random field with a joint Gaussian probability density function for the exchange interactions and random fields

    Science.gov (United States)

    Hadjiagapiou, Ioannis A.

    2014-03-01

    The magnetic systems with disorder form an important class of systems, which are under intensive studies, since they reflect real systems. Such a class of systems is the spin glass one, which combines randomness and frustration. The Sherrington-Kirkpatrick Ising spin glass with random couplings in the presence of a random magnetic field is investigated in detail within the framework of the replica method. The two random variables (exchange integral interaction and random magnetic field) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ. The thermodynamic properties and phase diagrams are studied with respect to the natural parameters of both random components of the system contained in the probability density. The de Almeida-Thouless line is explored as a function of temperature, ρ and other system parameters. The entropy for zero temperature as well as for non zero temperatures is partly negative or positive, acquiring positive branches as h0 increases.
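
    The disorder ensemble in the record is a joint Gaussian for the exchange couplings and random fields with correlation coefficient ρ; the sketch below simply draws such correlated pairs and checks the empirical correlation. Means, widths and ρ are arbitrary illustrative numbers.

```python
# Draw (coupling, field) pairs from a bivariate Gaussian with correlation rho.
import numpy as np

def sample_joint_disorder(n_pairs, J0=0.0, sigma_J=1.0,
                          h0=0.0, sigma_h=0.5, rho=0.3, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.array([[sigma_J**2,              rho * sigma_J * sigma_h],
                    [rho * sigma_J * sigma_h, sigma_h**2]])
    draws = rng.multivariate_normal([J0, h0], cov, size=n_pairs)
    return draws[:, 0], draws[:, 1]      # couplings J, fields h

J, h = sample_joint_disorder(100000, rho=0.3)
print("empirical correlation:", round(np.corrcoef(J, h)[0, 1], 3))  # ~0.3
```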

  8. Probability Density and Statistical Properties for a Three-State Markovian Noise and Escape of Particles for a System Driven by This Noise

    Institute of Scientific and Technical Information of China (English)

    LI Jing-Hui

    2008-01-01

    A three-state Markovian noise is investigated, and its probability density and statistical properties are obtained. The escape of particles over a potential barrier in a system driven only by this noise is then studied. It is shown that in some circumstances this noise can make the particles escape over the potential barrier, while in other circumstances it cannot. A resonant activation phenomenon appears for the system considered.
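
    As a generic illustration (the paper's specific transition rates are not reproduced), the sketch below simulates a three-state continuous-time Markovian noise from a placeholder generator matrix Q and estimates its stationary occupation probabilities, which determine the probability density of the noise.

```python
# Simulate a three-state continuous-time Markov chain and estimate the
# fraction of time spent in each state (stationary occupation probabilities).
import numpy as np

def simulate_three_state(Q, t_total=1e4, seed=0):
    """Q is a 3x3 generator matrix (rows sum to zero)."""
    rng = np.random.default_rng(seed)
    state, t = 0, 0.0
    occupation = np.zeros(3)
    while t < t_total:
        rate = -Q[state, state]
        dwell = rng.exponential(1.0 / rate)
        occupation[state] += dwell
        t += dwell
        # Jump to one of the other states with probability Q[state, j] / rate.
        probs = Q[state].copy(); probs[state] = 0.0; probs /= rate
        state = rng.choice(3, p=probs)
    return occupation / occupation.sum()

Q = np.array([[-1.0, 0.6, 0.4],
              [ 0.5, -1.2, 0.7],
              [ 0.3, 0.9, -1.2]])     # placeholder rates
print("stationary occupation probabilities ~", simulate_three_state(Q).round(3))
```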

  9. Joint Behaviour of Semirecursive Kernel Estimators of the Location and of the Size of the Mode of a Probability Density Function

    Directory of Open Access Journals (Sweden)

    Abdelkader Mokkadem

    2011-01-01

    Full Text Available Consider the location and the size of the mode of a probability density. We study the joint convergence rates of semirecursive kernel estimators of these two quantities. We show how the estimation of the size of the mode allows measuring the relevance of the estimation of its location. We also point out that, beyond their computational advantage over nonrecursive estimators, the semirecursive estimators are preferable for the construction of confidence regions.
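
    A rough illustration of the estimators discussed above: a semirecursive kernel density estimate in which each observation enters with its own bandwidth h_i = i^(-a), from which the mode location and the mode size are read off. The bandwidth exponent and the Gaussian test sample are arbitrary choices, not those analysed in the paper.

```python
# Semirecursive kernel density estimate: each observation keeps its own
# bandwidth, so the estimate can be updated without revisiting past terms.
import numpy as np

def semirecursive_kde(data, grid, a=0.25):
    f = np.zeros_like(grid)
    for i, x_i in enumerate(data, start=1):
        h_i = i ** (-a)
        f += np.exp(-0.5 * ((grid - x_i) / h_i) ** 2) / (h_i * np.sqrt(2 * np.pi))
    return f / len(data)

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=5000)
grid = np.linspace(-2.0, 6.0, 801)
f_hat = semirecursive_kde(data, grid)
theta_hat = grid[np.argmax(f_hat)]        # estimated mode location
mu_hat = f_hat.max()                      # estimated mode size
print("mode location ~", round(theta_hat, 2), " mode size ~", round(mu_hat, 3))
```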

  10. Probability density evolution method of fatigue strength-life relationship

    Institute of Scientific and Technical Information of China (English)

    徐亚洲; 白国良

    2013-01-01

    Taking the degradation of fatigue strength with increasing loading cycles into account, the joint probability density evolution equation of fatigue strength and random factors was derived using an Euler description. A probability density S-N (p-S-N) surface was obtained with a numerical method and used to calculate the p-S-N curve for a given survivability. Analysis based on experimental fatigue data indicated that p-S-N curves with 95% survivability predicted by the p-S-N surface, Monte Carlo simulation, and an S-N relationship having parameters with a given fractile agree well; the p-S-N surface provides a complete probabilistic description of the S-N relationship, independent of a probability distribution assumption.

  11. Sommerfeld enhancement of invisible dark matter annihilation in galaxies and galaxy clusters

    CERN Document Server

    Chan, Man Ho

    2016-01-01

    Recent observations indicate that core-like dark matter structures exist in many galaxies, while numerical simulations reveal a singular dark matter density profile at the center. In this article, I show that if the annihilation of dark matter particles gives invisible sterile neutrinos, the Sommerfeld enhancement of the annihilation cross-section can give a sufficiently large annihilation rate to solve the core-cusp problem. The resultant core density, core radius, and their scaling relation generally agree with recent empirical fits from observations. Also, this model predicts that the resultant core-like structures in dwarf galaxies can be easily observed, but not for large normal galaxies and galaxy clusters.

  12. Biological effectiveness of antiproton annihilation

    CERN Document Server

    Holzscheiter, Michael H.; Bassler, Niels; Beyer, Gerd; De Marco, John J.; Doser, Michael; Ichioka, Toshiyasu; Iwamoto, Keisuke S.; Knudsen, Helge V.; Landua, Rolf; Maggiore, Carl; McBride, William H.; Møller, Søren Pape; Petersen, Jorgen; Smathers, James B.; Skarsgard, Lloyd D.; Solberg, Timothy D.; Uggerhøj, Ulrik I.; Withers, H.Rodney; Vranjes, Sanja; Wong, Michelle; Wouters, Bradly G.

    2004-01-01

    We describe an experiment designed to determine whether or not the densely ionizing particles emanating from the annihilation of antiprotons produce an increase in “biological dose” in the vicinity of the narrow Bragg peak for antiprotons compared to protons. This experiment is the first direct measurement of the biological effects of antiproton annihilation. The experiment has been approved by the CERN Research Board for running at the CERN Antiproton Decelerator (AD) as AD-4/ACE (Antiproton Cell Experiment) and has begun data taking in June of 2003. The background, description and the current status of the experiment are given.

  13. Biological effectiveness of antiproton annihilation

    DEFF Research Database (Denmark)

    Holzscheiter, M.H.; Agazaryan, N.; Bassler, Niels;

    2004-01-01

    We describe an experiment designed to determine whether or not the densely ionizing particles emanating from the annihilation of antiprotons produce an increase in ‘‘biological dose’’ in the vicinity of the narrow Bragg peak for antiprotons compared to protons. This experiment is the first direct measurement of the biological effects of antiproton annihilation. The experiment has been approved by the CERN Research Board for running at the CERN Antiproton Decelerator (AD) as AD-4/ACE (Antiproton Cell Experiment) and has begun data taking in June of 2003. The background, description and the current status of the experiment are given.

  14. Positron life time and annihilation Doppler broadening measurements on transition metal complexes

    Energy Technology Data Exchange (ETDEWEB)

    Levay, B. (Eoetvoes Lorand Tudomanyegyetem, Budapest (Hungary). Fizikai Kemiai es Radiologiai Tanszek); Varhelyi, Cs. (Babes-Bolyai Univ., Cluj (Romania)); Burger, K. (Eoetvoes Lorand Tudomanyegyetem, Budapest (Hungary). Szervetlen es Analitikai Kemiai Intezet)

    1982-01-01

    Positron life time and annihilation Doppler broadening measurements have been carried out on 44 solid coordination compounds. Several correlations have been found between the annihilation life time (τ_1) and line shape parameters (L) and the chemical structure of the compounds. Halide ligands were the most active towards positrons. This fact supports the assumption on the possible formation of a (e^+ X^-) positron-halide bound state. The life time decreased and the annihilation energy spectra broadened with the increasing negative character of the halides. The aromatic base ligands affected the positron-halide interaction according to their basicity and space requirement and thus they indirectly affected the annihilation parameters, too. In the planar and tetrahedral complexes the electron density on the central metal ion directly affected the annihilation parameters, while in the octahedral mixed complexes it had only an indirect effect through the polarization of the halide ligands.

  15. H2: entanglement, probability density function, confined Kratzer oscillator, universal potential and (Mexican hat- or bell-type) potential energy curves

    CERN Document Server

    Van Hooydonk, G

    2011-01-01

    We review harmonic oscillator theory for closed, stable quantum systems. The H2 potential energy curve (PEC) of Mexican hat-type, calculated with a confined Kratzer oscillator, is better than the Rydberg-Klein-Rees (RKR) H2 PEC. Compared with QM, the theory of chemical bonding is simplified, since a confined Kratzer oscillator gives the long sought for universal function, once called the Holy Grail of Molecular Spectroscopy. This is validated with HF, I2, N2 and O2 PECs. We quantify the entanglement of spatially separated H2 quantum states, which gives a braid view. The equal probability for H2, originating either from HA+HB or HB+HA, is quantified with a Gauss probability density function. At the Bohr scale, confined harmonic oscillators behave properly at all extremes of bound two-nucleon quantum systems and are likely to be useful also at the nuclear scale.

  16. Coverage Accuracy of Confidence Intervals for a Conditional Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    雷庆祝; 秦永松

    2007-01-01

    Point-wise confidence intervals for a conditional probability density function are considered. The confidence intervals are based on the empirical likelihood. Their coverage accuracy is assessed by developing Edgeworth expansions for the coverage probabilities. It is shown that the empirical likelihood confidence intervals are Bartlett correctable.

  17. ICA Blind Signal Separation Based on a New Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    张娟娟; 邸双亮

    2014-01-01

    This paper is concerned with the blind source separation (BSS) problem for mixtures of super-Gaussian and sub-Gaussian signals, using the maximum likelihood method based on independent component analysis (ICA). We construct a new type of probability density function (PDF), different from the PDFs used to separate mixed signals in previously published work. Applying the newly constructed PDF to estimate the probability density of super-Gaussian and sub-Gaussian signals (assuming the source signals are independent of each other), it is not necessary to change the parameter values artificially, and the separation can be performed adaptively. Numerical experiments verify the feasibility of the newly constructed PDF; both the convergence time and the separation quality are improved compared with the original algorithm.
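
    The paper's newly constructed PDF is not reproduced here; the sketch below only shows where a source-density model enters a maximum-likelihood ICA algorithm, using the standard natural-gradient update with a tanh score function and two super-Gaussian (Laplacian) test sources.

```python
# Natural-gradient maximum-likelihood ICA with a generic tanh score function.
# The score (derivative of the log source pdf) is where a custom pdf would go.
import numpy as np

def ica_ml(X, n_iter=300, lr=0.1, seed=0):
    """X: (n_sources, n_samples) zero-mean mixed signals."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n)) * 0.1 + np.eye(n)
    for _ in range(n_iter):
        Y = W @ X
        g = np.tanh(Y)                           # score from an assumed source pdf
        dW = (np.eye(n) - g @ Y.T / X.shape[1]) @ W   # natural-gradient ML update
        W += lr * dW
    return W

rng = np.random.default_rng(1)
s = np.vstack([rng.laplace(size=5000), rng.laplace(size=5000)])  # super-Gaussian sources
s -= s.mean(axis=1, keepdims=True)
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # mixing matrix
W = ica_ml(A @ s)
# Each row of W @ A should pick out one source (up to scale, sign and order).
print((W @ A).round(2))
```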

  18. Ionization compression impact on dense gas distribution and star formation, Probability density functions around H ii regions as seen by Herschel

    CERN Document Server

    Tremblin, P; Minier, V; Didelon, P; Hill, T; Anderson, L D; Motte, F; Zavagno, A; André, Ph; Arzoumanian, D; Audit, E; Benedettini, M; Bontemps, S; Csengeri, T; Di Francesco, J; Giannini, T; Hennemann, M; Luong, Q Nguyen; Marston, A P; Peretto, N; Rivera-Ingraham, A; Russeil, D; Rygl, K L J; Spinoglio, L; White, G J

    2014-01-01

    Ionization feedback should impact the probability distribution function (PDF) of the column density around the ionized gas. We aim to quantify this effect and discuss its potential link to the Core and Initial Mass Function (CMF/IMF). We systematically used Herschel column density maps of several regions observed within the HOBYS key program: M16, the Rosette and Vela C molecular clouds, and the RCW 120 H ii region. We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a double-peak or enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion able t...
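
    A hedged sketch of the double-lognormal decomposition described above: the histogram of log column density is fitted with the sum of two Gaussians (two lognormals in N). The synthetic "cloud" below, a turbulent component plus a compressed high-column-density component, merely stands in for a Herschel map.

```python
# Fit a synthetic log-column-density histogram with the sum of two Gaussians.
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2))

rng = np.random.default_rng(0)
# Turbulent component plus a compressed component at higher column density.
logN = np.concatenate([rng.normal(21.3, 0.25, 80000),
                       rng.normal(21.9, 0.15, 20000)])
hist, edges = np.histogram(logN, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
p0 = [1.0, 21.2, 0.3, 0.3, 21.8, 0.2]          # rough initial guess
popt, _ = curve_fit(two_gauss, centers, hist, p0=p0)
print("fitted peaks at log N =", round(popt[1], 2), "and", round(popt[4], 2))
```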

  19. The Dark Matter Annihilation Boost from Low-Temperature Reheating

    CERN Document Server

    Erickcek, Adrienne L

    2015-01-01

    The evolution of the Universe between inflation and the onset of Big Bang Nucleosynthesis is difficult to probe and largely unconstrained. This ignorance profoundly limits our understanding of dark matter: we cannot calculate its thermal relic abundance without knowing when the Universe became radiation dominated. Fortunately, small-scale density perturbations provide a probe of the early Universe that could break this degeneracy. If dark matter is a thermal relic, density perturbations that enter the horizon during an early matter-dominated era grow linearly with the scale factor prior to reheating. The resulting abundance of substructure boosts the annihilation rate by several orders of magnitude, which can compensate for the smaller annihilation cross sections that are required to generate the observed dark matter density in these scenarios. In particular, thermal relics with masses less than a TeV that thermally and kinetically decouple prior to reheating may already be ruled out by Fermi-LAT observations...

  20. Theoretical study on the positron annihilation in Rocksalt structured magnesium oxide

    Institute of Scientific and Technical Information of China (English)

    Liu Jian-Dang; Zhang Jie; Zhang Li-Juan; Hao Ying-Ping; Guo Wei-Feng; Cheng Bin; Ye Bang-Jiao

    2011-01-01

    Based on the atomic superposition approximation (ATSUP) and first-principles pseudopotential plane-wave methods, the bulk and Mg mono-vacancy positron lifetimes of magnesium oxide were calculated using the Arponen-Pajanne and Boroński-Nieminen positron-annihilation-rate interpolation formulas, respectively. The calculated values are in good agreement with experimental values and the first-principles method gives more convincing results. The positron annihilation density spectra analysis reveals that positrons mainly annihilate with valence electrons of oxygen atoms when the magnesium vacancy appears within magnesium oxide.

  1. Effect of positron-atom interactions on the annihilation gamma spectra of molecules

    CERN Document Server

    Green, D G; Wang, F; Gribakin, G F; Surko, C M

    2012-01-01

    Calculations of gamma spectra for positron annihilation on a selection of molecules, including methane and its fluoro-substitutes, ethane, propane, butane and benzene are presented. The annihilation gamma spectra characterise the momentum distribution of the electron-positron pair at the instant of annihilation. The contribution to the gamma spectra from individual molecular orbitals is obtained from electron momentum densities calculated using modern computational quantum chemistry density functional theory tools. The calculation, in its simplest form, effectively treats the low-energy (thermalised, room-temperature) positron as a plane wave and gives annihilation gamma spectra that are about 40% broader than experiment, although the main chemical trends are reproduced. We show that this effective "narrowing" of the experimental spectra is due to the action of the molecular potential on the positron, chiefly, due to the positron repulsion from the nuclei. It leads to a suppression of the contribution of smal...

  2. On ARMA Probability Density Estimation.

    Science.gov (United States)

    1981-12-01

    definitions of the constants b_k (k = 0, 1, ..., q) and a_k (k = 1, ..., p) will be given which, for a given function f(.), uniquely define an approximator f_{p,q}(.) for each...satisfied. When using f_{p,q}(.) for approximation purposes it is thus important to always verify whether or not this condition is met. In concluding

  3. New Limits on Thermally annihilating Dark Matter from Neutrino Telescopes

    CERN Document Server

    Lopes, José

    2016-01-01

    We used a consistent and robust solar model to obtain upper limits placed by neutrino telescopes, such as IceCube and Super-Kamiokande, on the Dark Matter-nucleon scattering cross-section, for a general model of Dark Matter with a velocity-dependent (p-wave) thermally averaged cross-section. In this picture, the Boltzmann equation for the Dark Matter abundance is numerically solved satisfying the Dark Matter density measured from the Cosmic Microwave Background (CMB). We show that for lower cross-sections and higher masses, the Dark Matter annihilation rate drops sharply, resulting in upper bounds on the scattering cross-section one order of magnitude above those derived from a velocity independent (s-wave) annihilation cross-section. Our results show that upper limits on the scattering cross-section obtained from Dark Matter annihilating in the Sun are sensitive to the uncertainty in current standard solar models, fluctuating by a maximum of 20% depending on the annihilation channel.

  4. New Limits on Thermally Annihilating Dark Matter from Neutrino Telescopes

    Science.gov (United States)

    Lopes, J.; Lopes, I.

    2016-08-01

    We used a consistent and robust solar model to obtain upper limits placed by neutrino telescopes, such as IceCube and Super-Kamiokande, on the dark matter-nucleon scattering cross-section, for a general model of dark matter with a velocity dependent (p-wave) thermally averaged cross-section. In this picture, the Boltzmann equation for the dark matter abundance is numerically solved, satisfying the dark matter density measured from the cosmic microwave background. We show that for lower cross-sections and higher masses, the dark matter annihilation rate drops sharply, resulting in upper bounds on the scattering cross-section that are one order of magnitude above those derived from a velocity independent (s-wave) annihilation cross-section. Our results show that upper limits on the scattering cross-section obtained from dark matter annihilating in the Sun are sensitive to the uncertainty in current standard solar models, fluctuating by a maximum of 20% depending on the annihilation channel.
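
    Schematic only, not the authors' computation: the sketch below integrates the standard freeze-out Boltzmann equation dY/dx = -(λ/x²)·⟨σv⟩(x)·(Y² − Y_eq²) for a velocity-independent (s-wave) case and a ⟨σv⟩ ∝ 1/x (p-wave) case, with placeholder constants, to illustrate how the p-wave annihilation rate falls off and leaves a larger relic yield.

```python
# Toy freeze-out: compare s-wave and p-wave thermally averaged cross-sections.
# lam, sigma0 and the equilibrium-yield prefactor are illustrative numbers only.
import numpy as np
from scipy.integrate import solve_ivp

def relic_yield(sigma_v, lam=1e9, x_span=(1.0, 1000.0)):
    Yeq = lambda x: 0.145 * x**1.5 * np.exp(-x)     # schematic equilibrium yield
    rhs = lambda x, Y: [-(lam / x**2) * sigma_v(x) * (Y[0]**2 - Yeq(x)**2)]
    sol = solve_ivp(rhs, x_span, [Yeq(x_span[0])],
                    method="LSODA", rtol=1e-6, atol=1e-15)
    return sol.y[0, -1]

Y_s = relic_yield(lambda x: 1.0)          # s-wave: velocity independent
Y_p = relic_yield(lambda x: 1.0 / x)      # p-wave: <sigma v> ~ 1/x
print("relic yield, s-wave:", Y_s, " p-wave:", Y_p)   # p-wave yield is larger
```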

  5. How efficient is the Langacker-Pi mechanism of monopole annihilation?

    CERN Document Server

    Holman, R; Rey, S J; Rey, Soo-Jong

    1992-01-01

    We investigate the dynamics of monopole annihilation by the Langacker-Pi mechanism. We find that considerations of causality, flux-tube energetics and the friction from Aharonov-Bohm scattering suggest that the monopole annihilation is most efficient if electromagnetism is spontaneously broken at the lowest temperature ($T_{em} \\approx 10^6$ GeV) consistent with not having the monopoles dominate the energy density of the universe.

  6. Impact of distributed generation in the probability density of voltage sags; Impacto da geracao distribuida na densidade de probabilidade de afundamentos de tensao

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Alessandro Candido Lopes [CELG - Companhia Energetica de Goias, Goiania, GO (Brazil). Generation and Transmission. System' s Operation Center], E-mail: alessandro.clr@celg.com.br; Batista, Adalberto Jose [Universidade Federal de Goias (UFG), Goiania, GO (Brazil)], E-mail: batista@eee.ufg.br; Leborgne, Roberto Chouhy [Universidade Federal do Rio Grande do Sul (UFRS), Porto Alegre, RS (Brazil)], E-mail: rcl@ece.ufrgs.br; Emiliano, Pedro Henrique Mota, E-mail: ph@phph.com.br

    2009-07-01

    This article presents the impact of distributed generation (DG) on studies of voltage sags caused by faults in the electrical system. Short circuits to ground were simulated on 62 lines at 230, 138, 69 and 13.8 kV belonging to the electrical system of the city of Goiania, Goias state. For each fault position, the voltage was monitored at the 380 V bus of an industrial consumer sensitive to such sags. Different levels of DG were then inserted near the consumer and the short-circuit simulations, with monitoring of the 380 V bus, were performed again. A stochastic Monte Carlo simulation (MCS) study was carried out to obtain, for each DG level, the sag probability curves and the probability density per voltage class. From these curves, the average number of sags in each class to which the consumer bus may be subjected annually was obtained. The simulations were performed using the Simultaneous Fault Analysis Program (ANAFAS). In order to overcome the intrinsic limitations of this program's simulation methods and to allow data entry via windows, a computational tool was developed in the Java language. Data processing was done using the MATLAB software.
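
    A heavily simplified Monte Carlo sketch of the type of study described above (purely illustrative: the voltage-divider sag model, impedances and fault rate are assumptions, and no ANAFAS data are used), showing how sampled fault positions are turned into an average annual number of sags per retained-voltage class:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical, highly simplified stand-in for the ANAFAS + Monte Carlo study:
# the retained voltage at the monitored bus is modelled with a voltage divider
# between the (distance-dependent) impedance to the fault and the source impedance.
faults_per_year = 60          # assumed annual number of short circuits on the system
years = 10000                 # Monte Carlo horizon
z_source = 0.05               # p.u. source impedance seen from the monitored bus (assumed)

n = faults_per_year * years
distance = rng.uniform(0.0, 1.0, n)          # normalised electrical distance to the fault
z_fault = 0.02 + 0.5 * distance              # p.u. impedance between bus and fault (assumed)
v_sag = z_fault / (z_fault + z_source)       # retained voltage during the fault, in p.u.

# Classify retained voltage into sag classes and report average sags per year per class.
edges = np.array([0.0, 0.4, 0.6, 0.8, 0.9, 1.0])
counts, _ = np.histogram(v_sag, bins=edges)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.1f}-{hi:.1f} p.u.: {c / years:.2f} sags/year")
```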

  7. Kinetics of Schottky defect formation and annihilation in single crystal TlBr.

    Science.gov (United States)

    Bishop, Sean R; Tuller, Harry L; Kuhn, Melanie; Ciampi, Guido; Higgins, William; Shah, Kanai S

    2013-07-28

    The kinetics of Schottky defect (Tl and Br vacancy pair) formation and annihilation in ionically conducting TlBr are characterized through a temperature-induced conductivity relaxation technique. Near room temperature, defect generation-annihilation was found to take on the order of hours to reach equilibrium after a step change in temperature, and mechanical damage imparted on the sample was found to rapidly increase this rate. The rate-limiting step for Schottky defect formation-annihilation is identified as the migration of the lower-mobility Tl (versus Br), with an estimate for the source-sink density derived from calculated diffusion lengths. This study represents one of the first investigations of Schottky defect generation-annihilation kinetics and demonstrates its utility in quantifying detrimental mechanical damage in radiation detector materials.

  8. Hyperaccreting Disks around Neutrons Stars and Magnetars for GRBs: Neutrino Annihilation and Strong Magnetic Fields

    CERN Document Server

    Zhang, Dong

    2010-01-01

    Hyperaccreting disks around neutron stars or magnetars cooled via neutrino emission can be the potential central engine of GRBs. The neutron-star disk can cool more efficiently and produce much higher neutrino luminosity and neutrino annihilation luminosity than its black hole counterpart with the same accretion rate. The neutron star surface boundary layer could increase the annihilation luminosity as well. An ultra-relativistic jet via neutrino annihilation can be produced along the stellar poles. Moreover, we investigate the effects of strong fields on the disks around magnetars. In general, stronger fields give higher disk densities, pressures, temperatures and neutrino luminosity; the neutrino annihilation mechanism and the magnetically driven pulsar wind which extracts the stellar rotational energy can work together to generate and feed an even stronger ultra-relativistic jet along the stellar magnetic poles.

  9. Annihilating dark matter and the galactic positron excess

    CERN Document Server

    Maor, I

    2006-01-01

    The possibility that the Galactic dark matter is composed of neutralinos that are just above half the $Z^o$ mass is examined, in the context of the Galactic positron excess. In particular, we check if the anomalous bump in the cosmic ray positron to electron ratio at $10~GeV$ can be explained with the "decay" of virtual $Z^o$ bosons produced when the neutralinos annihilate. We find that the low-energy behaviour of our prediction fits the existing data well. Assuming the neutralinos annihilate primarily in the distant density concentration in the Galaxy and allowing a combination of older, diffused positrons with young free-streaming ones produces a fit which is not satisfactory on its own but is significantly better than the one obtained with homogeneous injection.

  10. Dark Stars and Boosted Dark Matter Annihilation Rates

    CERN Document Server

    Ilie, Cosmin; Spolyar, Douglas

    2010-01-01

    Dark Stars (DS) may constitute the first phase of stellar evolution, powered by dark matter (DM) annihilation. We will investigate here the properties of DS assuming the DM particle has the required properties to explain the excess positron and electron signals in the cosmic rays detected by the PAMELA and FERMI satellites. Any possible DM interpretation of these signals requires exotic DM candidates, with annihilation cross sections a few orders of magnitude higher than the canonical value required for correct thermal relic abundance for Weakly Interacting Dark Matter candidates; additionally in most models the annihilation must be preferentially to leptons. Secondly, we study the dependence of DS properties on the concentration parameter of the initial DM density profile of the halos where the first stars are formed. We restrict our study to the DM in the star due to simple (vs. extended) adiabatic contraction and minimal (vs. extended) capture; this simple study is sufficient to illustrate depend...

  11. High nuclear temperatures by antimatter-matter annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Gibbs, W.R.; Strottman, D.

    1985-01-01

    It is suggested that the quark-gluon phase be created through the use of antiproton or antideuteron beams. The first advantage of this method, which uses antiprotons with momenta higher than 1.5 GeV/c, is that the higher-momentum antiprotons penetrate more deeply, so that the mesons produced are more nearly contained within the nucleus. Another advantage is that the annihilation products are very forward-peaked and tend to form a beam of mesons, so that the energy density does not disperse very rapidly. Calculations were performed using the intranuclear cascade to try to follow the process of annihilation in some detail. The intranuclear-cascade calculation method is compared to the hydrodynamic approach. 8 refs., 8 figs. (LEW)

  12. Distribution of Positron Annihilation Radiation

    CERN Document Server

    Milne, P A

    2006-01-01

    The SPI instrument on-board the ESA/INTEGRAL satellite is engaged in a mission-long study of positron annihilation radiation from the Galaxy. Early results suggest that the disk component is only weakly detected at 511 keV by SPI. We review CGRO/OSSE, TGRS and SMM studies of 511 keV line and positronium continuum emission from the Galaxy in light of the early INTEGRAL/SPI findings. We find that when similar spatial distributions are compared, combined fits to the OSSE/SMM/TGRS data-sets produce bulge and disk fluxes similar in total flux and in B/D ratio to the fits reported for SPI observations. We further find that the 511 keV line width reported by SPI is similar to the values reported by TGRS, particularly when spectral fits include both narrow-line and broad-line components. Collectively, the consistency between these four instruments suggests that all may be providing an accurate view of positron annihilation in the Galaxy.

  13. Application of a maximum entropy method to estimate the probability density function of nonlinear or chaotic behavior in structural health monitoring data

    Science.gov (United States)

    Livingston, Richard A.; Jin, Shuang

    2005-05-01

    Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
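
    For orientation, the sketch below shows a plain moment-constrained maximum entropy density estimate (it omits the paper's fractal interpolation and probabilistic scaling factor entirely; the data, number of moments and optimizer are assumptions): the exponential-family multipliers are obtained by minimizing the convex dual of the entropy functional.

```python
import numpy as np
from scipy.optimize import minimize

# Plain moment-constrained maximum-entropy density estimate
# (the fractal/IFS probabilistic scaling enhancement of the paper is not included).
rng = np.random.default_rng(0)
data = rng.standard_t(df=5, size=5000)          # stand-in for structural response data

K = 4                                           # number of moment constraints (assumed)
mu = np.array([np.mean(data**k) for k in range(1, K + 1)])

x = np.linspace(data.min() - 1.0, data.max() + 1.0, 2001)
dx = x[1] - x[0]
powers = np.vstack([x**k for k in range(1, K + 1)])        # shape (K, len(x))

def dual(lam):
    # Convex dual: log Z(lam) + lam . mu; its minimiser reproduces the sample moments.
    logq = -lam @ powers
    m = logq.max()
    return m + np.log(np.sum(np.exp(logq - m)) * dx) + lam @ mu

res = minimize(dual, np.zeros(K), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-10})
logq = -res.x @ powers
pdf = np.exp(logq - logq.max())
pdf /= pdf.sum() * dx                           # maximum-entropy pdf on the grid
print("Lagrange multipliers:", np.round(res.x, 4))
```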

  14. Deduction and Validation of an Eulerian-Eulerian Model for Turbulent Dilute Two-Phase Flows by Means of the Phase Indicator Function Disperse Elements Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    SantiagoLain; RicardoAliod

    2000-01-01

    A statistical formalism overcoming some conceptual and practical difficulties arising in existing two-phase flow (2PHF) mathematical modelling has been applied to propose a model for dilute 2PHF turbulent flows. Phase interaction terms with a clear physical meaning enter the equations and the formalism provides some guidelines for the avoidance of closure assumptions or the rational approximation of these terms. Continuous phase averaged continuity, momentum, turbulent kinetic energy and turbulence dissipation rate equations have been rigorously and systematically obtained in a single step. These equations display a structure similar to that for single-phase flows. It is also assumed that dispersed phase dynamics is well described by a probability density function (pdf) equation and Eulerian continuity, momentum and fluctuating kinetic energy equations for the dispersed phase are deduced. An extension of the standard k-ε turbulence model for the continuous phase is used. A gradient transport model is adopted for the dispersed phase fluctuating fluxes of momentum and kinetic energy at the non-colliding, large inertia limit. This model is then used to predict the behaviour of three axisymmetric turbulent jets of air laden with solid particles varying in size and concentration. Qualitative and quantitative numerical predictions compare reasonably well with the three different sets of experimental results, studying the influence of particle size, loading ratio and flow confinement velocity.

  15. Time-averaged probability density functions of soot nanoparticles along the centerline of a piloted turbulent diffusion flame using a scanning mobility particle sizer

    KAUST Repository

    Chowdhury, Snehaunshu

    2017-01-23

    In this study, we demonstrate the use of a scanning mobility particle sizer (SMPS) as an effective tool to measure the probability density functions (PDFs) of soot nanoparticles in turbulent flames. Time-averaged soot PDFs necessary for validating existing soot models are reported at intervals of Δx/D = 5 along the centerline of turbulent, non-premixed, C2H4/N2 flames. The jet exit Reynolds numbers of the flames investigated were 10,000 and 20,000. A simplified burner geometry based on a published design was chosen to aid modelers. Soot was sampled directly from the flame using a sampling probe with a 0.5-mm diameter orifice and diluted with N2 by a two-stage dilution process. The overall dilution ratio was not evaluated. An SMPS system was used to analyze soot particle concentrations in the diluted samples. Sampling conditions were optimized over a wide range of dilution ratios to eliminate the effect of agglomeration in the sampling probe. Two differential mobility analyzers (DMAs) with different size ranges were used separately in the SMPS measurements to characterize the entire size range of particles. In both flames, the PDFs were found to be mono-modal in nature near the jet exit. Further downstream, the profiles were flatter with a fall-off at larger particle diameters. The geometric mean of the soot size distributions was less than 10 nm for all cases and increased monotonically with axial distance in both flames.

  16. Influence of Disassociation Probability on External Quantum Efficiency in Organic Electrophosphorescent Devices

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian-hua; OU YANG Jun; LI Xue-yong; LI Hong-jian

    2007-01-01

    An analytical model is presented to calculate the disassociation probability and the external quantum efficiency at high field in doped organic electrophosphorescence (EPH) devices. The charge recombination process and the triplet-triplet (T-T) annihilation process are taken into account in this model. The influences of the applied voltage and the device thickness on the disassociation probability, and of the current density and the device thickness on the external quantum efficiency, are studied thoroughly both including and ignoring the disassociation of excitons. It is found that the dissociation probability of excitons comes close to 1 at high electric field, and that the external EPH quantum efficiency is almost the same at low electric field. There is a large discrepancy in the external EPH quantum efficiency at high electric field between the cases that include and ignore the disassociation of excitons.

  17. Probability in quantum mechanics

    Directory of Open Access Journals (Sweden)

    J. G. Gilson

    1982-01-01

    By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.

  18. On annihilators in BL-algebras

    Directory of Open Access Journals (Sweden)

    Zou Yu Xi

    2016-01-01

    In this paper, we introduce the notion of annihilators in BL-algebras and investigate some of their related properties. We show that the ideal lattice (I(L), ⊆) is pseudo-complemented, and for any ideal I, its pseudo-complement is the annihilator I⊥ of I. Also, we define An(L) to be the set of all annihilators of L; then (An(L); ⋂, ∧An(L), ⊥, {0}, L) is a Boolean algebra. In addition, we introduce the annihilators of a nonempty subset X of L with respect to an ideal I and study some of their properties. As an application, we show that if I and J are ideals in a BL-algebra L, then J_I⊥ is the relative pseudo-complement of J with respect to I in the ideal lattice (I(L), ⊆). Moreover, we obtain some properties of the homomorphic image of annihilators, and give the necessary and sufficient conditions for the homomorphic image and the homomorphic pre-image of an annihilator to be an annihilator. Finally, we introduce the notion of α-ideal and the notation E(I). We show that (E(I(L)), ∧E, ∨E, E(0), E(L)) is a pseudo-complemented lattice, a complete Brouwerian lattice and an algebraic lattice when L is a BL-chain or a finite product of BL-chains.

  19. Muon Fluxes From Dark Matter Annihilation

    CERN Document Server

    Erkoca, Arif Emre; Sarcevic, Ina

    2009-01-01

    We calculate the muon flux from annihilation of the dark matter in the core of the Sun, in the core of the Earth and from cosmic diffuse neutrinos produced in dark matter annihilation in the halos. We consider model-independent direct neutrino production and secondary neutrino production from the decay of taus produced in the annihilation of dark matter. We illustrate how muon energy distribution from dark matter annihilation has a very different shape than muon flux from atmospheric neutrinos. We consider both the upward muon flux, when muons are created in the rock below the detector, and the contained flux when muons are created in the (ice) detector. We contrast our results to the ones previously obtained in the literature, illustrating the importance of properly treating muon propagation and energy loss. We comment on neutrino flavor dependence and their detection.

  20. Pair annihilation in superstrong magnetic fields

    Science.gov (United States)

    Daugherty, J. K.; Bussard, R. W.

    1980-01-01

    The kinematical and dynamical aspects of the annihilation processes in superstrong magnetic fields are studied. The feasibility and potential significance of detecting annihilation radiation from magnetic neutron stars are discussed. The discussion proceeds from the derivation of the fully relativistic differential cross sections and annihilation rates for both one- and two-photon emission from a ground-state gas of electrons and positrons in a static, uniform magnetic field.

  1. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially...

  2. Quantum probability

    CERN Document Server

    Gudder, Stanley P

    2014-01-01

    Quantum probability is a subtle blend of quantum mechanics and classical probability theory. Its important ideas can be traced to the pioneering work of Richard Feynman in his path integral formalism.Only recently have the concept and ideas of quantum probability been presented in a rigorous axiomatic framework, and this book provides a coherent and comprehensive exposition of this approach. It gives a unified treatment of operational statistics, generalized measure theory and the path integral formalism that can only be found in scattered research articles.The first two chapters survey the ne

  3. Positron annihilation with core and valence electrons

    CERN Document Server

    Green, D G

    2015-01-01

    $\\gamma$-ray spectra for positron annihilation with the core and valence electrons of the noble gas atoms Ar, Kr and Xe are calculated within the framework of diagrammatic many-body theory. The effect of positron-atom and short-range positron-electron correlations on the annihilation process is examined in detail. Short-range correlations, which are described through non-local corrections to the vertex of the annihilation amplitude, are found to significantly enhance the spectra for annihilation on the core orbitals. For Ar, Kr and Xe, the core contributions to the annihilation rate are found to be 0.55\\%, 1.5\\% and 2.2\\% respectively, their small values reflecting the difficulty for the positron to probe distances close to the nucleus. Importantly, however, the core subshells have a broad momentum distribution and markedly contribute to the annihilation spectra at Doppler energy shifts $\\gtrsim3$\\,keV, and even dominate the spectra of Kr and Xe at shifts $\\gtrsim5$\\,keV. Their inclusion brings the theoretical ...

  4. PIMC Simulation of Ps Annihilation: From Micro to Mesopores

    Energy Technology Data Exchange (ETDEWEB)

    Bug, A R; Sterne, P A

    2005-08-23

    Path Integral Monte Carlo (PIMC) can reproduce the results of simple analytical calculations in which a single quantum particle is used to represent positronium within an idealized, spherical pore. Our calculations improve on this approach by explicitly treating the positronium as a two-particle e{sup -}, e{sup +} system interacting via the Coulomb interaction. We study the lifetime and the internal contact density, {kappa}, which controls the self-annihilation behavior, for positronium in model spherical pores, as a function of temperature and pore size. We compare the results with both PIMC and analytical calculations for a single-particle model.
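
    The "simple analytical calculations" that the PIMC results are benchmarked against are commonly taken to be of the Tao-Eldrup type; the sketch below evaluates that standard single-particle formula (the identification with the paper's reference calculation is an assumption, and the parameter values are the conventional ones rather than the paper's).

```python
import numpy as np

# Standard Tao-Eldrup single-particle model for o-Ps pick-off annihilation in a
# spherical pore of radius R (nm).  Reliable mainly for sub-nanometre (micro)pores;
# extended models are normally used for mesopores.
DELTA_R = 0.166        # nm, empirical electron-layer thickness
LAMBDA_A = 2.0         # 1/ns, spin-averaged annihilation rate inside the layer
LAMBDA_3G = 1.0 / 142  # 1/ns, intrinsic o-Ps three-gamma rate in vacuum

def ops_lifetime(radius_nm):
    r0 = radius_nm + DELTA_R
    pickoff = LAMBDA_A * (1.0 - radius_nm / r0
                          + np.sin(2.0 * np.pi * radius_nm / r0) / (2.0 * np.pi))
    return 1.0 / (pickoff + LAMBDA_3G)    # lifetime in ns

for r in [0.3, 0.5, 1.0, 2.0]:
    print(f"R = {r:.1f} nm  ->  o-Ps lifetime ~ {ops_lifetime(r):.2f} ns")
```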

  5. Pair annihilation into neutrinos in strong magnetic fields.

    Science.gov (United States)

    Canuto, V.; Fassio-Canuto, L.

    1973-01-01

    Among the processes that are of primary importance for the thermal history of a neutron star is electron-positron annihilation into neutrinos and photoneutrinos. These processes are computed in the presence of a strong magnetic field typical of neutron stars, and the results are compared with the zero-field case. It is shown that the neutrino luminosity Q(H) is greater than Q(0) for temperatures up to T ≈ 3 × 10^8 K and densities up to 10^6 g/cm^3.

  6. Upper Bounds on Asymmetric Dark Matter Self Annihilation Cross Sections

    CERN Document Server

    Ellwanger, Ulrich

    2012-01-01

    Most models for asymmetric dark matter allow for dark matter self annihilation processes, which can wash out the asymmetry at temperatures near and below the dark matter mass. We study the coupled set of Boltzmann equations for the symmetric and antisymmetric dark matter number densities, and derive conditions applicable to a large class of models for the absence of a significant wash-out of an asymmetry. These constraints are applied to various existing scenarios. In the case of left- or right-handed sneutrinos, very large electroweak gaugino masses, or very small mixing angles are required.

  7. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  8. The dark matter annihilation boost from low-temperature reheating

    Science.gov (United States)

    Erickcek, Adrienne L.

    2015-11-01

    The evolution of the Universe between inflation and the onset of big bang nucleosynthesis is difficult to probe and largely unconstrained. This ignorance profoundly limits our understanding of dark matter: we cannot calculate its thermal relic abundance without knowing when the Universe became radiation dominated. Fortunately, small-scale density perturbations provide a probe of the early Universe that could break this degeneracy. If dark matter is a thermal relic, density perturbations that enter the horizon during an early matter-dominated era grow linearly with the scale factor prior to reheating. The resulting abundance of substructure boosts the annihilation rate by several orders of magnitude, which can compensate for the smaller annihilation cross sections that are required to generate the observed dark matter density in these scenarios. In particular, thermal relics with masses less than a TeV that thermally and kinetically decouple prior to reheating may already be ruled out by Fermi-LAT observations of dwarf spheroidal galaxies. Although these constraints are subject to uncertainties regarding the internal structure of the microhalos that form from the enhanced perturbations, they open up the possibility of using gamma-ray observations to learn about the reheating of the Universe.

  9. A Power Load Probability Density Forecasting Method Based on RBF Neural Network Quantile Regression

    Institute of Scientific and Technical Information of China (English)

    何耀耀; 许启发; 杨善林; 余本功

    2013-01-01

    Addressing the problem of short-term load forecasting in power systems, and building on existing research on combination forecasting and probability interval prediction, this paper proposes a probability density forecasting method using radial basis function (RBF) neural network quantile regression. The probability density function of the load at any period of the day is estimated. The proposed method yields more useful information than point prediction and interval prediction, and allows forecasting of the complete probability distribution of future load. Practical data from a city in China show that the proposed probability density forecasting method not only gives fairly accurate point predictions but also provides the complete probability density function forecast of the short-term load.
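
    A hedged sketch of the general idea (not the authors' network or data: the synthetic load curve, RBF centres, width and optimizer are all assumptions): RBF features are combined with the pinball loss to fit a family of conditional quantiles, whose values at a given hour trace out the load distribution.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic "historical load" example: load depends on hour-of-day with
# heteroscedastic noise (purely illustrative).
hours = rng.uniform(0.0, 24.0, 600)
load = (50.0 + 20.0 * np.sin(np.pi * hours / 12.0)
        + rng.normal(0.0, 4.0 + 2.0 * np.sin(np.pi * hours / 12.0) ** 2, 600))

# Fixed RBF feature map (centres and width are assumptions of this sketch).
centres = np.linspace(0.0, 24.0, 13)
width = 2.0
def rbf(x):
    return np.exp(-((x[:, None] - centres[None, :]) / width) ** 2)

Phi = np.hstack([rbf(hours), np.ones((hours.size, 1))])

def pinball(w, tau):
    # Quantile (pinball) loss for quantile level tau.
    r = load - Phi @ w
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

taus = np.linspace(0.05, 0.95, 19)
weights = [minimize(pinball, np.zeros(Phi.shape[1]), args=(tau,), method="Powell").x
           for tau in taus]

# Predicted quantiles of the load at 18:00; smoothing/differencing these quantile
# values would give an estimate of the full probability density at that hour.
x18 = np.hstack([rbf(np.array([18.0])), [[1.0]]])
q18 = np.array([(x18 @ w).item() for w in weights])
print("quantiles of load at 18:00:", np.round(np.sort(q18), 1))
```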

  10. Lexicographic Probability, Conditional Probability, and Nonstandard Probability

    Science.gov (United States)

    2009-11-11

    the following conditions: CP1. µ(U | U) = 1 if U ∈ F′. CP2. µ(V1 ∪ V2 | U) = µ(V1 | U) + µ(V2 | U) if V1 ∩ V2 = ∅, U ∈ F′, and V1, V2 ∈ F. CP3. µ(V | U) = µ(V | X) × µ(X | U) if V ⊆ X ⊆ U, U, X ∈ F′, V ∈ F. Note that it follows from CP1 and CP2 that µ(· | U) is a probability measure on (W, F) (and, in... CP2 hold. This is easily seen to determine µ. Moreover, µ vacuously satisfies CP3, since there do not exist distinct sets U and X in F′ such that U

  11. Neutrino Fluxes from NUHM LSP Annihilations in the Sun

    Energy Technology Data Exchange (ETDEWEB)

    Olive, Keith

    2011-08-12

    We extend our previous studies of the neutrino fluxes expected from neutralino LSP annihilations inside the Sun to include variants of the minimal supersymmetric extension of the Standard Model (MSSM) with squark, slepton and gaugino masses constrained to be universal at the GUT scale, but allowing one or two non-universal supersymmetry-breaking parameters contributing to the Higgs masses (NUHM1,2). As in the constrained MSSM (CMSSM) with universal Higgs masses, there are large regions of the NUHM parameter space where the LSP density inside the Sun is not in equilibrium, so that the annihilation rate may be far below the capture rate, and there are also large regions where the capture rate is not dominated by spin-dependent LSP-proton scattering. The spectra possible in the NUHM are qualitatively similar to those in the CMSSM. We calculate neutrino-induced muon fluxes above a threshold energy of 10 GeV, appropriate for the IceCube/DeepCore detector, for points where the NUHM yields the correct cosmological relic density for representative choices of the NUHM parameters. We find that the IceCube/DeepCore detector can probe regions of the NUHM parameter space in addition to analogues of the focus-point strip and the tip of the coannihilation strip familiar from the CMSSM. These include regions with enhanced Higgsino-gaugino mixing in the LSP composition, that occurs where neutralino mass eigenstates cross over. On the other hand, rapid-annihilation funnel regions in general yield neutrino fluxes that are unobservably small.

  12. Resonant two-photon annihilation of an electron-positron pair in a pulsed electromagnetic wave

    Science.gov (United States)

    Voroshilo, A. I.; Roshchupkin, S. P.; Nedoreshta, V. N.

    2016-09-01

    Two-photon annihilation of an electron-positron pair in the field of a plane low-intensity circularly polarized pulsed electromagnetic wave was studied. The resonance conditions of the process, which correspond to an intermediate particle reaching the mass shell, are studied. In the resonant approximation the probability of the process was obtained. It is demonstrated that the resonant probability of two-photon annihilation of an electron-positron pair may be several orders of magnitude higher than the probability of this process in the absence of the external field. The obtained results may be experimentally verified at the laser facilities of international megaprojects, for example, SLAC (National Accelerator Laboratory), FAIR (Facility for Antiproton and Ion Research), and XFEL (European X-Ray Free-Electron Laser).

  13. Risk Probabilities

    DEFF Research Database (Denmark)

    Rojas-Nandayapa, Leonardo

    Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think...... of insurance companies facing losses due to natural disasters, banks seeking protection against huge losses, failures in expensive and sophisticated systems or loss of valuable information in electronic systems. The main difficulty when dealing with this kind of problems is the unavailability of a closed...

  14. Decaying vs annihilating dark matter in light of a tentative gamma-ray line

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, Wilfried; Garny, Mathias

    2012-06-15

    Recently reported tentative evidence for a gamma-ray line in the Fermi-LAT data is of great potential interest for identifying the nature of dark matter. We compare the implications for decaying and annihilating dark matter taking the constraints from continuum gamma-rays, antiproton flux and morphology of the excess into account. We find that higgsino and wino dark matter are excluded, also for nonthermal production. Generically, the continuum gamma-ray flux severely constrains annihilating dark matter. Consistency of decaying dark matter with the spatial distribution of the Fermi-LAT excess would require an enhancement of the dark matter density near the Galactic center.

  15. SUSY-QCD corrections to stop annihilation into electroweak final states including Coulomb enhancement effects

    Science.gov (United States)

    Harz, J.; Herrmann, B.; Klasen, M.; Kovařík, K.; Meinecke, M.

    2015-02-01

    We present the full O (αs) supersymmetric QCD corrections for stop-antistop annihilation into electroweak final states within the Minimal Supersymmetric Standard Model. We also incorporate Coulomb corrections due to gluon exchange between the incoming stops. Numerical results for the annihilation cross sections and the predicted neutralino relic density are presented. We show that the impact of the radiative corrections on the cosmologically preferred region of the parameter space can become larger than the current experimental uncertainty, shifting the relic bands within the considered regions of the parameter space by up to a few tens of GeV.

  16. Supersymmetric QCD effects on neutralino dark matter annihilation beyond scalar or gaugino mass unification

    Science.gov (United States)

    Herrmann, Björn; Klasen, Michael; Kovařík, Karol

    2009-10-01

    We describe in detail our calculation of the full supersymmetric QCD corrections to neutralino annihilation into heavy quarks and extend our numerical analysis of the resulting dark matter relic density to scenarios without scalar or gaugino mass unification. In these scenarios, the final state is often composed of top quarks and the annihilation proceeds through Z0-boson or scalar top-quark exchanges. The impact of the corrections is again shown to be sizable, so that they must be taken into account systematically in global analyses of the supersymmetry parameter space.

  17. SUSY-QCD effects on neutralino dark matter annihilation beyond scalar or gaugino mass unification

    CERN Document Server

    Herrmann, Bjorn; Kovarik, Karol

    2009-01-01

    We describe in detail our calculation of the full supersymmetric (SUSY) QCD corrections to neutralino annihilation into heavy quarks and extend our numerical analysis of the resulting dark matter relic density to scenarios without scalar or gaugino mass unification. In these scenarios, the final state is often composed of top quarks and the annihilation proceeds through Z^0-boson or scalar top-quark exchanges. The impact of the corrections is again shown to be sizable, so that they must be taken into account systematically in global analyses of the supersymmetry parameter space.

  18. SUSY-QCD corrections to stop annihilation into electroweak final states including Coulomb enhancement effects

    CERN Document Server

    Harz, J; Klasen, M; Kovařík, K; Meinecke, M

    2014-01-01

    We present the full $\\mathcal{O}(\\alpha_s)$ supersymmetric QCD corrections for stop-anti-stop annihilation into electroweak final states within the Minimal Supersymmetric Standard Model (MSSM). We also incorporate Coulomb corrections due to gluon exchange between the incoming stops. Numerical results for the annihilation cross sections and the predicted neutralino relic density are presented. We show that the impact of the radiative corrections on the cosmologically preferred region of the parameter space can become larger than the current experimental uncertainty, shifting the relic bands within the considered regions of the parameter space by up to a few tens of GeV.

  19. Positron annihilation lifetime characterization of oxygen ion irradiated rutile TiO{sub 2}

    Energy Technology Data Exchange (ETDEWEB)

    Luitel, Homnath [Variable Energy Cyclotron Centre, 1/AF Bidhannagar, Kolkata 700064 (India); Sarkar, A. [Department of Physics, Bangabasi Morning College, 19 Rajkumar Chakraborty Sarani, Kolkata 700009 (India); Chakrabarti, Mahuya [Department of Physics, Acharya Prafulla Chandra College, New Barrackpore, Kolkata 700131 (India); Chattopadhyay, S. [Department of Physics, Maulana Azad College, 8 Rafi Ahmed Kidwai Road, Kolkata 700013 (India); Asokan, K. [Inter University Accelerator Centre, Aruna Asaf Ali Marg, New Delhi 110067 (India); Sanyal, D., E-mail: dirtha@vecc.gov.in [Variable Energy Cyclotron Centre, 1/AF Bidhannagar, Kolkata 700064 (India)

    2016-07-15

    Ferromagnetic ordering at room temperature has been induced in the rutile phase of a polycrystalline TiO{sub 2} sample by O ion irradiation. The defects induced by 96 MeV O ions in the rutile TiO{sub 2} sample have been characterized by positron annihilation spectroscopic techniques. Positron annihilation results indicate the formation of cation vacancies (V{sub Ti}, Ti vacancies) in these irradiated TiO{sub 2} samples. Ab initio density functional theoretical calculations indicate that in TiO{sub 2} a magnetic moment can be induced by creating either Ti or O vacancies.

  20. An automated technique for most-probable-number (MPN) analysis of densities of phagotrophic protists with lux-AB labelled bacteria as growth medium

    DEFF Research Database (Denmark)

    Ekelund, Flemming; Christensen, Søren; Rønn, Regin

    1999-01-01

    An automated modification of the most-probable-number (MPN) technique has been developed for enumeration of phagotrophic protozoa. The method is based on detection of prey depletion in micro titre plates rather than on presence of protozoa. A transconjugant Pseudomonas fluorescens DR54 labelled w...
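
    For context, the classical most-probable-number estimate underlying such an assay reduces to a small maximum-likelihood problem; the sketch below (dilution volumes, well counts and scores are illustrative, and the lux-AB prey-depletion detection step is not modelled) solves it directly.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Classical most-probable-number (MPN) maximum-likelihood estimate.
# Dilution volumes, wells per dilution and positive wells below are illustrative.
volume = np.array([1.0, 0.1, 0.01])      # sample volume per well (ml) at each dilution
wells  = np.array([8, 8, 8])             # wells inoculated per dilution
pos    = np.array([8, 5, 1])             # wells scored positive (e.g. prey depletion)

def neg_log_likelihood(log_c):
    c = np.exp(log_c)                    # protozoa per ml
    p = 1.0 - np.exp(-c * volume)        # probability a well receives at least one organism
    return -np.sum(pos * np.log(p) + (wells - pos) * (-c * volume))

res = minimize_scalar(neg_log_likelihood, bounds=(-5.0, 15.0), method="bounded")
print(f"MPN estimate: {np.exp(res.x):.1f} organisms per ml")
```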

  1. Neutralino-stop co-annihilation into electroweak gauge and Higgs bosons at one loop

    Energy Technology Data Exchange (ETDEWEB)

    Harz, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Herrmann, B. [Univ. Savoie/CNRS, Annecy-le-Vieux (France). LAPTh; Klasen, M. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1; Kovarik, K. [Karlsruhe Institute of Technology, Karlsruhe (Germany). Inst. for Theoretical Physics; Le Boulc' h, Q. [Grenoble Univ. (France). CNRS-IN2P3/INPG

    2012-12-15

    We compute the full O({alpha}{sub s}) supersymmetric QCD corrections for neutralino-stop co-annihilation into electroweak gauge and Higgs bosons in the Minimal Supersymmetric Standard Model (MSSM). We show that these annihilation channels are phenomenologically relevant within the so-called phenomenological MSSM, in particular in the light of the observation of a Higgs-like particle with a mass of about 126 GeV at the LHC. We present in detail our calculation, including the renormalization scheme, the infrared treatment, and the kinematical subtleties to be addressed. Numerical results for the co-annihilation cross sections and the predicted neutralino relic density are presented. We demonstrate that the impact of including the corrections on the cosmologically preferred region of parameter space is larger than the current experimental uncertainty from WMAP data.

  2. One-loop corrections to gaugino (co-)annihilation into quarks in the MSSM

    CERN Document Server

    Herrmann, B; Kovarik, K; Meinecke, M; Steppeler, P

    2014-01-01

    We present the full $\\mathcal{O}(\\alpha_s)$ supersymmetric QCD corrections for gaugino annihilation and co-annihilation into light and heavy quarks in the Minimal Supersymmetric Standard Model (MSSM). We demonstrate that these channels are phenomenologically relevant within the so-called phenomenological MSSM. We discuss selected technical details such as the dipole subtraction method in the case of light quarks and the treatment of the bottom quark mass and Yukawa coupling. Numerical results for the (co-)annihilation cross sections and the predicted neutralino relic density are presented. We show that the impact of including the radiative corrections on the cosmologically preferred region of the parameter space is larger than the current experimental uncertainty from Planck data.

  3. Neutralino-stop co-annihilation into electroweak gauge and Higgs bosons at one loop

    CERN Document Server

    Harz, J; Klasen, M; Kovarik, K; Boulc'h, Q Le

    2012-01-01

    We compute the full O(alpha_s) supersymmetric QCD corrections for neutralino-stop co-annihilation into electroweak gauge and Higgs bosons in the Minimal Supersymmetric Standard Model (MSSM). We show that these annihilation channels are phenomenologically relevant within the so-called phenomenological MSSM, in particular in the light of the observation of a Higgs-like particle with a mass of about 126 GeV at the LHC. We present in detail our calculation, including the renormalization scheme, the infrared treatment, and the kinematical subtleties to be addressed. Numerical results for the co-annihilation cross sections and the predicted neutralino relic density are presented. We demonstrate that the impact of including the corrections on the cosmologically preferred region of parameter space is larger than the current experimental uncertainty from WMAP data.

  4. Observational Constraints of 30–40 GeV Dark Matter Annihilation in Galaxy Clusters

    Directory of Open Access Journals (Sweden)

    Man Ho Chan

    2016-01-01

    Recently, it has been shown that the annihilation of 30–40 GeV dark matter particles through the b b-bar channel can satisfactorily explain the excess GeV gamma-ray spectrum near the Galactic Center. In this paper, we apply the above model to galaxy clusters and use the latest upper limits of the gamma-ray flux derived from Fermi-LAT data to obtain an upper bound on the annihilation cross section of dark matter. By considering the extended density profiles and the cosmic ray profile models of 49 galaxy clusters, the upper bound on the annihilation cross section can be further tightened to <σv> ≤ 9 × 10^-26 cm^3 s^-1. This result is consistent with the one obtained from the data near the Galactic Center.

  5. Searching for neutrinos from dark matter annihilations in (dwarf) galaxies and clusters with IceCube

    Energy Technology Data Exchange (ETDEWEB)

    With, Meike de [Institut fuer Physik, Humboldt-Universitaet zu Berlin, D-12489 Berlin (Germany); Bernardini, Elisa [DESY, D-15735 Zeuthen (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    In many models, the self-annihilation of dark matter particles will create neutrinos which can be detected on Earth. An excess flux of these neutrinos is expected from regions of increased dark matter density, like (dwarf) galaxies and galaxy clusters. The IceCube neutrino observatory, a cubic-kilometer neutrino detector at the South Pole, is capable of detecting neutrinos down to energies of a few tens of GeV and is therefore able to constrain the self-annihilation cross section as a function of the mass of the dark matter particle. In this talk, the current status of the search for neutrinos from dark matter annihilations in (dwarf) galaxies and galaxy clusters with IceCube is discussed.

  6. SUSY-QCD corrections to the (co)annihilation of neutralino dark matter within the MSSM

    Energy Technology Data Exchange (ETDEWEB)

    Meinecke, Moritz

    2015-06-15

    Based on experimental observations, it is nowadays assumed that a large component of the matter content of the universe is comprised of so-called cold dark matter. Furthermore, the latest measurements of the temperature fluctuations of the cosmic microwave background provided an estimate of the dark matter relic density with a measurement error of one percent (concerning the experimental 1σ error). The lightest neutralino χ{sup 0}{sub 1}, a particle which falls under the phenomenologically interesting category of weakly interacting massive particles, is a viable dark matter candidate for many supersymmetric (SUSY) models whose relic density Ω{sub χ{sup 0}{sub 1}} happens to lie quite naturally within the experimentally favored ballpark of dark matter. The high experimental precision can be used to constrain the SUSY parameter space to its cosmologically favored regions and to pin down phenomenologically interesting scenarios. However, to actually benefit from this progress on the experimental side, it is also mandatory to minimize the theoretical uncertainties. An important quantity within the calculation of the neutralino relic density is the thermally averaged sum over different annihilation and coannihilation cross sections of the neutralino and further supersymmetric particles. It is now assumed and also partly proven that these cross sections can be subject to large loop corrections, which can even shift the associated Ω{sub χ{sup 0}{sub 1}} by a factor larger than the current experimental error. However, most of these corrections are yet unknown. In this thesis, we calculate higher-order corrections for some of the most important (co)annihilation channels within the framework of the R-parity conserving Minimal Supersymmetric Standard Model (MSSM) and investigate their impact on the final neutralino relic density Ω{sub χ{sup 0}{sub 1}}. More precisely, this work provides the full O(α{sub s}) corrections of supersymmetric quantum chromodynamics (SUSY

  7. Application of tests of goodness of fit in determining the probability density function for spacing of steel sets in tunnel support system

    Directory of Open Access Journals (Sweden)

    Farnoosh Basaligheh

    2015-12-01

    One of the conventional methods for the temporary support of tunnels is to use steel sets with shotcrete. The nature of a temporary support system demands quick installation of its structures. As a result, the spacing between steel sets is not a fixed amount and can be considered a random variable. Hence, in the reliability analysis of these types of structures, the selection of an appropriate probability distribution function for the spacing of steel sets is essential. In the present paper, the distances between steel sets were collected from an under-construction tunnel, and the collected data are used to suggest a proper Probability Distribution Function (PDF) for the spacing of steel sets. The tunnel has two different excavation sections. In this regard, different distribution functions were investigated and three common tests of goodness of fit were used to evaluate each function for each excavation section. Results from all three methods indicate that the Wakeby distribution function can be suggested as the proper PDF for the spacing between the steel sets. It is also noted that, although the probability distribution function for the two tunnel sections is the same, the parameters of the PDF for the individual sections differ.
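
    A minimal sketch of the fit-and-test workflow described above, assuming illustrative spacing data; note that scipy.stats has no Wakeby distribution, so only common candidate PDFs are shown, and the Kolmogorov-Smirnov p-values are approximate because the parameters are estimated from the same data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative spacing data (metres) standing in for the field measurements.
spacing = rng.gamma(shape=9.0, scale=0.12, size=120)

candidates = {
    "normal":    stats.norm,
    "lognormal": stats.lognorm,
    "Weibull":   stats.weibull_min,
    "gamma":     stats.gamma,
}

# Fit each candidate distribution and apply the Kolmogorov-Smirnov test.
for name, dist in candidates.items():
    params = dist.fit(spacing)
    ks_stat, p_value = stats.kstest(spacing, dist.cdf, args=params)
    print(f"{name:10s}  KS statistic = {ks_stat:.3f}   p-value = {p_value:.3f}")
```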

  8. Monopole annihilation at the electroweak scale

    CERN Document Server

    Terning, J

    1992-01-01

    We examine the issue of monopole annihilation at the electroweak scale induced by flux tube confinement, concentrating first on the simplest possibility---one which requires no new physics beyond the standard model. Monopoles existing at the time of the electroweak phase transition may trigger $W$ condensation which can confine magnetic flux into flux tubes. However we show on very general grounds, using several independent estimates, that such a mechanism is impotent. We then present several general dynamical arguments constraining the possibility of monopole annihilation through any confining phase near the electroweak scale.

  9. A compact positron annihilation lifetime spectrometer

    Institute of Scientific and Technical Information of China (English)

    LI Dao-Wu; LIU Jun-Hui; ZHANG Zhi-Ming; WANG Bao-Yi; ZHANG Tian-Bao; WEI Long

    2011-01-01

    Using LYSO scintillators coupled to HAMAMATSU R9800 (a fast photomultiplier) to form small-size γ-ray detectors, a compact lifetime spectrometer has been built for positron annihilation experiments. A system time resolution of FWHM = 193 ps and a coincidence counting rate of ~8 cps/μCi were achieved. A lifetime value of 219±1 ps for positron annihilation in well-annealed Si was measured, which is in agreement with the typical values published in the literature.

  10. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    Science.gov (United States)

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single-channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.
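
    A drastically simplified sketch of the inversion idea (a two-state channel with an ODE for the open probability standing in for the full PDE-based probability density system; rates, noise level and optimizer are assumptions): the rates are tuned so that the deterministic solution matches (pseudo) experimental occupancy data.

```python
import numpy as np
from scipy.optimize import least_squares

# Two-state (closed <-> open) channel: the open probability obeys
#   dP/dt = k_open * (1 - P) - k_close * P,
# whose solution is P(t) = P_inf + (P0 - P_inf) * exp(-(k_open + k_close) * t).
def open_prob(t, k_open, k_close, p0=0.0):
    p_inf = k_open / (k_open + k_close)
    return p_inf + (p0 - p_inf) * np.exp(-(k_open + k_close) * t)

# Synthetic "experimental" occupancy data generated with known rates plus noise.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 50.0, 200)                   # ms
true_rates = (0.12, 0.30)                         # 1/ms
data = open_prob(t, *true_rates) + rng.normal(0.0, 0.01, t.size)

# Inversion: tune the rates so the deterministic solution mimics the data.
def residual(log_rates):
    k_open, k_close = np.exp(log_rates)
    return open_prob(t, k_open, k_close) - data

fit = least_squares(residual, x0=np.log([0.5, 0.5]))
print("recovered rates (1/ms):", np.round(np.exp(fit.x), 3), " true:", true_rates)
```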

  11. Probability of Transmission of Malaria from Mosquito to Human Is Regulated by Mosquito Parasite Density in Naïve and Vaccinated Hosts

    Science.gov (United States)

    Sinden, Robert E.; Poulton, Ian D.; Griffin, Jamie T.; Upton, Leanna M.; Sala, Katarzyna A.; Angrisano, Fiona; Hill, Adrian V. S.; Blagborough, Andrew M.

    2017-01-01

    Over a century since Ronald Ross discovered that malaria is caused by the bite of an infectious mosquito, it is still unclear how the number of parasites injected influences disease transmission. Currently it is assumed that all mosquitoes with salivary gland sporozoites are equally infectious irrespective of the number of parasites they harbour, though this has never been rigorously tested. Here we analyse >1000 experimental infections of humans and mice and demonstrate a dose-dependency for probability of infection and the length of the host pre-patent period. Mosquitoes with higher numbers of sporozoites in their salivary glands following blood-feeding are more likely to have caused infection (and have done so quicker) than mosquitoes with fewer parasites. A similar dose response for the probability of infection was seen for humans given a pre-erythrocytic vaccine candidate targeting circumsporozoite protein (CSP), and in mice with and without transfusion of anti-CSP antibodies. These interventions prevented infection more efficiently from bites made by mosquitoes with fewer parasites. The importance of parasite number has widespread implications across malariology, ranging from our basic understanding of the parasite, how vaccines are evaluated and the way in which transmission should be measured in the field. It also provides direct evidence for why the only registered malaria vaccine RTS,S was partially effective in recent clinical trials. PMID:28081253

  12. Electronic Structure of Rare-Earth Metals. II. Positron Annihilation

    DEFF Research Database (Denmark)

    Williams, R. W.; Mackintosh, Allan

    1968-01-01

    The angular correlation of the photons emitted when positrons annihilate with electrons has been studied in single crystals of the rare-earth metals Y, Gd, Tb, Dy, Ho, and Er, and in a single crystal of an equiatomic alloy of Ho and Er. A comparison of the results for Y with the calculations...... of Loucks shows that the independent-particle model gives a good first approximation to the angular distribution, although correlation effects probably smear out some of the structure. The angular distributions from the heavy rare-earth metals are very similar to that from Y and can be understood...... qualitatively in terms of the relativistic augmented-plane-wave calculations by Keeton and Loucks. The angular distributions in the c direction in the paramagnetic phases are characterized by a rapid drop at low angles followed by a hump, and these features are associated with rather flat regions of Fermi...

  13. A compact positron annihilation lifetime spectrometer

    Institute of Scientific and Technical Information of China (English)

    李道武; 刘军辉; 章志明; 王宝义; 张天保; 魏龙

    2011-01-01

    Using LYSO scintillator coupled on HAMAMATSU R9800 (a fast photomultiplier) to form the small size γ-ray detectors, a compact lifetime spectrometer has been built for the positron annihilation experiments. The system time resolution FWHM=193 ps and the co

  14. A positron annihilation study of hydrated DNA

    DEFF Research Database (Denmark)

    Warman, J. M.; Eldrup, Morten Mostgaard

    1986-01-01

    Positron annihilation measurements are reported for hydrated DNA as a function of water content and as a function of temperature (20 to -180.degree. C) for samples containing 10 and 50% wt of water. The ortho-positronium mean lifetime and its intensity show distinct variations with the degree...

  15. Antihydrogen annihilation reconstruction with the ALPHA silicon detector

    CERN Document Server

    Andresen, G B; Bertsche, W; Bowe, P D; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Deller, A; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D.R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayden, M E; Hayano, R S; Humphries, A J; Hydomako, R; Jonsell, S; Jorgensen, L V; Kurchaninov, L; Madsen, N; Menary, S; Nolan, P; Olchanski, K; Olin, A; Povilus, A; Pusa, P; Sarid, E; Seif el Nasr, S; Silveira, D M; So, C; Storey, J W; Thompson, R I; van der Werf, D P; Yamazaki, Y

    2012-01-01

    The ALPHA experiment has succeeded in trapping antihydrogen, a major milestone on the road to spectroscopic comparisons of antihydrogen with hydrogen. An annihilation vertex detector, which determines the time and position of antiproton annihilations, has been central to this achievement. This detector, an array of double-sided silicon microstrip detector modules arranged in three concentric cylindrical tiers, is sensitive to the passage of charged particles resulting from antiproton annihilation. This article describes the method used to reconstruct the annihilation location and to distinguish the annihilation signal from the cosmic ray background. Recent experimental results using this detector are outlined.

  16. Selective Sommerfeld Enhancement of p-wave Dark Matter Annihilation

    CERN Document Server

    Das, Anirban

    2016-01-01

    We point out a mechanism for selective Sommerfeld enhancement (suppression) of odd (even) partial waves of dark matter co/annihilation. Using this, the usually velocity-suppressed p-wave annihilation can dominate the annihilation signals in the present Universe. The selection mechanism is a manifestation of an exchange symmetry and is generic for DM with off-diagonal long-range interactions. As a consequence, the relic and late-time annihilation rates are parametrically different and a distinctive phenomenology, with large but strongly velocity-dependent annihilation rates, is predicted.

  17. Estimating tail probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Carr, D.B.; Tolley, H.D.

    1982-12-01

    This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.
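
    The specific exponentially weighted estimators of the report are not reproduced here; as a generic illustration of tail extrapolation beyond the data, the sketch below fits an exponential to the exceedances over a high threshold (a peaks-over-threshold assumption) and compares the extrapolated tail probability with the empirical one.

```python
import numpy as np

rng = np.random.default_rng(7)
sample = rng.standard_exponential(500) * 2.0      # illustrative data with an exponential tail

threshold = np.quantile(sample, 0.90)
exceed = sample[sample > threshold] - threshold
beta = exceed.mean()                              # MLE of the exponential tail scale

def tail_prob(x):
    # P(X > x) for x beyond the threshold: empirical probability of exceeding the
    # threshold times the fitted exponential survival function beyond it.
    return (sample > threshold).mean() * np.exp(-(x - threshold) / beta)

for x in [threshold + 2.0, threshold + 6.0, threshold + 10.0]:
    emp = (sample > x).mean()
    print(f"x = {x:5.2f}:  empirical {emp:.4f}   extrapolated {tail_prob(x):.4f}")
```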

  18. Probability density fittings of corrosion test-data: Implications on C6H15NO3 effectiveness on concrete steel-rebar corrosion

    Indian Academy of Sciences (India)

    Joshua Olusegun Okeniyi; Idemudia Joshua Ambrose; Stanley Okechukwu Okpala; Oluwafemi Michael Omoniyi; Isaac Oluwaseun Oladele; Cleophas Akintoye Loto; Patricia Abimbola Idowu Popoola

    2014-06-01

    In this study, corrosion test-data of steel-rebar in concrete were fitted with the Normal, Gumbel and Weibull probability distribution functions. This was done to investigate whether the fitted test-data are suitable for modelling the effectiveness of triethanolamine (TEA, C6H15NO3) admixtures on the corrosion of steel-rebar in concrete in NaCl and in H2SO4 test-media. Six different concentrations of TEA were admixed in replicates of steel-reinforced concrete samples, which were immersed in the saline/marine and the microbial/industrial simulating test-environments for seventy-five days. The distribution fittings of the non-destructive electrochemical measurements were then subjected to the Kolmogorov–Smirnov goodness-of-fit statistics and to analysis-of-variance modelling, to check the compatibility of the test-data with the fittings and to test significance. Although all fittings of the test-data followed similar trends of significance testing, the corrosion-rate test-data followed the Weibull distribution more closely than the Normal and the Gumbel distributions, thus supporting use of the Weibull fittings for modelling effectiveness. The effectiveness models based on these fittings identified 0.083% TEA as having the optimal inhibition efficiency, η = 72.17 ± 10.68%, in the NaCl medium, while 0.667% TEA was the only admixture with positive effectiveness, η = 56.45 ± 15.85%, in the H2SO4 medium. These results bear implications on the concentrations of TEA required for effective corrosion protection of concrete steel-rebar in saline/marine and in industrial/microbial environments.
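
    As an illustration of the kind of distribution fitting and Kolmogorov–Smirnov comparison described above, the following sketch fits the three candidate distributions to synthetic stand-in data with scipy; the sample and parameters are invented for the example and are not the study's corrosion measurements.

```python
from scipy import stats

# Synthetic stand-in for 75 corrosion-rate measurements; the real study
# used electrochemical test-data of steel-rebar in concrete.
corrosion_rate = stats.weibull_min.rvs(c=1.8, scale=0.05, size=75, random_state=1)

candidates = {
    "Normal":  stats.norm,
    "Gumbel":  stats.gumbel_r,
    "Weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(corrosion_rate)                     # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(corrosion_rate, dist.cdf, args=params)
    print(f"{name:8s} KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```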

  19. Neutrino flavor ratios as diagnostic of solar WIMP annihilation

    CERN Document Server

    Lehnert, Ralf

    2007-01-01

    We consider the neutrino (and antineutrino) flavors arriving at Earth for neutrinos produced in the annihilation of weakly interacting massive particles (WIMPs) in the Sun's core. Solar-matter effects on the flavor propagation of the resulting ≳GeV neutrinos are studied analytically within a density-matrix formalism. Matter effects, including mass-state level-crossings, influence the flavor fluxes considerably. The exposition herein is somewhat pedagogical, in that it starts with adiabatic evolution of single flavors from the Sun's center, with $\theta_{13}$ set to zero, and progresses to fully realistic processing of the flavor ratios expected in WIMP decay, from the Sun's core to the Earth. In the fully realistic calculation, non-adiabatic level-crossing is included, as are possible nonzero values for $\theta_{13}$ and the CP-violating phase $\delta$. Due to resonance enhancement in matter, nonzero values of $\theta_{13}$ even smaller than a degree can noticeably affect flavor propagation. Both normal...

  20. Study of ion beam induced depolymerization using positron annihilation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Puglisi, O. E-mail: opuglisi@dipchi.unict.it; Fragala, M.E.; Lynn, K.G.; Petkov, M.; Weber, M.; Somoza, A.; Dupasquier, A.; Quasso, F

    2001-04-01

    Ion beam induced depolymerization of polymers is a special class of ion beam induced chemical reaction which gives rise to catastrophic 'unzipping' of macromolecules with production of large amounts of the monomer, of the order of many hundreds of monomer molecules per macromolecule. The possible modification of the density at the microscopic level prompted us to undertake a study of this effect utilizing positron annihilation techniques in Poly(methylmethacrylate) (PMMA) before and after bombardment with 300 keV He+ ions at 200 °C. Preliminary results shown here indicate that before bombardment there is a reproducible dependence of the nano-hole distribution on the sample history. Moreover, at 200 °C we do not detect formation of new cavities as a consequence of the strong depolymerization that occurs under the ion beam. The possible correlation of these findings with transport properties of PMMA at temperatures higher than the glass transition temperature will be discussed.

  1. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look out etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds... probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving...

  2. Positron annihilation in neutron-irradiated germanium

    Energy Technology Data Exchange (ETDEWEB)

    Bartenev, G.M.; Bardyshev, I.I.; Erchak, D.P.; Stel' makh, V.F.; Tsyganov, A.D.

    1979-04-01

    The annealing of radiation defects in a germanium single crystal irradiated with 10^18 neutrons/cm^2 was studied by positron annihilation, ESR, and resistivity measurements. It was found that positrons are trapped by radiation defects. The intensity of the narrow component of the angular correlation of the annihilation radiation yielded the concentration of defect clusters in the irradiated sample, n_d ≈ 3 × 10^14 cm^-3. Three characteristic annealing stages were identified. At 160-200 °C, point defects were annealed within the crystal. At 200-320 °C, there was 'loosening' of the clusters, and the charge state of the defects changed. At 320-550 °C, the clusters were annealed.

  3. Vector dark matter annihilation with internal bremsstrahlung

    Science.gov (United States)

    Bambhaniya, Gulab; Kumar, Jason; Marfatia, Danny; Nayak, Alekha C.; Tomar, Gaurav

    2017-03-01

    We consider scenarios in which the annihilation of self-conjugate spin-1 dark matter to a Standard Model fermion-antifermion final state is chirality suppressed, but where this suppression can be lifted by the emission of an additional photon via internal bremsstrahlung. We find that this scenario can only arise if the initial dark matter state is polarized, which can occur in the context of self-interacting dark matter. In particular, this is possible if the dark matter pair forms a bound state that decays to its ground state before the constituents annihilate. We show that the shape of the resulting photon spectrum is the same as for self-conjugate spin-0 and spin-1/2 dark matter, but the normalization is less heavily suppressed in the limit of heavy mediators.

  4. Vector dark matter annihilation with internal bremsstrahlung

    Directory of Open Access Journals (Sweden)

    Gulab Bambhaniya

    2017-03-01

    We consider scenarios in which the annihilation of self-conjugate spin-1 dark matter to a Standard Model fermion–antifermion final state is chirality suppressed, but where this suppression can be lifted by the emission of an additional photon via internal bremsstrahlung. We find that this scenario can only arise if the initial dark matter state is polarized, which can occur in the context of self-interacting dark matter. In particular, this is possible if the dark matter pair forms a bound state that decays to its ground state before the constituents annihilate. We show that the shape of the resulting photon spectrum is the same as for self-conjugate spin-0 and spin-1/2 dark matter, but the normalization is less heavily suppressed in the limit of heavy mediators.

  5. Vector dark matter annihilation with internal bremsstrahlung

    CERN Document Server

    Bambhaniya, Gulab; Marfatia, Danny; Nayak, Alekha C; Tomar, Gaurav

    2016-01-01

    We consider scenarios in which the annihilation of self-conjugate spin-1 dark matter to a Standard Model fermion-antifermion final state is chirality suppressed, but where this suppression can be lifted by the emission of an additional photon via internal bremsstrahlung. We find that this scenario can only arise if the initial dark matter state is polarized, which can occur in the context of self-interacting dark matter. In particular, this is possible if the dark matter pair forms a bound state that decays to its ground state before the constituents annihilate. We show that the shape of the resulting photon spectrum is the same as for self-conjugate spin-0 and spin-1/2 dark matter, but the normalization is less heavily suppressed in the limit of heavy mediators.

  6. Dark Matter Annihilation Decay at The LHC

    CERN Document Server

    Tsai, Yuhsin; Zhao, Yue

    2015-01-01

    Collider experiments provide an opportunity to shed light on dark matter (DM) self-interactions. In this work, we study the possibility of generating DM bound states -- the Darkonium -- at the LHC and discuss how the annihilation decay of the Darkonium produces force carriers. We focus on two popular scenarios that contain large DM self-couplings: the Higgsinos in the $\lambda$-SUSY model, and the self-interacting DM (SIDM) framework. After forming bound states, the DM particles annihilate into force mediators, which decay into Standard Model particles either through a prompt or a displaced process. This generates interesting signals for heavy resonance searches. We calculate the production rate of the bound states and study the projected future constraints from the existing heavy resonance searches.

  7. Dark matter annihilation near a naked singularity

    CERN Document Server

    Patil, Mandar

    2011-01-01

    We investigate here the dark matter annihilation near a Kerr naked singularity. We show that when dark matter particles collide and annihilate in the vicinity of the singularity, the escape fraction to infinity of the produced particles is much larger, at least 10^2-10^3 times the corresponding black hole values. As high energy collisions are generically possible near a naked singularity, this provides an excellent environment for efficient conversion of dark matter into ordinary standard model particles. If the center of the galaxy harbored such a naked singularity, it follows that the observed emergent flux of particles with energy comparable to the mass of the dark matter particles is much larger compared to the black hole case, thus providing an intriguing observational test of the nature of the galactic center.

  8. Recent development of positron annihilation methods

    CERN Document Server

    Doyama, M

    2002-01-01

    When a positron enters a solid or liquid, it moves through the matter and is annihilated with an electron, emitting two gamma rays in opposite directions, each of about 511 keV. Positron annihilation experiments have been developed along three lines: the angular correlation between the two gamma rays, the energy analysis of the emitted gamma rays, and the positron lifetime. The angular correlation between the two gamma rays is determined with a gamma-ray position detector. The energy analysis is carried out by S-W analysis and by the Coincidence Doppler Broadening (CDB) method. Positron lifetimes are determined by the gamma-gamma lifetime measurement method, the beta+-gamma lifetime measurement method, and other methods using the waveform of the photomultiplier and the determination of the time and frequency of the gamma rays. Positron beams are applied to positron scattering, positron diffraction, low energy positron diffraction (LEPD), PELS, LEPSD, PAES, the positron re-emission imaging microscope (PRIM) and positron channeling. The example of the CDB method...

  9. Shocking Signals of Dark Matter Annihilation

    CERN Document Server

    Davis, Jonathan H; Boehm, Celine; Kotera, Kumiko; Norman, Colin

    2015-01-01

    We examine whether charged particles injected by self-annihilating Dark Matter into regions undergoing Diffusive Shock Acceleration (DSA) can be accelerated to high energies. We consider three astrophysical sites where shock acceleration is supposed to occur, namely the Galactic Centre, galaxy clusters and Active Galactic Nuclei (AGN). For the Milky Way, we find that the acceleration of cosmic rays injected by dark matter could lead to a bump in the cosmic ray spectrum, provided that the product of the efficiency of the acceleration mechanism and the concentration of DM particles is high enough. Among the various acceleration sources that we consider (namely supernova remnants (SNRs), Fermi bubbles and AGN jets), we find that the Fermi bubbles are a potentially more efficient accelerator than SNRs. However, both could in principle accelerate electrons and protons injected by dark matter to very high energies. At the extragalactic level, the acceleration of dark matter annihilation products could be responsible fo...

  10. Annihilator Conditions on Formal Power Series

    Institute of Scientific and Technical Information of China (English)

    Gary F. Birkenmeier; Feng-Kuo Huang

    2002-01-01

    The main purpose of this paper is to extend the study of various annihilator conditions on polynomials to formal power series in which addition and substitution are used as operations. This process is not as routine as in the ring case because the substitution of one formal power series into another may not be well defined. Two approaches are introduced to solve this problem. A result of Armendariz on the polynomial extension of a reduced Baer ring is extended to the study of entire functions. It is shown that nearrings of entire functions satisfy certain annihilator conditions. Our results are applied to obtain connections between the multiplicative and substitution structures of various formal power series rings.

  11. Searching for Dark Matter Annihilation in the Smith High-Velocity Cloud

    Science.gov (United States)

    Drlica-Wagner, Alex; Gomez-Vargas, German A.; Hewitt, John W.; Linden, Tim; Tibaldo, Luigi

    2014-01-01

    Recent observations suggest that some high-velocity clouds may be confined by massive dark matter halos. In particular, the proximity and proposed dark matter content of the Smith Cloud make it a tempting target for the indirect detection of dark matter annihilation. We argue that the Smith Cloud may be a better target than some Milky Way dwarf spheroidal satellite galaxies and use gamma-ray observations from the Fermi Large Area Telescope to search for a dark matter annihilation signal. No significant gamma-ray excess is found coincident with the Smith Cloud, and we set strong limits on the dark matter annihilation cross section assuming a spatially extended dark matter profile consistent with dynamical modeling of the Smith Cloud. Notably, these limits exclude the canonical thermal relic cross section (approximately 3 × 10^-26 cm^3 s^-1) for dark matter masses less than or approximately 30 GeV annihilating via the b b̄ or τ+τ− channels for certain assumptions of the dark matter density profile; however, uncertainties in the dark matter content of the Smith Cloud may significantly weaken these constraints.
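
    Limits of this kind scale with the so-called J-factor, the line-of-sight integral of the squared dark matter density over the region of interest. The sketch below evaluates such an integral for an NFW profile by brute-force quadrature; the profile parameters, distance and opening angle are placeholders, not the values fitted to the Smith Cloud.

```python
import numpy as np
from scipy import integrate

def nfw_density(r, rho_s, r_s):
    """NFW profile rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor(dist_kpc, rho_s, r_s, psi_max_deg, s_max_kpc=250.0):
    """Annihilation J-factor: integral of rho^2 over the line of sight and
    over the solid angle within an opening angle psi_max around the target.
    Units follow the inputs (no conversion to GeV^2 cm^-5 is done here)."""
    def integrand(s, psi):
        # Distance from the halo centre for a point at line-of-sight depth s.
        r = np.sqrt(dist_kpc**2 + s**2 - 2.0 * dist_kpc * s * np.cos(psi))
        return nfw_density(r, rho_s, r_s) ** 2

    def los(psi):
        val, _ = integrate.quad(integrand, 0.0, s_max_kpc, args=(psi,), limit=200)
        return val * np.sin(psi)

    psi_max = np.radians(psi_max_deg)
    val, _ = integrate.quad(los, 1e-4, psi_max, limit=200)
    return 2.0 * np.pi * val

# Illustrative numbers only (not the Smith Cloud fit parameters).
print(j_factor(dist_kpc=12.4, rho_s=0.5, r_s=1.0, psi_max_deg=1.0))
```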

  12. Surfaces of colloidal PbSe nanocrystals probed by thin-film positron annihilation spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Chai, L.; Schut, H.; Schaarenburg, L. C. van; Eijt, S. W. H., E-mail: S.W.H.Eijt@tudelft.nl [Department of Radiation, Radionuclides and Reactors, Faculty of Applied Sciences, Delft University of Technology, Mekelweg 15, NL-2629 JB Delft (Netherlands); Al-Sawai, W.; Barbiellini, B.; Bansil, A. [Physics Department, Northeastern University, Boston, Massachusetts 02115 (United States); Gao, Y. [Department of Chemical Engineering, Faculty of Applied Sciences, Delft University of Technology, Julianalaan 136, NL-2628 BL Delft (Netherlands); Kavli Institute of Nanoscience, Faculty of Applied Sciences, Delft University of Technology, Lorentzweg 1, NL-2628 CJ Delft (Netherlands); Houtepen, A. J. [Department of Chemical Engineering, Faculty of Applied Sciences, Delft University of Technology, Julianalaan 136, NL-2628 BL Delft (Netherlands); Mijnarends, P. E. [Department of Radiation, Radionuclides and Reactors, Faculty of Applied Sciences, Delft University of Technology, Mekelweg 15, NL-2629 JB Delft (Netherlands); Physics Department, Northeastern University, Boston, Massachusetts 02115 (United States); Huis, M. A. van [Kavli Institute of Nanoscience, Faculty of Applied Sciences, Delft University of Technology, Lorentzweg 1, NL-2628 CJ Delft (Netherlands); Ravelli, L.; Egger, W. [Institut für Angewandte Physik und Messtechnik, Universität der Bundeswehr München, Werner-Heisenberg-Weg 39, D-85579 Neubiberg (Germany); Kaprzyk, S. [Physics Department, Northeastern University, Boston, Massachusetts 02115 (United States); Academy of Mining and Metallurgy AGH, PL-30059 Kraków (Poland)

    2013-08-01

    Positron annihilation lifetime spectroscopy and positron-electron momentum density (PEMD) studies on multilayers of PbSe nanocrystals (NCs), supported by transmission electron microscopy, show that positrons are strongly trapped at NC surfaces, where they provide insight into the surface composition and electronic structure of PbSe NCs. Our analysis indicates abundant annihilation of positrons with Se electrons at the NC surfaces and with O electrons of the oleic ligands bound to Pb ad-atoms at the NC surfaces, which demonstrates that positrons can be used as a sensitive probe to investigate the surface physics and chemistry of nanocrystals inside multilayers. Ab initio electronic structure calculations provide detailed insight in the valence and semi-core electron contributions to the positron-electron momentum density of PbSe. Both lifetime and PEMD are found to correlate with changes in the particle morphology characteristic of partial ligand removal.

  13. Surfaces of colloidal PbSe nanocrystals probed by thin-film positron annihilation spectroscopy

    Directory of Open Access Journals (Sweden)

    L. Chai

    2013-08-01

    Positron annihilation lifetime spectroscopy and positron-electron momentum density (PEMD) studies on multilayers of PbSe nanocrystals (NCs), supported by transmission electron microscopy, show that positrons are strongly trapped at NC surfaces, where they provide insight into the surface composition and electronic structure of PbSe NCs. Our analysis indicates abundant annihilation of positrons with Se electrons at the NC surfaces and with O electrons of the oleic ligands bound to Pb ad-atoms at the NC surfaces, which demonstrates that positrons can be used as a sensitive probe to investigate the surface physics and chemistry of nanocrystals inside multilayers. Ab initio electronic structure calculations provide detailed insight in the valence and semi-core electron contributions to the positron-electron momentum density of PbSe. Both lifetime and PEMD are found to correlate with changes in the particle morphology characteristic of partial ligand removal.

  14. Landau-Zener Probability Reviewed

    CERN Document Server

    Valencia, C

    2008-01-01

    We examine the survival probability for neutrino propagation through matter with variable density. We present a new method to calculate the level-crossing probability that differs from Landau's method by a constant factor, which is relevant in the interpretation of the neutrino flux from a supernova explosion.
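
    For orientation, the textbook Landau-Zener expression for the level-crossing (hopping) probability with a linearly varying density is P_c = exp(-πγ/2), with γ the adiabaticity parameter at resonance. The sketch below evaluates it for illustrative oscillation parameters; it does not include the corrected constant factor that the record refers to.

```python
import numpy as np

HBARC_EV_M = 1.97327e-7   # hbar*c in eV*m, used for unit conversion

def landau_zener_crossing_probability(delta_m2_eV2, sin2_2theta,
                                      energy_MeV, scale_height_km):
    """Level-crossing probability P_c = exp(-pi*gamma/2) for a linearly
    varying electron density, with the adiabaticity parameter
        gamma = (Delta m^2 / 2E) * sin^2(2theta)/cos(2theta) * L_res,
    where L_res = |d ln n_e/dr|^-1 is the density scale height at resonance.
    This is the textbook Landau-Zener form, not the corrected expression
    discussed in the record."""
    cos_2theta = np.sqrt(1.0 - sin2_2theta)            # assumes theta < pi/4
    # Delta m^2 / 2E, converted from eV to an inverse length in 1/m.
    k = delta_m2_eV2 / (2.0 * energy_MeV * 1.0e6) / HBARC_EV_M
    gamma = k * sin2_2theta / cos_2theta * scale_height_km * 1.0e3
    return np.exp(-np.pi * gamma / 2.0)

# Illustrative solar-like numbers (not fitted values): strongly adiabatic,
# so the crossing probability comes out essentially zero.
print(landau_zener_crossing_probability(delta_m2_eV2=7.5e-5, sin2_2theta=0.85,
                                        energy_MeV=10.0, scale_height_km=6.6e4))
```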

  15. Structure of water + acetonitrile solutions from acoustic and positron annihilation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Jerie, Kazimierz [Institute of Experimental Physics, University of WrocIaw, WrocIaw (Poland); Baranowski, Andrzej [Institute of Experimental Physics, University of WrocIaw, WrocIaw (Poland); Koziol, Stan [Waters Corp., 34 Maple St., Milford, MA 01757 (United States); Glinski, Jacek [Faculty of Chemistry, University of WrocIaw, WrocIaw (Poland)]. E-mail: glin@wchuwr.chem.uni.wroc.pl; Burakowski, Andrzej [Faculty of Chemistry, University of WrocIaw, WrocIaw (Poland)

    2005-03-14

    We report the results of acoustic and positron annihilation measurements in aqueous solutions of acetonitrile (CH{sub 3}CN). Hydrophobicity of the solute is discussed, as well as the possibility of describing the title system in terms of hydrophobic solvation. A new method of calculating the 'ideal' positronium lifetimes is proposed, based on the mean volume of cavities (holes) in the liquid structure available for the positronium pseudoatom. The results are almost identical with those obtained from molar volumes using the concept of Levay et al. On the other hand, the same calculations performed using the 'bubble' model of annihilation yield very different results. It seems that either acetonitrile forms clathrate-like hydrates of untypical architecture with water, or it is too weak a hydrophobic agent to form clathrate-like hydrates at all. The former interpretation seems to be more probable.

  16. Structure of Aqueous Solutions of Acetonitrile Investigated by Acoustic and Positron Annihilation Measurements

    Science.gov (United States)

    Jerie, K.; Baranowski, A.; Koziol, S.; Burakowski, A.

    2005-05-01

    We report the results of acoustic and positron annihilation measurements in aqueous solutions of acetonitrile (CH3CN). Hydrophobicity of the solute is discussed, as well as the possibility of describing the title system in terms of hydrophobic solvation. The concept of Levay et al. for calculating the "ideal" positronium lifetimes is applied, based on the mean volume of cavities (holes) in the liquid structure available for the positronium pseudoatom. The same calculations performed using the Tao model of annihilation yield very different results. It can be concluded that either acetonitrile forms clathrate-like hydrates of untypical architecture with water, or it is too weak a hydrophobic agent to form clathrate-like hydrates at all. The former interpretation seems to be more probable.

  17. Structure of water + acetonitrile solutions from acoustic and positron annihilation measurements

    Science.gov (United States)

    Jerie, Kazimierz; Baranowski, Andrzej; Koziol, Stan; Gliński, Jacek; Burakowski, Andrzej

    2005-03-01

    We report the results of acoustic and positron annihilation measurements in aqueous solutions of acetonitrile (CH 3CN). Hydrophobicity of the solute is discussed, as well as the possibility of describing the title system in terms of hydrophobic solvation. A new method of calculating the "ideal" positronium lifetimes is proposed, based on the mean volume of cavities (holes) in the liquid structure available for the positronium pseudoatom. The results are almost identical with those obtained from molar volumes using the concept of Levay et al. On the other hand, the same calculations performed using the "bubble" model of annihilation yield very different results. It seems that either acetonitrile forms clathrate-like hydrates of untypical architecture with water, or it is too weak a hydrophobic agent to form clathrate-like hydrates at all. The former interpretation seems to be more probable.
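
    The free-volume ("bubble"/Tao-type) picture referred to in these records is commonly quantified with the Tao-Eldrup relation, which maps a mean cavity radius onto an ortho-positronium pick-off lifetime. The sketch below uses the standard parameters (ΔR = 0.166 nm, 0.5 ns prefactor); it illustrates the relation and is not a reproduction of the authors' calculation.

```python
import numpy as np

DELTA_R_NM = 0.166   # empirical electron-layer thickness in the Tao-Eldrup model

def tao_eldrup_lifetime_ns(radius_nm):
    """Ortho-positronium pick-off lifetime (ns) for a spherical cavity of
    radius R, using the standard Tao-Eldrup relation
        tau = 0.5 ns / [1 - R/R0 + sin(2*pi*R/R0)/(2*pi)],  R0 = R + DeltaR."""
    r0 = radius_nm + DELTA_R_NM
    x = radius_nm / r0
    return 0.5 / (1.0 - x + np.sin(2.0 * np.pi * x) / (2.0 * np.pi))

def cavity_radius_from_volume(volume_nm3):
    """Mean cavity radius assuming a spherical hole of the given volume."""
    return (3.0 * volume_nm3 / (4.0 * np.pi)) ** (1.0 / 3.0)

# Example: a mean hole volume of 0.1 nm^3 (illustrative, not a fitted value).
r = cavity_radius_from_volume(0.1)
print(f"R = {r:.3f} nm, tau_oPs = {tao_eldrup_lifetime_ns(r):.2f} ns")
```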

  18. LOVO Electrons: The Special Electrons of Molecules in Positron Annihilation Process

    Science.gov (United States)

    Ma, Xiaoguang; Wang, Lizhi; Yang, Chuanlu

    2014-05-01

    The electrons in the lowest occupied valence orbital (LOVO) of molecules have been found to dominate the gamma-ray spectra in the positron-electron annihilation process. The mechanism of this phenomenon is revealed in the present work for the first time. Theoretical quantitative analyses are applied to all noble gas atoms and to the molecules CH4, O2, C6H6, and C6H14. On average, more than 70% of LOVO electrons and less than 30% of highest occupied molecular orbital (HOMO) electrons lie within the full width at half-maximum (FWHM) region of the momentum spectra. This indicates that the LOVO electrons have at least twice the probability of the HOMO electrons of being found in this region. The predicted positron annihilation spectra are therefore generally dominated by the innermost LOVO electrons instead of the outermost HOMO electrons under the plane-wave approximation.

  19. Weak annihilation cusp inside the dark matter spike about a black hole

    CERN Document Server

    Shapiro, Stuart L

    2016-01-01

    We reinvestigate the effect of annihilations on the distribution of collisionless dark matter (DM) in a spherical density spike around a massive black hole. We first construct a very simple, pedagogic, analytic model for an isotropic phase space distribution function that accounts for annihilation and reproduces the "weak cusp" found by Vasiliev for DM deep within the spike and away from its boundaries. The DM density in the cusp varies as $r^{-1/2}$ for $s$-wave annihilation, where $r$ is the distance from the central black hole, and is not a flat "plateau" profile. We then extend this model by incorporating a loss cone that accounts for the capture of DM particles by the hole. The loss cone is implemented by a boundary condition that removes capture orbits, resulting in an anisotropic distribution function. Finally, we evolve an initial spike distribution function by integrating the Boltzmann equation to show how the weak cusp grows and its density decreases with time. We treat two cases, one for $s$-wave a...

  20. Propensity, Probability, and Quantum Theory

    Science.gov (United States)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  1. Positron annihilation induced Auger and gamma spectroscopies of surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Weiss, A.H. [Physics Department, Box 19059, University of Texas at Arlington, Arlington, TX 76019 (United States)]. E-mail: weiss@uta.edu; Fazleev, N.G. [Physics Department, Box 19059, University of Texas at Arlington, Arlington, TX 76019 (United States); Nadesalingam, M.P. [Physics Department, Box 19059, University of Texas at Arlington, Arlington, TX 76019 (United States); Mukherjee, S. [Physics Department, Box 19059, University of Texas at Arlington, Arlington, TX 76019 (United States); Xie, S. [Physics Department, Box 19059, University of Texas at Arlington, Arlington, TX 76019 (United States); Zhu, J. [Physics Department, Box 19059, University of Texas at Arlington, Arlington, TX 76019 (United States); Davis, B.R. [Physics Department, Box 19059, University of Texas at Arlington, Arlington, TX 76019 (United States)

    2007-02-15

    The annihilation of positrons with core electrons results in an element specific signature in the spectra of Auger-electron and annihilation gamma rays. Because a large fraction of positrons implanted at low energies become trapped just outside the surface, annihilation induced Auger and Gamma signals probe the surfaces of solids with single atomic layer depth resolution. Recent applications of positron annihilation-induced Auger electron spectroscopy (PAES) and Auger-gamma coincidence spectroscopy (AGCS) and future applications of Auger-gamma and gamma-gamma coincidence spectroscopy are discussed.

  2. Positron annihilation induced Auger and gamma spectroscopies of surfaces

    Science.gov (United States)

    Weiss, A. H.; Fazleev, N. G.; Nadesalingam, M. P.; Mukherjee, S.; Xie, S.; Zhu, J.; Davis, B. R.

    2007-02-01

    The annihilation of positrons with core electrons results in an element specific signature in the spectra of Auger-electron and annihilation gamma rays. Because a large fraction of positrons implanted at low energies become trapped just outside the surface, annihilation induced Auger and Gamma signals probe the surfaces of solids with single atomic layer depth resolution. Recent applications of positron annihilation-induced Auger electron spectroscopy (PAES) and Auger-gamma coincidence spectroscopy (AGCS) and future applications of Auger-gamma and gamma-gamma coincidence spectroscopy are discussed.

  3. First-principles calculations of momentum distributions of annihilating electron-positron pairs in defects in UO2

    Science.gov (United States)

    Wiktor, Julia; Jomard, Gérald; Torrent, Marc; Bertolus, Marjorie

    2017-01-01

    We performed first-principles calculations of the momentum distributions of annihilating electron-positron pairs in vacancies in uranium dioxide. Full atomic relaxation effects (due to both electronic and positronic forces) were taken into account and self-consistent two-component density functional theory schemes were used. We present one-dimensional momentum distributions (Doppler-broadened annihilation radiation line shapes) along with line-shape parameters S and W. We studied the effect of the charge state of the defect on the Doppler spectra. The effect of krypton incorporation in the vacancy was also considered and it was shown that it should be possible to observe the fission gas incorporation in defects in UO2 using positron annihilation spectroscopy. We suggest that the Doppler broadening measurements can be especially useful for studying impurities and dopants in UO2 and of mixed actinide oxides.
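
    The S and W line-shape parameters mentioned here are conventionally defined as the fractions of counts in a central (low-momentum) window and in high-momentum wing windows of the Doppler-broadened annihilation line. A minimal sketch, with illustrative window boundaries rather than the ones used in the study:

```python
import numpy as np

def line_shape_parameters(momentum, counts, s_window=3.0, w_window=(10.0, 25.0)):
    """Compute the standard Doppler-broadening line-shape parameters:
    S = fraction of counts in the low-momentum (central) window,
    W = fraction of counts in the high-momentum (wing) windows.
    `momentum` is |p_L| in units of 1e-3 m_e c; the window boundaries are
    illustrative and vary between laboratories."""
    momentum = np.abs(np.asarray(momentum, dtype=float))
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    s = counts[momentum <= s_window].sum() / total
    w_lo, w_hi = w_window
    w = counts[(momentum >= w_lo) & (momentum <= w_hi)].sum() / total
    return s, w

# Synthetic two-Gaussian spectrum just to exercise the function.
p = np.linspace(-30, 30, 601)
spectrum = np.exp(-0.5 * (p / 4.0) ** 2) + 0.02 * np.exp(-0.5 * (p / 12.0) ** 2)
S, W = line_shape_parameters(p, spectrum)
print(f"S = {S:.3f}, W = {W:.3f}")
```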

  4. On baryogenesis from dark matter annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Bernal, Nicolás [ICTP South American Institute for Fundamental Research and Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo, SP 01140-070 (Brazil); Colucci, Stefano; Ubaldi, Lorenzo [Bethe Center for Theoretical Physics and Physikalisches Institut, Universität Bonn, Nußallee 12, D-53115 Bonn (Germany); Josse-Michaux, François-Xavier [Centro de Física Teórica de Partículas CFTP, Instituto Superior Técnico, Technical University of Lisbon, 1049-001 Lisbon (Portugal); Racker, J., E-mail: nicolas@ift.unesp.br, E-mail: colucci@th.physik.uni-bonn.de, E-mail: fxjossemichaux@gmail.com, E-mail: racker@ific.uv.es, E-mail: ubaldi@th.physik.uni-bonn.de [Instituto de Física corpuscular (IFIC), Universidad de Valencia-CSIC Edificio de Institutos de Paterna, Apt. 22085, 46071 Valencia (Spain)

    2013-10-01

    We study in detail the conditions to generate the baryon asymmetry of the universe from the annihilation of dark matter. This scenario requires a low energy mechanism for thermal baryogenesis, hence we first discuss some of these mechanisms together with the specific constraints due to the connection with the dark matter sector. Then we show that, contrary to what was stated in previous studies, it is possible to generate the cosmological asymmetry without adding a light sterile dark sector, both in models with violation and with conservation of B−L. In addition, one of the models we propose yields some connection to neutrino masses.

  5. Vector dark matter annihilation with internal bremsstrahlung

    OpenAIRE

    Bambhaniya, Gulab; Kumar, Jason; Marfatia, Danny; Nayak, Alekha C.; Tomar, Gaurav

    2016-01-01

    We consider scenarios in which the annihilation of self-conjugate spin-1 dark matter to a Standard Model fermion-antifermion final state is chirality suppressed, but where this suppression can be lifted by the emission of an additional photon via internal bremsstrahlung. We find that this scenario can only arise if the initial dark matter state is polarized, which can occur in the context of self-interacting dark matter. In particular, this is possible if the dark matter pair forms a bound st...

  6. Particle-antiparticle asymmetries from annihilations

    CERN Document Server

    Baldes, Iason; Petraki, Kalliopi; Volkas, Raymond R

    2014-01-01

    An extensively studied mechanism to create particle-antiparticle asymmetries is the out-of-equilibrium and CP-violating decay of a heavy particle. Here we instead examine how asymmetries can arise purely from 2 → 2 annihilations rather than from the usual 1 → 2 decays and inverse decays. We review the general conditions on the reaction rates that arise from S-matrix unitarity and CPT invariance, and show how these are implemented in the context of a simple toy model. We formulate the Boltzmann equations for this model, and present an example solution.

  7. Research of Dynamic Depreciation of Medical Equipment Based on χ² Distribution Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    邓厚斌; 葛毅; 范璐敏; 刘晓雯; 李盈盈

    2012-01-01

    In order to carry out depreciation accounting of medical equipment more reasonably, this paper analyses and compares the advantages and disadvantages of several common depreciation methods. Combining these with the use efficiency of medical equipment, it proposes a static depreciation-rate distribution fitted to the χ² distribution probability density function, introduces the benchmark benefit ratio of funds, and establishes a dynamic depreciation method for medical equipment.
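
    A minimal sketch of the idea of weighting annual depreciation by a χ² probability density is given below; the degrees of freedom, service life and asset values are hypothetical and are not the parameters derived in the paper.

```python
import numpy as np
from scipy.stats import chi2

def chi2_depreciation_schedule(cost, salvage, life_years, dof=4):
    """Distribute the total depreciation (cost - salvage) over the service
    life in proportion to a chi-square pdf evaluated at each year.  The
    degrees of freedom and the mapping of years onto the pdf support are
    illustrative choices, not the paper's fitted parameters."""
    years = np.arange(1, life_years + 1)
    weights = chi2.pdf(years, df=dof)
    weights /= weights.sum()                  # normalize so the sum is 1
    return weights * (cost - salvage)

schedule = chi2_depreciation_schedule(cost=500_000, salvage=25_000, life_years=8)
for year, dep in enumerate(schedule, start=1):
    print(f"year {year}: depreciation = {dep:,.0f}")
```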

  8. Search for dark matter annihilation in the Galactic Center with IceCube-79

    Science.gov (United States)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Altmann, D.; Anderson, T.; Archinger, M.; Arguelles, C.; Arlen, T. C.; Auffenberg, J.; Bai, X.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; Beiser, E.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brayeur, L.; Bretz, H.-P.; Brown, A. M.; Buzinsky, N.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Christy, B.; Clark, K.; Classen, L.; Coenders, S.; Cowen, D. F.; Cruz Silva, A. H.; Daughhetee, J.; Davis, J. C.; Day, M.; de André, J. P. A. M.; De Clercq, C.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; Dumm, J. P.; Dunkman, M.; Eagan, R.; Eberhardt, B.; Ehrhardt, T.; Eichmann, B.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fahey, S.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Felde, J.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Flis, S.; Fuchs, T.; Glagla, M.; Gaisser, T. K.; Gaior, R.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Gier, D.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Góra, D.; Grant, D.; Gretskov, P.; Groh, J. C.; Groß, A.; Ha, C.; Haack, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Hansmann, B.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hellwig, D.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Holzapfel, K.; Homeier, A.; Hoshina, K.; Huang, F.; Huber, M.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jero, K.; Jurkovic, M.; Kaminsky, B.; Kappes, A.; Karg, T.; Karle, A.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kemp, J.; Kheirandish, A.; Kiryluk, J.; Kläs, J.; Klein, S. R.; Kohnen, G.; Kolanoski, H.; Konietz, R.; Koob, A.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, G.; Kroll, M.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leuner, J.; Lünemann, J.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Maruyama, R.; Mase, K.; Matis, H. S.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meli, A.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Middell, E.; Middlemas, E.; Miller, J.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke, A.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Palczewski, T.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Pütz, J.; Quinnan, M.; Rädel, L.; Rameez, M.; Rawlins, K.; Redl, P.; Reimann, R.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Richter, S.; Riedel, B.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ruzybayev, B.; Ryckbosch, D.; Saba, S. M.; Sabbatini, L.; Sander, H.-G.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Schatto, K.; Scheriau, F.; Schimp, M.; Schmidt, T.; Schmitz, M.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schukraft, A.; Schulte, L.; Seckel, D.; Seunarine, S.; Shanidze, R.; Smith, M. W. E.; Soldin, D.; Spiczak, G. M.; Spiering, C.; Stahlberg, M.; Stamatikos, M.; Stanev, T.; Stanisha, N. A.; Stasik, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strahler, E. A.; Ström, R.; Strotjohann, N. L.; Sullivan, G. 
W.; Sutherland, M.; Taavola, H.; Taboada, I.; Ter-Antonyan, S.; Terliuk, A.; Tešić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Tosi, D.; Tselengidou, M.; Unger, E.; Usner, M.; Vallecorsa, S.; van Eijndhoven, N.; Vandenbroucke, J.; van Santen, J.; Vanheule, S.; Veenkamp, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Wallraff, M.; Wandkowsky, N.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whelan, B. J.; Whitehorn, N.; Wichary, C.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Zoll, M.

    2015-10-01

    The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-) neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the self-annihilation cross-section, ⟨σ_A⟩, for WIMP masses ranging from 30 GeV up to 10 TeV, assuming cuspy (NFW) and flat-cored (Burkert) dark matter halo profiles, reaching down to ≃ 4 × 10^-24 cm^3 s^-1 and ≃ 2.6 × 10^-23 cm^3 s^-1 for the νν̄ channel, respectively.

  9. Search for dark matter annihilation in the Galactic Center with IceCube-79

    Energy Technology Data Exchange (ETDEWEB)

    Aartsen, M.G.; Hill, G.C.; Robertson, S.; Whelan, B.J. [University of Adelaide, School of Chemistry and Physics, Adelaide, SA (Australia); Abraham, K.; Bernhard, A.; Coenders, S.; Gross, A.; Holzapfel, K.; Huber, M.; Jurkovic, M.; Krings, K.; Resconi, E.; Veenkamp, J. [Technische Universitaet Muenchen, Garching (Germany); Ackermann, M.; Berghaus, P.; Bernardini, E.; Bretz, H.P.; Cruz Silva, A.H.; Gluesenkamp, T.; Gora, D.; Jacobi, E.; Kaminsky, B.; Karg, T.; Middell, E.; Mohrmann, L.; Nahnhauer, R.; Schoenwald, A.; Shanidze, R.; Spiering, C.; Stasik, A.; Stoessl, A.; Strotjohann, N.L.; Terliuk, A.; Usner, M.; Yanez, J.P. [DESY, Zeuthen (Germany); Adams, J.; Brown, A.M. [University of Canterbury, Department of Physics and Astronomy, Private Bag 4800, Christchurch (New Zealand); Aguilar, J.A.; Heereman, D.; Meagher, K.; Meures, T.; O' Murchadha, A.; Pinat, E. [Universite Libre de Bruxelles, Science Faculty CP230, Brussels (Belgium); Ahlers, M.; Arguelles, C.; Beiser, E.; BenZvi, S.; Braun, J.; Chirkin, D.; Day, M.; Desiati, P.; Diaz-Velez, J.C.; Fadiran, O.; Fahey, S.; Feintzeig, J.; Ghorbani, K.; Gladstone, L.; Halzen, F.; Hanson, K.; Hoshina, K.; Jero, K.; Karle, A.; Kelley, J.L.; Kheirandish, A.; McNally, F.; Merino, G.; Middlemas, E.; Morse, R.; Richter, S.; Sabbatini, L.; Tobin, M.N.; Tosi, D.; Vandenbroucke, J.; Van Santen, J.; Wandkowsky, N.; Weaver, C.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wille, L. [Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Department of Physics, Madison, WI (United States); Ahrens, M.; Bohm, C.; Dumm, J.P.; Finley, C.; Flis, S.; Hulth, P.O.; Hultqvist, K.; Walck, C.; Wolf, M.; Zoll, M. [Oskar Klein Centre, Stockholm University, Department of Physics, Stockholm (Sweden); Altmann, D.; Classen, L.; Kappes, A.; Tselengidou, M. [Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen Centre for Astroparticle Physics, Erlangen (Germany); Anderson, T.; Arlen, T.C.; Dunkman, M.; Eagan, R.; Groh, J.C.; Huang, F.; Keivani, A.; Lanfranchi, J.L.; Quinnan, M.; Smith, M.W.E.; Stanisha, N.A.; Tesic, G. [Pennsylvania State University, Department of Physics, University Park, PA (United States); Archinger, M.; Baum, V.; Boeser, S.; Eberhardt, B.; Ehrhardt, T.; Koepke, L.; Kroll, G.; Luenemann, J.; Sander, H.G.; Schatto, K.; Wiebe, K. [University of Mainz, Institute of Physics, Mainz (Germany); Auffenberg, J.; Bissok, M.; Blumenthal, J.; Glagla, M.; Gier, D.; Gretskov, P.; Haack, C.; Hansmann, B.; Hellwig, D.; Kemp, J.; Konietz, R.; Koob, A.; Leuermann, M.; Leuner, J.; Paul, L.; Puetz, J.; Raedel, L.; Reimann, R.; Rongen, M.; Schimp, M.; Schoenen, S.; Schukraft, A.; Stahlberg, M.; Vehring, M.; Wallraff, M.; Wichary, C.; Wiebusch, C.H. [RWTH Aachen University, III. Physikalisches Institut, Aachen (Germany); Bai, X. [South Dakota School of Mines and Technology, Physics Department, Rapid City, SD (United States); Barwick, S.W.; Yodh, G. [University of California, Department of Physics and Astronomy, Irvine, CA (United States); Bay, R.; Filimonov, K.; Price, P.B.; Woschnagg, K. [University of California, Department of Physics, Berkeley, CA (United States); Beatty, J.J. [Ohio State University, Department of Physics and Center for Cosmology and Astro-Particle Physics, Columbus, OH (United States); Ohio State University, Department of Astronomy, Columbus, OH (United States); Becker Tjus, J.; Bos, F.; Eichmann, B.; Fedynitch, A.; Kroll, M.; Saba, S.M.; Schoeneberg, S. 
[Ruhr-Universitaet Bochum, Fakultaet fuer Physik and Astronomie, Bochum (Germany); Becker, K.H.; Bindig, D.; Fischer-Wasels, T.; Helbing, K.; Hickford, S.; Hoffmann, R.; Klaes, J.; Kopper, S.; Naumann, U.; Obertacke, A.; Omairat, A.; Posselt, J.; Soldin, D. [University of Wuppertal, Department of Physics, Wuppertal (Germany); Berley, D.; Blaufuss, E.; Cheung, E.; Christy, B.; Felde, J.; Hellauer, R.; Hoffman, K.D.; Huelsnitz, W.; Maunu, R.; Olivas, A.; Redl, P.; Schmidt, T.; Sullivan, G.W.; Wissing, H. [University of Maryland, Department of Physics, College Park, MD (United States); Besson, D.Z. [University of Kansas, Department of Physics and Astronomy, Lawrence, KS (United States); Binder, G.; Gerhardt, L.; Ha, C.; Klein, S.R.; Miarecki, S. [University of California, Department of Physics, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Boersma, D.J.; Botner, O.; Euler, S.; Hallgren, A.; Collaboration: IceCube Collaboration; and others

    2015-10-15

    The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-) neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the self-annihilation cross-section, ⟨σ_A⟩, for WIMP masses ranging from 30 GeV up to 10 TeV, assuming cuspy (NFW) and flat-cored (Burkert) dark matter halo profiles, reaching down to ≅ 4 × 10^-24 cm^3 s^-1 and ≅ 2.6 × 10^-23 cm^3 s^-1 for the νν̄ channel, respectively. (orig.)

  10. Search for Dark Matter Annihilation in the Galactic Center with IceCube-79

    CERN Document Server

    Aartsen, M G; Ackermann, M; Adams, J; Aguilar, J A; Ahlers, M; Ahrens, M; Altmann, D; Anderson, T; Archinger, M; Arguelles, C; Arlen, T C; Auffenberg, J; Bai, X; Barwick, S W; Baum, V; Bay, R; Beatty, J J; Tjus, J Becker; Becker, K -H; Beiser, E; BenZvi, S; Berghaus, P; Berley, D; Bernardini, E; Bernhard, A; Besson, D Z; Binder, G; Bindig, D; Bissok, M; Blaufuss, E; Blumenthal, J; Boersma, D J; Bohm, C; Börner, M; Bos, F; Bose, D; Böser, S; Botner, O; Braun, J; Brayeur, L; Bretz, H -P; Brown, A M; Buzinsky, N; Casey, J; Casier, M; Cheung, E; Chirkin, D; Christov, A; Christy, B; Clark, K; Classen, L; Coenders, S; Cowen, D F; Silva, A H Cruz; Daughhetee, J; Davis, J C; Day, M; de André, J P A M; De Clercq, C; Dembinski, H; De Ridder, S; Desiati, P; de Vries, K D; de Wasseige, G; de With, M; DeYoung, T; Díaz-Vélez, J C; Dumm, J P; Dunkman, M; Eagan, R; Eberhardt, B; Ehrhardt, T; Eichmann, B; Euler, S; Evenson, P A; Fadiran, O; Fahey, S; Fazely, A R; Fedynitch, A; Feintzeig, J; Felde, J; Filimonov, K; Finley, C; Fischer-Wasels, T; Flis, S; Fuchs, T; Glagla, M; Gaisser, T K; Gaior, R; Gallagher, J; Gerhardt, L; Ghorbani, K; Gier, D; Gladstone, L; Glüsenkamp, T; Goldschmidt, A; Golup, G; Gonzalez, J G; Góra, D; Grant, D; Gretskov, P; Groh, J C; Groß, A; Ha, C; Haack, C; Ismail, A Haj; Hallgren, A; Halzen, F; Hansmann, B; Hanson, K; Hebecker, D; Heereman, D; Helbing, K; Hellauer, R; Hellwig, D; Hickford, S; Hignight, J; Hill, G C; Hoffman, K D; Hoffmann, R; Holzapfe, K; Homeier, A; Hoshina, K; Huang, F; Huber, M; Huelsnitz, W; Hulth, P O; Hultqvist, K; In, S; Ishihara, A; Jacobi, E; Japaridze, G S; Jero, K; Jurkovic, M; Kaminsky, B; Kappes, A; Karg, T; Karle, A; Kauer, M; Keivani, A; Kelley, J L; Kemp, J; Kheirandish, A; Kiryluk, J; Kläs, J; Klein, S R; Kohnen, G; Koirala, R; Kolanoski, H; Konietz, R; Koob, A; Köpke, L; Kopper, C; Kopper, S; Koskinen, D J; Kowalski, M; Krings, K; Kroll, G; Kroll, M; Kunnen, J; Kurahashi, N; Kuwabara, T; Labare, M; Lanfranchi, J L; Larson, M J; Lesiak-Bzdak, M; Leuermann, M; Leuner, J; Lünemann, J; Madsen, J; Maggi, G; Mahn, K B M; Maruyama, R; Mase, K; Matis, H S; Maunu, R; McNally, F; Meagher, K; Medici, M; Meli, A; Menne, T; Merino, G; Meures, T; Miarecki, S; Middell, E; Middlemas, E; Miller, J; Mohrmann, L; Montaruli, T; Morse, R; Nahnhauer, R; Naumann, U; Niederhausen, H; Nowicki, S C; Nygren, D R; Obertacke, A; Olivas, A; Omairat, A; O'Murchadha, A; Palczewski, T; Pandya, H; Paul, L; Pepper, J A; Heros, C Pérez de los; Pfendner, C; Pieloth, D; Pinat, E; Posselt, J; Price, P B; Przybylski, G T; Pütz, J; Quinnan, M; Rädel, L; Rameez, M; Rawlins, K; Redl, P; Reimann, R; Relich, M; Resconi, E; Rhode, W; Richman, M; Richter, S; Riedel, B; Robertson, S; Rongen, M; Rott, C; Ruhe, T; Ruzybayev, B; Ryckbosch, D; Saba, S M; Sabbatini, L; Sander, H -G; Sandrock, A; Sandroos, J; Sarkar, S; Schatto, K; Scheriau, F; Schimp, M; Schmidt, T; Schmitz, M; Schoenen, S; Schöneberg, S; Schönwald, A; Schukraft, A; Schulte, L; Seckel, D; Seunarine, S; Shanidze, R; Smith, M W E; Soldin, D; Spiczak, G M; Spiering, C; Stahlberg, M; Stamatikos, M; Stanev, T; Stanisha, N A; Stasik, A; Stezelberger, T; Stokstad, R G; Stößl, A; Strahler, E A; Ström, R; Strotjohann, N L; Sullivan, G W; Sutherland, M; Taavola, H; Taboada, I; Ter-Antonyan, S; Terliuk, A; Tešić, G; Tilav, S; Toale, P A; Tobin, M N; Tosi, D; Tselengidou, M; Unger, E; Usner, M; Vallecorsa, S; van Eijndhoven, N; Vandenbroucke, J; van Santen, J; Vanheule, S; Veenkamp, J; Vehring, M; Voge, M; Vraeghe, M; Walck, C; Wallraff, M; 
Wandkowsky, N; Weaver, Ch; Wendt, C; Westerhoff, S; Whelan, B J; Whitehorn, N; Wichary, C; Wiebe, K; Wiebusch, C H; Wille, L; Williams, D R; Wissing, H; Wolf, M; Wood, T R; Woschnagg, K; Xu, D L; Xu, X W; Xu, Y; Yanez, J P; Yodh, G; Yoshida, S; Zarzhitsky, P; Zoll, M

    2015-01-01

    The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-) neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the self-annihilation cross-section, ⟨σ_A⟩, for WIMP ma...

  11. One-loop corrections to gaugino (co)annihilation into quarks in the MSSM

    Science.gov (United States)

    Herrmann, B.; Klasen, M.; Kovařík, K.; Meinecke, M.; Steppeler, P.

    2014-06-01

    We present the full O(αs) supersymmetric QCD corrections for gaugino annihilation and coannihilation into light and heavy quarks in the minimal supersymmetric standard model (MSSM). We demonstrate that these channels are phenomenologically relevant within the so-called phenomenological MSSM. We discuss selected technical details such as the dipole subtraction method in the case of light quarks and the treatment of the bottom quark mass and Yukawa coupling. Numerical results for the (co)annihilation cross sections and the predicted neutralino relic density are presented. We show that the impact of including the radiative corrections on the cosmologically preferred region of the parameter space is larger than the current experimental uncertainty from Planck data.
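
    The sensitivity quoted here can be illustrated with the rough thermal-relic relation Ω_χ h² ≈ 3 × 10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩, which shows how a few-percent shift of the (co)annihilation cross section propagates directly into the predicted relic density; the paper itself solves the full Boltzmann equation rather than using this rule of thumb.

```python
# Rough thermal-relic estimate: Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>.
# This back-of-the-envelope relation only illustrates how shifts of the
# annihilation cross section move the relic density; it is not the full
# Boltzmann-equation treatment used in the paper.
SIGMA_V_CANONICAL = 3.0e-26   # cm^3 s^-1, canonical thermal cross section

def relic_density_estimate(sigma_v):
    """Approximate Omega_chi h^2 for a given thermally averaged cross section."""
    return 3.0e-27 / sigma_v

for shift in (1.00, 1.05, 1.10):      # e.g. +5%, +10% from radiative corrections
    sv = SIGMA_V_CANONICAL * shift
    print(f"<sigma v> = {sv:.2e} cm^3/s  ->  Omega h^2 ~ {relic_density_estimate(sv):.4f}")
```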

  12. The surface brightness of dark matter unique signatures of neutralino annihilation in the Galactic halo

    CERN Document Server

    Calcaneo-Roldan, C; Calcaneo-Roldan, Carlos; Moore, Ben

    2000-01-01

    We use high resolution numerical simulations of the formation of cold dark matter halos to simulate the background of decay products from neutralino annihilation, such as gamma-rays or neutrinos. Halos are non-spherical, have steep singular density profiles and contain many thousands of surviving dark matter substructure clumps. This leads to several unique signatures in the gamma-ray background that may be confirmed or rejected by the next generation of gamma-ray experiments. Most importantly, the diffuse background is enhanced by over two orders of magnitude due to annihilation within substructure halos. The largest dark substructures are easily visible above the background and may account for the unidentified EGRET sources. A deep strip survey of the gamma-ray background would allow the shape of the Galactic halo to be quantified.

  13. Branching ratios for p̄p annihilation at rest into two-body final states

    CERN Document Server

    Abele, A; Amsler, Claude; Baker, C A; Barnett, B M; Batty, C J; Benayoun, M; Bischoff, S; Blüm, P; Braune, K; Bugg, D V; Case, T; Crowe, K M; Degener, T; Doser, Michael; Dünnweber, W; Engelhardt, D; Faessler, M A; Giarritta, P; Haddock, R P; Heinsius, F H; Heinzelmann, M; Herbstrith, A; Herz, M; Hessey, N P; Hidas, P; Hodd, C; Holtzhaussen, C; Jamnik, D; Kalinowsky, H; Kammel, P; Kisiel, J; Klempt, E; Koch, H; Kunze, M; Kurilla, U; Lakata, M; Landua, Rolf; Matthäy, H; McCrady, R; Meier, J; Meyer, C A; Montanet, Lucien; Ouared, R; Peters, K; Pick, B; Ratajczak, M; Regenfus, C; Röthel, W; Spanier, S; Stöck, H; Strassburger, C; Strohbusch, U; Suffert, Martin; Suh, J S; Thoma, U; Tischhäuser, M; Uman, I; Völcker, C; Wallis-Plachner, S; Walther, D; Wiedner, U; Wittmack, K; Zou, B S

    2001-01-01

    Measurements of two-body branching ratios for p̄p annihilations at rest in liquid and gaseous (12 ρ_STP) hydrogen are reported. Channels studied are p̄p → π⁰π⁰, π⁰η, K⁰_S K⁰_L, K⁺K⁻. The branching ratio for the π⁰π⁰ channel in liquid H₂ is measured to be (6.14 ± 0.40) × 10⁻⁴. The results are compared with those from other experiments. The fraction of P-state annihilation for a range of target densities from 0.002 ρ_STP to liquid H₂ is determined. Values obtained include 0.11 ± 0.02 in liquid H₂ and 0.48 ± 0.04 in 12 ρ_STP H₂ gas.

  14. Probability of Failure in Random Vibration

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    1988-01-01

    Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out-crossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval and thus for the first-passage probability...
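
    The record's integral-equation approach is analytical; as a point of comparison, the first-passage probability of a linear single-degree-of-freedom oscillator under Gaussian white noise can also be estimated by brute-force simulation. The sketch below is such a Monte Carlo estimate with illustrative parameters, not the method of the report.

```python
import numpy as np

def first_passage_probability(barrier, duration, n_paths=2000, dt=0.01,
                              omega0=2.0 * np.pi, zeta=0.05, noise_intensity=1.0):
    """Monte Carlo estimate of the first-passage probability
    P(max_{0<=t<=T} X(t) > barrier) for a linear SDOF oscillator
        x'' + 2*zeta*omega0*x' + omega0^2*x = w(t)
    driven by Gaussian white noise, integrated with Euler-Maruyama.
    This brute-force estimate is only a sanity check, not the
    integral-equation method of the record."""
    rng = np.random.default_rng(42)
    n_steps = int(duration / dt)
    crossed = 0
    for _ in range(n_paths):
        x, v = 0.0, 0.0
        hit = False
        for _ in range(n_steps):
            w = rng.standard_normal() * np.sqrt(noise_intensity / dt)
            v += (-2.0 * zeta * omega0 * v - omega0**2 * x + w) * dt
            x += v * dt
            if x > barrier:
                hit = True
                break
        crossed += hit
    return crossed / n_paths

print(first_passage_probability(barrier=0.15, duration=10.0))
```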

  15. Impact of dark matter decays and annihilations on structure formation

    NARCIS (Netherlands)

    Mapelli, M.; Ripamonti, E.

    2007-01-01

    We derived the evolution of the energy deposition in the intergalactic medium (IGM) by different decaying (or annihilating) dark matter (DM) candidates. Heavy annihilating DM particles (with mass larger than a few GeV) have no influence on reionization and heating, even if we assume that a

  16. Three-photon annihilation of the electron-positron pairs

    OpenAIRE

    Frolov, A. M.

    2008-01-01

    Three-photon annihilation of the electron-positron pairs (= $(e^{-}, e^{+})-$pairs) is considered in the electron rest frame. The energy of the incident positron can be arbitrary. The analytical expression for the cross-section of three-photon annihilation of the $(e^{-},e^{+})-$pair has been derived and investigated.

  17. CMB constraint on dark matter annihilation after Planck 2015

    Directory of Open Access Journals (Sweden)

    Masahiro Kawasaki

    2016-05-01

    We update the constraint on the dark matter annihilation cross section by using the recent measurements of the CMB anisotropy by the Planck satellite. We fully calculate the cascade of dark matter annihilation products and their effects on ionization, heating and excitation of the hydrogen, and hence do not rely on any assumptions about the energy fractions that cause these effects.

  18. A STUDY OF FINE PRECIPITATES IN ALLOYS BY POSITRON ANNIHILATION

    Institute of Scientific and Technical Information of China (English)

    王景成; 尤富强; 殷俊林; 高国华; 梁玲; 段勇

    2001-01-01

    Measurements were performed using the positron annihilation technique, combined with physical metallurgical techniques, on several engineering alloys containing fine precipitates. It is shown that positron annihilation is an effective method for detecting fine precipitates, providing a sound basis for further intensive research in this area.

  19. Photoinduced carrier annihilation in silicon pn junction

    Science.gov (United States)

    Sameshima, Toshiyuki; Motoki, Takayuki; Yasuda, Keisuke; Nakamura, Tomohiko; Hasumi, Masahiko; Mizuno, Toshihisa

    2015-08-01

    We report an analysis of the photo-induced minority carrier effective lifetime (τeff) in a p+n junction formed on the top surface of an n-type silicon substrate by ion implantation of boron and phosphorus atoms at the top and bottom surfaces, followed by activation by microwave heating. Bias voltages were applied to the p+ boron-doped surface with the n+ phosphorus-doped surface kept at 0 V. The values of τeff were lower than 1 × 10⁻⁵ s under the reverse-bias condition. On the other hand, τeff markedly increased to 1.4 × 10⁻⁴ s as the forward-bias voltage increased to 0.7 V, and then leveled off, when continuous-wave 635 nm light at 0.74 mW/cm² illuminated the p+ surface. The carrier annihilation velocity S_p+ at the p+ surface region was numerically estimated from the experimental τeff. S_p+ ranged from 4000 to 7200 cm/s under the reverse-bias condition when the carrier annihilation velocity S_n+ at the n+ surface region was assumed to be a constant value of 100 cm/s. S_p+ markedly decreased to 265 cm/s as the forward-bias voltage increased to 0.7 V.
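
    For orientation only (a commonly used approximation for a wafer of thickness $W$ in the surface-limited regime, not necessarily the exact numerical model used in the paper), the measured effective lifetime is often split into bulk and surface terms,

        $1/\tau_{\rm eff} \approx 1/\tau_{\rm bulk} + (S_{p+} + S_{n+})/W$

    so that, with $S_{n+}$ held fixed (assumed 100 cm/s above), each measured $\tau_{\rm eff}$ maps onto an estimate of $S_{p+}$.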

  20. Electroweak bremsstrahlung for wino-like Dark Matter annihilations

    CERN Document Server

    Ciafaloni, Paolo; De Simone, Andrea; Riotto, Antonio; Urbano, Alfredo

    2012-01-01

    If the Dark Matter is the neutral Majorana component of a multiplet which is charged under the electroweak interactions of the Standard Model, its main annihilation channel is into W+W-, while the annihilation into light fermions is helicity suppressed. As pointed out recently, the radiation of gauge bosons from the initial state of the annihilation lifts the suppression and opens up an s-wave contribution to the cross section. We perform the full tree-level calculation of Dark Matter annihilations, including electroweak bremsstrahlung, in the context of an explicit model corresponding to the supersymmetric wino. We find that the fermion channel can become as important as the di-boson one. This result has significant implications for the predictions of the fluxes of particles originating from Dark Matter annihilations.

  1. Searching for Dark Matter Annihilation in M87

    CERN Document Server

    Saxena, Sheetal; Rüger, Michael; Summa, Alexander; Mannheim, Karl

    2011-01-01

    Clusters of galaxies, such as the Virgo cluster, host enormous quantities of dark matter, making them prime targets for efforts in indirect dark matter detection via potential radiative signatures from annihilation of dark matter particles and subsequent radiative losses of annihilation products. However, a careful study of ubiquitous astrophysical backgrounds is mandatory to single out potential evidence for dark matter annihilation. Here, we construct a multiwavelength spectral energy distribution for the central radio galaxy in the Virgo cluster, M87, using a state-of-the-art numerical Synchrotron Self Compton approach. Fitting recent Chandra, Fermi-LAT and Cherenkov observations, we probe different dark matter annihilation scenarios including a full treatment of the inverse Compton losses from electrons and positrons produced in the annihilation. It is shown that such a template can substantially improve upon existing dark matter detection limits.

  2. Constraints on dark matter annihilations from diffuse gamma-ray emission in the Galaxy

    Energy Technology Data Exchange (ETDEWEB)

    Tavakoli, Maryam; Evoli, Carmelo [II. Institut für Theoretische Physik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg (Germany); Cholis, Ilias [Fermi National Accelerator Laboratory, Center for Particle Astrophysics, Batavia, IL 60510 (United States); Ullio, Piero, E-mail: maryam.tavakoli@desy.de, E-mail: cholis@fnal.gov, E-mail: carmelo.evoli@desy.de, E-mail: ullio@sissa.it [SISSA, Via Bonomea 265, 34136 Trieste (Italy)

    2014-01-01

    Recent advances in γ-ray, cosmic-ray, infrared and radio astronomy have allowed us to develop a significantly better understanding of the galactic medium properties in the last few years. In this work, using the DRAGON code, which numerically solves the CR propagation equation, and calculating γ-ray emissivities on a 2-dimensional grid enclosing the Galaxy, we study models for the Galactic diffuse γ-ray emission in a self-consistent manner. Our models are cross-checked against both the available CR and γ-ray data. We address the extent to which dark matter annihilations in the Galaxy can contribute to the diffuse γ-ray flux towards different directions on the sky. Moreover, we discuss the impact that astrophysical uncertainties of non-DM origin have on the derived γ-ray limits. Such uncertainties are related to the diffusion properties of the Galaxy, the interstellar gas and the interstellar radiation field energy densities. Light (∼10 GeV) dark matter annihilating dominantly to hadrons is most strongly constrained by γ-ray observations towards the inner parts of the Galaxy and is influenced the most by assumptions about the gas distribution, while TeV-scale DM annihilating dominantly to leptons has its tightest constraints from observations towards the Galactic center avoiding the Galactic disk plane, with the main astrophysical uncertainty being the radiation field energy density. In addition, we present a method of deriving constraints on the dark matter distribution profile from the diffuse γ-ray spectra. These results critically depend on the assumed mass of the dark matter particles and the type of their final annihilation products.

  3. Modeling dark matter subhalos in a constrained galaxy: Global mass and boosted annihilation profiles

    Science.gov (United States)

    Stref, Martin; Lavalle, Julien

    2017-03-01

    The interaction properties of cold dark matter (CDM) particle candidates, such as those of weakly interacting massive particles (WIMPs), generically lead to the structuring of dark matter on scales much smaller than typical galaxies, potentially down to ~10⁻¹⁰ M⊙. This clustering translates into a very large population of subhalos in galaxies and affects the predictions for direct and indirect dark matter searches (gamma rays and antimatter cosmic rays). In this paper, we elaborate on previous analytic works to model the Galactic subhalo population, while remaining consistent with current observational dynamical constraints on the Milky Way. In particular, we propose a self-consistent method to account for tidal effects induced by both dark matter and baryons. Our model does not strongly rely on cosmological simulations, as they can hardly be fully matched to the real Milky Way, apart from setting the initial subhalo mass fraction. Still, it allows us to recover the main qualitative features of simulated systems. It can further be easily adapted to any change in the dynamical constraints, and can be used to make predictions or derive constraints on dark matter candidates from indirect or direct searches. We compute the annihilation boost factor, including the subhalo-halo cross product. We confirm that tidal effects induced by the baryonic components of the Galaxy play a very important role, resulting in a local average subhalo mass density ≲1% of the total local dark matter mass density, while selecting the most concentrated objects and leading to interesting features in the overall annihilation profile in the case of a sharp subhalo mass function. Values of the global annihilation boost factor range from ~2 to ~20, while the local annihilation rate is boosted by about half as much.
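
    For reference (a standard definition consistent with, but not quoted from, the abstract), the annihilation boost factor at galactocentric radius $r$ compares the clumpy and smooth squared densities,

        $B(r) \equiv \langle \rho^2(r) \rangle / \langle \rho(r) \rangle^2 \geq 1$

    and the quoted global boosts of ~2 to ~20 correspond to the volume-integrated version of this ratio, including the subhalo-halo cross product mentioned above.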

  4. Probability Density Analysis of SINR in Massive MIMO Downlink Using Matched Filter Beamformer

    Institute of Scientific and Technical Information of China (English)

    束锋; 李隽; 顾晨; 王进; 周叶; 徐彦青; 钱玉文

    2015-01-01

    In massive MIMO systems, the matched filter (MF) beamformer is an attractive technique because of its extremely low complexity compared with beamforming algorithms based on channel-matrix decomposition, such as zero forcing and minimum mean square error. This paper derives an approximate probability density function (PDF) of the signal-to-interference-and-noise ratio (SINR) at the user terminal when the base station employs MF beamforming. The formula is essential for deriving and analyzing system performance metrics such as the sum rate and the outage probability. Simulations show that, as the number of base-station antennas grows large, the PDF curve given by the derived SINR formula approaches the PDF obtained by pure simulation.
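
    A quick way to generate the simulated PDF that such a closed-form approximation would be compared against is a Monte Carlo sketch of the kind below (illustrative assumptions only: i.i.d. Rayleigh fading, equal power allocation and unit noise power; this is not the paper's derivation or parameter set).

        import numpy as np

        # Empirical downlink SINR at user 0 under matched-filter (MF) precoding
        # with M base-station antennas and K single-antenna users.
        def sinr_samples(M=128, K=8, snr_db=10.0, trials=20000, seed=0):
            rng = np.random.default_rng(seed)
            snr = 10.0 ** (snr_db / 10.0)
            out = np.empty(trials)
            for i in range(trials):
                H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2.0)
                W = H.conj().T                          # MF precoder: conjugate transpose of the channel
                W /= np.linalg.norm(W, axis=0)          # unit-norm beam per user
                G = H @ W                               # effective K x K channel after precoding
                sig = (snr / K) * np.abs(G[0, 0]) ** 2
                intf = (snr / K) * np.sum(np.abs(G[0, 1:]) ** 2)
                out[i] = sig / (intf + 1.0)             # noise power normalized to 1
            return out

        pdf, edges = np.histogram(sinr_samples(), bins=100, density=True)  # empirical SINR PDF

    Comparing this empirical PDF with a derived approximation as M grows is exactly the kind of check described in the abstract.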

  5. The Observed Galactic Annihilation Line. Possible Signature of the Cluster for Accreting Small Mass Black Holes

    CERN Document Server

    Titarchuk, L; Titarchuk, Lev; Chardonnet, Pascal

    2006-01-01

    Compton Gamma Ray Observatory, OSSE, SMM, TGRS, balloon and recent INTEGRAL data reveal a feature of 0.511 MeV annihilation radiation from the Galactic Center with a flux of approximately 5×10^{-4} 0.511 MeV photons cm^{-2} s^{-1}. We argue that e+e- pairs can be generated when the X-ray radiation photons and ~10-30 MeV photons interact with each other in a compact region in the proximity of the Galactic Center black hole. In fact, disks formed near black holes of 10^{17} g mass should emit blackbody radiation with a temperature of ~10 MeV. If positron (e+) sources are producing about 10^{42} e+ s^{-1} near the Galactic Center, they would annihilate on the way out and result in 0.511 MeV emission. We suggest that the annihilation radiation can be an observational consequence of the interaction of the accretion disk radiation of SMall Mass Black Holes (SMMBHs) with X-ray radiation in the Galactic Center. This is probably the only way to identify and observe these SMMBHs.
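
    The photon energies quoted above are consistent with the textbook two-photon pair-production threshold (a standard kinematic condition, not a result of the paper),

        $E_1 E_2 (1 - \cos\theta) \geq 2 m_e^2 c^4$

    so that, for head-on collisions ($\theta = \pi$), an X-ray photon of ~10 keV requires a partner photon of at least ~26 MeV.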

  6. Probability theory and mathematical statistics for engineers

    CERN Document Server

    Pugachev, V S

    1984-01-01

    Probability Theory and Mathematical Statistics for Engineers focuses on the concepts of probability theory and mathematical statistics for finite-dimensional random variables.The publication first underscores the probabilities of events, random variables, and numerical characteristics of random variables. Discussions focus on canonical expansions of random vectors, second-order moments of random vectors, generalization of the density concept, entropy of a distribution, direct evaluation of probabilities, and conditional probabilities. The text then examines projections of random vector

  7. State-selective high-energy excitation of nuclei by resonant positron annihilation

    Directory of Open Access Journals (Sweden)

    Nikolay A. Belov

    2015-02-01

    In the annihilation of a positron with a bound atomic electron, the virtual γ photon created may excite the atomic nucleus. We put forward this effect as a spectroscopic tool for an energy-selective excitation of nuclear transitions. This scheme can efficiently populate nuclear levels of arbitrary multipolarities in the MeV regime, including giant resonances and monopole transitions. In certain cases, it may have higher cross sections than the conventionally used Coulomb excitation and it can even occur with high probability when the latter is energetically forbidden.

  8. Holographic Vortex Pair Annihilation in Superfluid Turbulence

    CERN Document Server

    Du, Yiqiang; Tian, Yu; Zhang, Hongbao

    2014-01-01

    We make a first-principles investigation of the dynamical evolution of the vortex number in a two-dimensional (2D) turbulent superfluid by holography, through numerically solving its highly non-trivial gravity dual. With randomly placed vortices and antivortices prepared as initial states, we find that the temporal evolution of the vortex number can be well fit statistically by a two-body decay law, reflecting the relaxation process driven by vortex pair annihilation, remarkably from a very early time on. In particular, once the universal offset is subtracted, the power-law fit indicates that our holographic turbulent superfluid exhibits an apparently different decay pattern from the superfluid recently studied experimentally in highly oblate Bose-Einstein condensates.

  9. One-photon pair annihilation in magnetized relativistic plasmas

    Science.gov (United States)

    Harding, A. K.

    1986-01-01

    In superstrong magnetic fields, electron-positron pairs may annihilate into single photons, producing spectral features above 1 MeV. The paper calculates the exact one-photon annihilation rate in the general case where pairs may annihilate from excited Landau states, extending previous studies which were restricted to pairs in the ground state. Asymptotic expressions for annihilation spectra and rates in the limit of large pair quantum numbers are also derived. It is found that the rate of annihilation from excited states can exceed the rate from the ground state by orders of magnitude in fields less than about 2×10¹² G. This allows one-photon annihilation to be competitive with the two-photon process at typical neutron star field strengths. Annihilation spectra from a Maxwellian pair plasma at transrelativistic temperatures show fine structure near threshold on a scale ħω_B, as the result of contributions from individual pair states, which blend into a smooth continuum at higher energies.

  10. Pair Production and Annihilation in Strong Magnetic Fields

    Science.gov (United States)

    Daugherty, J. K.; Harding, A. K.

    1983-01-01

    Electromagnetic phenomena occurring in the presence of strong magnetic fields are currently of great interest in high-energy astrophysics. In particular, the process of pair production by single photons in the presence of fields of order 10¹² Gauss is of importance in cascade models of pulsar gamma-ray emission, and may also become significant in theories of other radiation phenomena whose sources may be neutron stars (e.g., gamma-ray bursts). In addition to pair production, the inverse process of pair annihilation is greatly affected by the presence of superstrong magnetic fields. The most significant departures from annihilation processes in free space are a reduction in the total rate for annihilation into two photons, a broadening of the familiar 511-keV line for annihilation at rest, and the possibility of annihilation into a single photon (which dominates the two-photon annihilation for B ≳ 10¹³ Gauss). The physics of these pair conversion processes, which is reviewed briefly, can become quite complex in the teragauss regime, and can involve calculations which are technically difficult to incorporate into models of emission mechanisms in neutron star magnetospheres. However, theoretical work, especially in the case of pair annihilation, also suggests potential techniques for more direct measurements of field strengths near the stellar surface.

  11. Extragalactic Inverse Compton Light from Dark Matter Annihilation and the Pamela Positron Excess

    CERN Document Server

    Profumo, Stefano

    2009-01-01

    We calculate the extragalactic diffuse emission originating from the up-scattering of cosmic microwave photons by energetic electrons and positrons produced in particle dark matter annihilation events at all redshifts and in all halos. We outline the observational constraints on this emission and we study its dependence on both the particle dark matter model (including the particle mass and its dominant annihilation final state) and on assumptions on structure formation and on the density profile of halos. We find that for low-mass dark matter models, data in the X-ray band provide the most stringent constraints, while the gamma-ray energy range probes models featuring large masses and pair-annihilation rates, and a hard spectrum for the injected electrons and positrons. Specifically, we point out that the all-redshift, all-halo inverse Compton emission from many dark matter models that might provide an explanation to the anomalous positron fraction measured by the Pamela payload severely overproduces the observed extragalactic gamma-ray background.

  12. Extragalactic Inverse Compton Light from Dark Matter annihilation and the Pamela positron excess

    Energy Technology Data Exchange (ETDEWEB)

    Profumo, Stefano [Department of Physics, University of California, 1156 High St, Santa Cruz, CA 95064 (United States); Jeltema, Tesla E., E-mail: profumo@scipp.ucsc.edu, E-mail: tesla@ucolick.org [UCO/Lick Observatories, 1156 High St, Santa Cruz, CA 95064 (United States)

    2009-07-01

    We calculate the extragalactic diffuse emission originating from the up-scattering of cosmic microwave photons by energetic electrons and positrons produced in particle dark matter annihilation events at all redshifts and in all halos. We outline the observational constraints on this emission and we study its dependence on both the particle dark matter model (including the particle mass and its dominant annihilation final state) and on assumptions on structure formation and on the density profile of halos. We find that for low-mass dark matter models, data in the X-ray band provide the most stringent constraints, while the gamma-ray energy range probes models featuring large masses and pair-annihilation rates, and a hard spectrum for the injected electrons and positrons. Specifically, we point out that the all-redshift, all-halo inverse Compton emission from many dark matter models that might provide an explanation to the anomalous positron fraction measured by the Pamela payload severely overproduces the observed extragalactic gamma-ray background.
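
    For orientation, the inverse Compton ingredients used here follow the standard Thomson-limit relations (textbook expressions, not specific to this paper): an electron of Lorentz factor $\gamma$ up-scatters CMB photons to a mean energy, and loses energy at a rate,

        $\langle E_\gamma \rangle \simeq (4/3)\,\gamma^2 \langle E_{\rm CMB} \rangle$, $\quad -dE_e/dt = (4/3)\,\sigma_T c\, \gamma^2 U_{\rm CMB}$

    which is why GeV-TeV electrons and positrons from annihilation map onto X-ray to gamma-ray inverse Compton emission, as exploited in the constraints above.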

  13. A new scalar mediated WIMPs with pairs of on-shell mediators in annihilations

    CERN Document Server

    Jia, Lian-Bao

    2016-01-01

    In this article, we focus on a new scalar $\phi$ mediating scalar/vector WIMPs (weakly interacting massive particles). To explain the Galactic center 1-3 GeV gamma-ray excess, we consider the case in which a WIMP pair predominantly annihilates into an on-shell $\phi \phi$ pair which mainly decays to $\tau \bar{\tau}$, with WIMP masses in the range of about 14-22 GeV. For the mass of $\phi$ slightly below the WIMP mass, the annihilations of WIMPs are phase-space suppressed today, and the thermally averaged annihilation cross section of WIMPs required to account for the GeV gamma-ray excess can be derived. A small scalar mediator-Higgs field mixing is introduced, which is viable for interpreting the GeV gamma-ray excess. Once the constraints from the dark matter relic density, indirect detection results, collider experiments, thermal equilibrium in the early universe and dark matter direct detection experiments are considered, we find that parameter space remains. The WIMPs may be detectable at th...

  14. Clustering in the Phase Space of Dark Matter Haloes. II. Stable Clustering and Dark Matter Annihilation

    CERN Document Server

    Zavala, Jesus

    2013-01-01

    We present a model for the structure of the two-dimensional particle phase space average density ($P^2SAD$) in galactic haloes, introduced recently as a novel measure of the clustering of dark matter (arXiv:1308.1098). Our model is based on the stable clustering hypothesis in phase space, the spherical collapse model, and tidal disruption of substructures, and is calibrated against the high resolution Aquarius simulations. Using this physically motivated model, we are able to predict the behaviour of $P^2SAD$ in the numerically unresolved regime, down to the decoupling mass limit of generic WIMP models. This prediction can be used to estimate signals sensitive to the small scale structure of dark matter distributions. For example, the dark matter annihilation rate is an integral over relative velocities of the product of the limit of $P^2SAD$ at zero separation in physical space and the annihilation cross section times the relative velocity. This provides a convenient way to estimate the annihilation rate ...
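
    Written out, the estimate described in the last sentences takes the schematic form (a transcription of the abstract, with normalization factors omitted)

        $\Gamma_{\rm ann} \propto \int d^3 v_{\rm rel}\; \sigma(v_{\rm rel})\, v_{\rm rel} \lim_{\Delta x \to 0} P^2SAD(\Delta x, v_{\rm rel})$

    which makes explicit why a model for $P^2SAD$ in the unresolved regime feeds directly into annihilation-rate and boost estimates.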

  15. Neutrino fluxes from constrained minimal supersymmetric standard model lightest supersymmetric particle annihilations in the Sun

    CERN Document Server

    Ellis, John; Savage, Christopher; Spanos, Vassilis C

    2010-01-01

    We evaluate the neutrino fluxes to be expected from neutralino LSP annihilations inside the Sun, within the minimal supersymmetric extension of the Standard Model with supersymmetry-breaking scalar and gaugino masses constrained to be universal at the GUT scale (the CMSSM). We find that there are large regions of typical CMSSM $(m_{1/2}, m_0)$ planes where the LSP density inside the Sun is not in equilibrium, so that the annihilation rate may be far below the capture rate. We show that neutrino fluxes are dependent on the solar model at the 20% level, and adopt the AGSS09 model of Serenelli et al. for our detailed studies. We find that there are large regions of the CMSSM $(m_{1/2}, m_0)$ planes where the capture rate is not dominated by spin-dependent LSP-proton scattering, e.g., at large $m_{1/2}$ along the CMSSM coannihilation strip. We calculate neutrino fluxes above various threshold energies for points along the coannihilation/rapid-annihilation and focus-point strips where the CMSSM yields the correct ...

  16. Neutrino scattering, absorption and annihilation above the accretion discs of gamma ray bursts

    Energy Technology Data Exchange (ETDEWEB)

    Kneller, J P [Department of Physics, North Carolina State University, Raleigh, NC 27695 (United States); School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); McLaughlin, G C [Department of Physics, North Carolina State University, Raleigh, NC 27695 (United States); Surman, R A [Department of Physics, Union College, Schenectady, NY 12308 (United States)

    2006-04-01

    The central engine that drives gamma ray burst (GRB) explosions may derive from the ability of electrons/positrons and nucleons to tap into the momentum and energy from the large neutrino luminosity emitted by an accretion disc surrounding a black hole. This transfer of momentum and energy occurs due to neutrino absorption, scattering and annihilation, and the non-spherical geometry of the source both increases the annihilation efficiency and, close to the black hole, directs the momentum transfer toward the disc axis. We focus on the micro-physical aspects of this system and present annihilation efficiencies and the momentum/energy transfers for a number of accretion disc models. Models in which the neutrinos and antineutrinos become trapped within the disc have noticeably different momentum and energy deposition structure compared to thin disc models that may lead to significant differences in the explosion dynamics. Using these results we make estimates for the critical densities of infalling material below which the transfer of neutrino momentum/energy will lead to an explosion.

  17. Optical and microstructural characterization of porous silicon using photoluminescence, SEM and positron annihilation spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, C K [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Nahid, F [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Cheng, C C [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Beling, C D [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Fung, S [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Ling, C C [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Djurisic, A B [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Pramanik, C [Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032 (India); Saha, H [Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032 (India); Sarkar, C K [Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032 (India)

    2007-12-05

    We have studied the dependence of porous silicon morphology and porosity on fabrication conditions. N-type (100) silicon wafers with resistivity of 2-5 Ω cm were electrochemically etched at various current densities and anodization times. The surface morphology and thickness of the samples were examined by scanning electron microscopy (SEM). Detailed information on the porous silicon layer morphology with variation of the preparation conditions was obtained by positron annihilation spectroscopy (PAS): the depth-defect profile and open pore interconnectivity on the sample surface have been studied using a slow positron beam. Coincidence Doppler broadening spectroscopy (CDBS) was used to study the chemical environment of the samples. The presence of silicon micropores with diameters varying from 1.37 to 1.51 nm was determined by positron lifetime spectroscopy (PALS). Visible luminescence from the samples was observed, which is considered to be a combined effect of quantum confinement and Si=O double bond formation near the SiO₂/Si interface, according to the results from photoluminescence (PL) and positron annihilation spectroscopy measurements. The work shows that the study of the positronium formed when a positron is implanted into the porous surface provides valuable information on the pore distribution and open pore interconnectivity, which suggests that positron annihilation spectroscopy is a useful tool in the characterization of porous silicon micropores.
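
    Pore radii in this size range are commonly extracted from the o-Ps lifetime with the Tao-Eldrup model (quoted here for context; whether the authors used exactly this prescription is not stated):

        $1/\tau_{o\text{-}Ps} \approx 2\ {\rm ns}^{-1} \left[ 1 - R/R_0 + (1/2\pi)\sin(2\pi R/R_0) \right]$, $\quad R_0 = R + \Delta R$, $\ \Delta R \approx 0.166\ {\rm nm}$

    a relation valid for pores up to roughly a nanometre in radius, i.e. the 1.37-1.51 nm diameters reported above.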

  18. Multi-photon creation and single-photon annihilation of electron-positron pairs

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Huayu

    2011-04-27

    In this thesis we study multi-photon e⁺e⁻ pair production in a trident process, and single-photon e⁺e⁻ pair annihilation in a triple interaction. The pair production is considered in the collision of a relativistic electron with a strong laser beam, and calculated within the theory of laser-dressed quantum electrodynamics. A regularization method is developed systematically for the resonance problem arising in the multi-photon process. Total production rates, positron spectra, and relative contributions of different reaction channels are obtained in various interaction regimes. Our calculation shows good agreement with existing experimental data from SLAC, and adds further insights into the experimental findings. Besides, we study the process in a manifestly nonperturbative domain, whose accessibility to future all-optical experiments based on laser acceleration is shown. In the single-photon e⁺e⁻ pair annihilation, the recoil momentum is absorbed by a spectator particle. Various kinematic configurations of the three incoming particles are examined. Under certain conditions, the emitted photon exhibits distinct angular and polarization distributions which could facilitate the detection of the process. Considering an equilibrium relativistic e⁺e⁻ plasma, it is found that the single-photon process becomes the dominant annihilation channel for plasma temperatures above 3 MeV. Multi-particle correlation effects are therefore essential for the e⁺e⁻ dynamics at very high density. (orig.)

  19. Current-induced spin polarization on a Pt surface: A new approach using spin-polarized positron annihilation spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Kawasuso, A., E-mail: kawasuso.atsuo@jaea.go.jp [Advanced Science Research Center, Japan Atomic Energy Agency, 1233 Watanuki, Takasaki, Gunma 370-1292 (Japan); Fukaya, Y.; Maekawa, M.; Zhang, H. [Advanced Science Research Center, Japan Atomic Energy Agency, 1233 Watanuki, Takasaki, Gunma 370-1292 (Japan); Seki, T.; Yoshino, T.; Saitoh, E.; Takanashi, K. [Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan)

    2013-09-15

    Transversely spin-polarized positrons were injected near Pt and Au surfaces under an applied electric current. The three-photon annihilation of spin-triplet positronium, which was emitted from the surfaces into vacuum, was observed. When the positron spin polarization was perpendicular to the current direction, the maximum asymmetry of the three-photon annihilation intensity was observed upon current reversal for the Pt surfaces, whereas it was significantly reduced for the Au surface. The experimental results suggest that electrons near the Pt surfaces were in-plane and transversely spin-polarized with respect to the direction of the electric current. The maximum electron spin polarization was estimated to be more than 0.01 (1%). - Highlights: • Annihilation probability of positronium emitted from the Pt surface into the vacuum under direct current exhibited asymmetry upon current reversal. • The maximum asymmetry appeared when positron spin polarization and the direct current were perpendicular to each other. • Electrons near the Pt surfaces were in-plane and transversely spin-polarized with respect to the direction of the electric current. • Spin-polarized positronium annihilation provides a unique tool for investigating spin polarization on metal surfaces.

  20. Analysis of Probability Density of Pressure Fluctuation Signal's Spectral Energy in Fluidized Bed

    Institute of Scientific and Technical Information of China (English)

    周云龙; 王芳

    2014-01-01

    Based on a self-built cold-state fluidized bed test rig, pressure fluctuation signals were collected at different pressure measurement points. By comparing and analyzing the frequency spectra of the pressure signals at the wind-cap inlet and at the bed-wall measurement points, it is concluded that different measurement points convey the same information about the flow characteristics. Combining the probability density function under the Student's t-distribution with Welch spectral estimation, the cumulative probability distributions of the collected pressure fluctuation signals were compared and analyzed. The results show that changes of the flow state inside the fluidized bed can be readily observed in the cumulative probability distribution of the spectral energy obtained from the Welch estimate, providing a new approach to the identification of multiphase flow regimes.
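
    A minimal sketch of the signal-processing chain described above (illustrative only: synthetic data stand in for the measured pressure signals, and the parameters are assumptions rather than the authors' settings):

        import numpy as np
        from scipy import signal

        # Welch spectral estimate of a (synthetic) pressure-fluctuation signal,
        # followed by the cumulative distribution of spectral energy over frequency.
        fs = 1000.0                                        # assumed sampling rate, Hz
        t = np.arange(0.0, 60.0, 1.0 / fs)
        rng = np.random.default_rng(1)
        x = np.sin(2 * np.pi * 3.0 * t) + 0.5 * rng.standard_normal(t.size)  # stand-in signal

        f, pxx = signal.welch(x, fs=fs, nperseg=4096)      # Welch power spectral density
        cumulative = np.cumsum(pxx) / np.sum(pxx)          # cumulative spectral-energy distribution

    Plotting the cumulative curve for signals recorded in different flow regimes is the type of comparison the abstract describes; a change of flow state shows up as a shift of the curve.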

  1. Constraints on dark matter annihilation in clusters of galaxies with the Fermi large area telescope

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, M.; Ajello, M.; Allafort, A.; Bechtol, K.; Blandford, R.D.; Bloom, E.D.; Borgland, A.W.; Bouvier, A.; Buehler, R. [W.W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Baldini, L.; Bellazzini, R.; Bregeon, J. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Ballet, J. [Laboratoire AIM, CEA-IRFU/CNRS/Université Paris Diderot, Service d' Astrophysique, CEA Saclay, 91191 Gif sur Yvette (France); Barbiellini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34127 Trieste (Italy); Bastieri, D.; Buson, S. [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova (Italy); Bonamente, E. [Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, I-06123 Perugia (Italy); Brandt, T.J. [Centre d' Étude Spatiale des Rayonnements, CNRS/UPS, BP 44346, F-30128 Toulouse Cedex 4 (France); Brigida, M. [Dipartimento di Fisica ' ' M. Merlin' ' dell' Università e del Politecnico di Bari, I-70126 Bari (Italy); Bruel, P., E-mail: tesla@ucolick.org, E-mail: profumo@scipp.ucsc.edu [Laboratoire Leprince-Ringuet, École polytechnique, CNRS/IN2P3, Palaiseau (France); and others

    2010-05-01

    Nearby clusters and groups of galaxies are potentially bright sources of high-energy gamma-ray emission resulting from the pair-annihilation of dark matter particles. However, no significant gamma-ray emission has been detected so far from clusters in the first 11 months of observations with the Fermi Large Area Telescope. We interpret this non-detection in terms of constraints on dark matter particle properties. In particular for leptonic annihilation final states and particle masses greater than ∼ 200 GeV, gamma-ray emission from inverse Compton scattering of CMB photons is expected to dominate the dark matter annihilation signal from clusters, and our gamma-ray limits exclude large regions of the parameter space that would give a good fit to the recent anomalous Pamela and Fermi-LAT electron-positron measurements. We also present constraints on the annihilation of more standard dark matter candidates, such as the lightest neutralino of supersymmetric models. The constraints are particularly strong when including the fact that clusters are known to contain substructure at least on galaxy scales, increasing the expected gamma-ray flux by a factor of ∼ 5 over a smooth-halo assumption. We also explore the effect of uncertainties in cluster dark matter density profiles, finding a systematic uncertainty in the constraints of roughly a factor of two, but similar overall conclusions. In this work, we focus on deriving limits on dark matter models; a more general consideration of the Fermi-LAT data on clusters and clusters as gamma-ray sources is forthcoming.

  2. Spectral Gamma-ray Signatures of Cosmological Dark Matter Annihilation

    CERN Document Server

    Bergström, L; Ullio, P; Bergstrom, Lars; Edsjo, Joakim; Ullio, Piero

    2001-01-01

    We propose a new signature for weakly interacting massive particle (WIMP) dark matter, a spectral feature in the diffuse extragalactic gamma-ray radiation. This feature, a sudden drop of the gamma-ray intensity at an energy corresponding to the WIMP mass, comes from the asymmetric distortion of the line due to WIMP annihilation into two gamma-rays caused by the cosmological redshift. Unlike other proposed searches for a line signal, this method is not very sensitive to the exact dark matter density distribution in halos and subhalos. The only requirement is that the mass distribution of substructure on small scales follows approximately the Press-Schechter law, and that smaller halos are on the average denser than large halos, which is a generic outcome of N-body simulations of Cold Dark Matter, and which has observational support. The upcoming Gamma-ray Large Area Space Telescope (GLAST) will be eminently suited to search for these spectral features. For numerical examples, we use rates computed for supersym...
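
    Schematically (a sketch of the kinematics only, omitting normalization, absorption and clumping factors), a line emitted at $E = m_\chi$ at redshift $z$ is observed at $E = m_\chi/(1+z)$, so the observed spectrum is non-zero only below the WIMP mass:

        $d\Phi/dE \propto \Theta(m_\chi - E)\, g(z_E) / [E\, H(z_E)]$, $\quad z_E = m_\chi/E - 1$

    where $g$ collects the factors evaluated at the emission redshift; the sharp edge at $E = m_\chi$ is the spectral feature proposed above.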

  3. Positron annihilation and magnetic properties studies of copper substituted nickel ferrite nanoparticles

    Science.gov (United States)

    Kargar, Z.; Asgarian, S. M.; Mozaffari, M.

    2016-05-01

    Single-phase copper-substituted nickel ferrite Ni1-xCuxFe2O4 (x = 0.0, 0.1, 0.3 and 0.5) nanoparticles were synthesized by the sol-gel method. TEM images of the samples confirm the formation of nano-sized particles. Rietveld refinement of the X-ray diffraction patterns showed that the lattice constant increases with copper content, from 8.331 Å for x = 0.0 to 8.355 Å for x = 0.5. The cation distribution of the samples was determined from the occupancy factors using Rietveld refinement. The positron lifetime spectra of the samples were decomposed into three lifetime components. The shortest lifetime is due to positrons that do not get trapped by vacancy defects. The second lifetime is ascribed to annihilation of positrons in tetrahedral (A) and octahedral (B) sites of the spinel structure. It is seen that for the x = 0.1 and 0.3 samples positrons are trapped within vacancies at A sites, whereas for x = 0.0 and 0.5 the positrons are trapped and annihilated within occupied B sites. The longest lifetime component is attributed to annihilation of positrons in the free volume between nanoparticles. The results obtained from coincidence Doppler broadening spectroscopy (CDBS) confirmed the results of positron annihilation lifetime spectroscopy (PALS) and also showed that the vacancy cluster concentration for x = 0.3 is higher than in the other samples. The average defect density in the samples, determined from the mean lifetime of annihilated positrons, indicates that the vacancy concentration is maximal for x = 0.3. The magnetic measurements showed that the saturation magnetization is maximal for x = 0.3, which can be explained by Néel's theory. The coercivity of the nanoparticles increases with copper content. This increase is ascribed to the change in the anisotropy constant, caused by the increase of the average defect density due to the substitution of Cu2+ cations and by the magnetocrystalline anisotropy of the Cu2+ cations. The Curie temperature of the samples decreases with increasing copper content, which ...
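
    For context (standard positron-lifetime relations, not the authors' specific fit), the mean lifetime quoted above is the intensity-weighted average of the resolved components, and in the simplest two-state trapping model the trapping rate $\kappa$ scales with the defect concentration $C_d$:

        $\bar{\tau} = \sum_i I_i \tau_i$, $\quad \kappa = (I_2/I_1)\,(1/\tau_b - 1/\tau_2) = \mu\, C_d$

    with $\mu$ the specific trapping coefficient; a larger mean lifetime at x = 0.3 then signals the higher vacancy density reported.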

  4. Probability density functions characterizing PSC particle size distribution parameters for NAT and STS derived from in situ measurements between 1989 and 2010 above McMurdo Station, Antarctica, and between 1991-2004 above Kiruna, Sweden

    Science.gov (United States)

    Deshler, Terry

    2016-04-01

    Balloon-borne optical particle counters were used to make in situ size-resolved particle concentration measurements within polar stratospheric clouds (PSCs) over 20 years in the Antarctic and over 10 years in the Arctic. The measurements were made primarily during the late winter in the Antarctic and in the early and mid-winter in the Arctic. Measurements in early and mid-winter were also made during 5 years in the Antarctic. For the analysis, bimodal lognormal size distributions are fit to 250 meter averages of the particle concentration data. The characteristics of these fits, along with temperature, water and nitric acid vapor mixing ratios, are used to classify the PSC observations as either NAT, STS, ice, or some mixture of these. The vapor mixing ratios are obtained from satellite when possible; otherwise assumptions are made. This classification of the data is used to construct probability density functions for NAT, STS, and ice number concentration, median radius and distribution width for mid and late winter clouds in the Antarctic and for early and mid-winter clouds in the Arctic. Additional analysis is focused on characterizing the temperature histories associated with the particle classes and the different time periods. The results from these analyses will be presented, and should be useful to set bounds for retrievals of PSC properties from remote measurements, and to constrain model representations of PSCs.

  5. Determination of the 3\\gamma fraction from positron annihilation in mesoporous materials for symmetry violation experiment with J-PET scanner

    CERN Document Server

    Jasińska, B; Wiertel, M; Zaleski, R; Alfs, D; Bednarski, T; Białas, P; Czerwiński, E; Dulski, K; Gajos, A; Głowacz, B; Kamińska, D; Kapłon, Ł; Korcyl, G; Kowalski, P; Kozik, T; Krzemień, W; Kubicz, E; Mohammed, M; Niedźwiecki, Sz; Pałka, M; Raczyński, L; Rudy, Z; Rundel, O; Sharma, N G; Silarski, M; Słomski, A; Strzelecki, A; Wieczorek, A; Wiślicki, W; Zgardzińska, B; Zieliński, M; Moskal, P

    2016-01-01

    Various mesoporous materials were investigated to choose the best material for experiments requiring a high yield of long-lived positronium. We found that the fraction of 3γ annihilation, determined using γ-ray energy spectra and positron annihilation lifetime (PAL) spectra, varies from 20% to 25%. The 3γ fraction and the o-Ps formation probability are found to be largest in the polymer XAD-4. Elemental analysis performed using a scanning electron microscope (SEM) equipped with energy-dispersive X-ray spectroscopy (EDS) shows the high purity of the investigated materials.
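
    A commonly used estimate connecting the two reported quantities (an illustrative relation, not necessarily the analysis used in the paper): with o-Ps formed in 3/4 of positronium formation events and decaying by 3γ only in the fraction not removed by pick-off,

        $f_{3\gamma} \approx (3/4)\, P_{Ps}\, \tau_{o\text{-}Ps} / \tau_{3\gamma}^{\rm vac}$, $\quad \tau_{3\gamma}^{\rm vac} \approx 142\ {\rm ns}$

    which is why materials combining a high Ps formation probability with a long o-Ps lifetime, such as XAD-4, maximize the 3γ yield.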

  6. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    Energy Technology Data Exchange (ETDEWEB)

    Taasti, Vicki Trier; Knudsen, Helge [Dept. of Physics and Astronomy, Aarhus University (Denmark); Holzscheiter, Michael H. [Dept. of Physics and Astronomy, Aarhus University (Denmark); Dept. of Physics and Astronomy, University of New Mexico (United States); Sobolevsky, Nikolai [Institute for Nuclear Research of the Russian Academy of Sciences (INR), Moscow (Russian Federation); Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Thomsen, Bjarne [Dept. of Physics and Astronomy, Aarhus University (Denmark); Bassler, Niels, E-mail: bassler@phys.au.dk [Dept. of Physics and Astronomy, Aarhus University (Denmark)

    2015-03-15

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment has been applied. Furthermore, the Fermi–Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories which have been developed and give better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of these new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi–Teller Z-law, even though experimental data indicate that the Z-law does not adequately describe annihilation on compounds. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.
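
    For reference, the Fermi-Teller Z-law mentioned here assigns the capture probability of a stopped antiproton in a compound to each constituent in proportion to its nuclear charge (this is the default assumption being questioned in the text),

        $P_i = n_i Z_i / \sum_j n_j Z_j$

    with $n_i$ the stoichiometric abundance and $Z_i$ the atomic number of species $i$; the alternative capture models investigated replace this simple proportionality.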

  7. Baryon production in $e^{+}e^{-}$-annihilation at PETRA

    CERN Document Server

    Bartel, Wulfrin; Dittmann, P; Eichler, R; Felst, R; Haidt, Dieter; Krehbiel, H; Meier, K; Naroska, Beate; O'Neill, L H; Steffen, P; Wenninger, Horst; Zhang, Y; Elsen, E E; Helm, M; Petersen, A; Warming, P; Weber, G; Bethke, Siegfried; Drumm, H; Heintze, J; Heinzelmann, G; Hellenbrand, K H; Heuer, R D; Von Krogh, J; Lennert, P; Kawabata, S; Matsumura, H; Nozaki, T; Olsson, J; Rieseberg, H; Wagner, A; Bell, A; Foster, F; Hughes, G; Wriedt, H; Allison, J; Ball, A H; Bamford, G; Barlow, R; Bowdery, C K; Duerdoth, I P; Hassard, J F; King, B T; Loebinger, F K; MacBeth, A A; McCann, H; Mills, H E; Murphy, P G; Prosper, H B; Stephens, K; Clarke, D; Goddard, M C; Marshall, R; Pearce, G F; Kobayashi, T; Komamiya, S; Koshiba, M; Minowa, M; Nozaki, M; Orito, S; Sato, A; Suda, T; Takeda, H; Totsuka, Y; Watanabe, Y; Yamada, S; Yanagisawa, C

    1981-01-01

    Data on p and Λ production in e⁺e⁻ annihilation at CM energies between 30 and 36 GeV are presented. An indication of an angular anticorrelation in events with baryon-antibaryon pairs is seen.

  8. Aspects of meson spectroscopy with N N annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Dover, C.B.

    1990-04-01

    We focus on the potentialities of nucleon-antinucleon (N̄N) annihilation as a means of producing new mesonic states. The case for the existence of quasinuclear N̄N bound states is discussed in detail. Strong evidence for a 2⁺⁺(0⁺) state of this type has been obtained at LEAR in annihilation from the p-wave (L = 1) N̄N system, in support of earlier sightings of this object in L = 0 annihilation at Brookhaven. In the next generation of LEAR experiments, the emphasis shifts to the search for mesons containing dynamical excitations of the gluonic field, namely glueballs and hybrids (QQ̄g). We discuss some features of the masses, decay branching ratios and production mechanisms for these states, and suggest particular N̄N annihilation channels which are optimal for their discovery. 59 refs., 15 figs.

  9. Positron-molecule interactions: resonant attachment, annihilation, and bound states

    CERN Document Server

    Gribakin, G F; Surko, C M; 10.1103/RevModPhys.82.2557

    2010-01-01

    This article presents an overview of current understanding of the interaction of low-energy positrons with molecules with emphasis on resonances, positron attachment and annihilation. Annihilation rates measured as a function of positron energy reveal the presence of vibrational Feshbach resonances (VFR) for many polyatomic molecules. These resonances lead to strong enhancement of the annihilation rates. They also provide evidence that positrons bind to many molecular species. A quantitative theory of VFR-mediated attachment to small molecules is presented. It is tested successfully for selected molecules (e.g., methyl halides and methanol) where all modes couple to the positron continuum. Combination and overtone resonances are observed and their role is elucidated. In larger molecules, annihilation rates from VFR far exceed those explicable on the basis of single-mode resonances. These enhancements increase rapidly with the number of vibrational degrees of freedom. While the details are as yet unclear, intr...
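
    The annihilation rates discussed here are conventionally normalized through the dimensionless parameter $Z_{\rm eff}$ (a standard definition in this field),

        $\lambda = \pi r_0^2\, c\, n\, Z_{\rm eff}$

    where $r_0$ is the classical electron radius and $n$ the molecular number density, so the resonant enhancements described above appear as values of $Z_{\rm eff}$ far exceeding the actual number of electrons per molecule.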

  10. Neutrinos from WIMP annihilations in the Sun including neutrino oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Blennow, Mattias, E-mail: emb@kth.se [Department of Theoretical Physics, School of Engineering Sciences, Royal Institute of Technology (KTH) - AlbaNova University Center, SE-106 91 Stockholm (Sweden); Edsjoe, Joakim, E-mail: edsjo@physto.se [Department of Physics, Stockholm University - AlbaNova University Center, SE-106 91 Stockholm (Sweden); Ohlsson, Tommy, E-mail: tommy@theophys.kth.se [Department of Theoretical Physics, School of Engineering Sciences, Royal Institute of Technology (KTH) - AlbaNova University Center, SE-106 91 Stockholm (Sweden)

    2011-12-15

    The prospects to detect neutrinos from the Sun arising from dark matter annihilations in the core of the Sun are reviewed. Emphasis is placed on new work investigating the effects of neutrino oscillations on the expected neutrino fluxes.

  11. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, the binomial theorem, probability distributions, means, standard deviations, the probability function of the binomial distribution, and more. Includes 360 problems with answers to half of them.

  12. Probability 1/e

    Science.gov (United States)

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.

  13. The Effect of Black Holes in Local Dwarf Spheroidal Galaxies on Gamma-Ray Constraints on Dark Matter Annihilation

    CERN Document Server

    Gonzalez-Morales, Alma X; Queiroz, Farinaldo S

    2014-01-01

    The recent evidence for black holes of intermediate mass in dwarf galaxies motivates the assessment of the resulting effect on the host dark matter density profile, and the consequences for the constraints on the plane of the dark matter annihilation cross section versus mass, stemming from the non-observation of gamma rays from local dwarf spheroidals with the Fermi Large Area Telescope. We compute the density profile using three different prescriptions for the black hole mass associated with a given dwarf galaxy, and taking into account the cutoff to the density from dark matter pair-annihilation. We find that the limits on the dark matter annihilation rate from observations of individual dwarfs are enhanced by factors of a few up to $10^6$, depending on the specific galaxy, on the black hole mass prescription, and on the dark matter particle mass. We estimate limits from combined observations of a sample of 15 dwarfs, for a variety of assumptions on the dwarf black hole mass and on the dark matter density ...
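
    The cutoff to the density from pair annihilation mentioned above is commonly implemented by capping the spike at the density for which particles annihilate within the lifetime $t_{\rm BH}$ of the spike (a standard estimate, not a detail taken from the paper),

        $\rho_{\rm max} \simeq m_\chi / (\langle \sigma v \rangle\, t_{\rm BH})$

    which is why the enhancement of the limits depends on both the dark matter particle mass and the annihilation cross section itself.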

  14. Alternative method for reconstruction of antihydrogen annihilation vertices

    CERN Document Server

    Amole, C; Andresen , G B; Baquero-Ruiz, M; Bertsche, W; Bowe, P D; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Deller, A; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayano, R S; Hayden, M E; Humphries, A J; Hydomako, R; Jonsell, S; Kurchaninov, L; Madsen, N; Menary, S; Nolan, P; Olchanski, K; Olin, A; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; Silveira, D M; So, C; Storey, J W; Thompson, R I; van der Werf, D P; Wurtele, J S; Yamazaki,Y

    2012-01-01

    The ALPHA experiment, located at CERN, aims to compare the properties of antihydrogen atoms with those of hydrogen atoms. The neutral antihydrogen atoms are trapped using an octupole magnetic trap. The trap region is surrounded by a three layered silicon detector used to reconstruct the antiproton annihilation vertices. This paper describes a method we have devised that can be used for reconstructing annihilation vertices with a good resolution and is more efficient than the standard method currently used for the same purpose.

  15. Positron annihilation study on point defects in lead tungstate

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A study of point defects in lead tungstate (PbWO4) using the positron annihilation lifetime method is presented. The measurement was carried out for untreated, vacuum-annealed, oxygen-annealed and La-doped PbWO4 crystals. It was found that the τ₂ components, which reflect positron annihilation in point defects, differ from one case to another. Some tentative models for the defects are discussed.

  16. Initial State Radiation in Majorana Dark Matter Annihilations

    CERN Document Server

    Ciafaloni, Paolo; Comelli, Denis; De Simone, Andrea; Riotto, Antonio; Urbano, Alfredo

    2011-01-01

    The cross section for a Majorana Dark Matter particle annihilating into light fermions is helicity suppressed. We show that, if the Dark Matter is the neutral Majorana component of a multiplet which is charged under the electroweak interactions of the Standard Model, the emission of gauge bosons from the initial state lifts the suppression and allows an s-wave annihilation. The resulting energy spectra of stable Standard Model particles are importantly affected. This has an impact on indirect searches for Dark Matter.

  17. Significant gamma-ray lines from dark matter annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Duerr, Michael [DESY, Notkestrasse 85, 22607 Hamburg (Germany); Fileviez Perez, Pavel; Smirnov, Juri [Max-Planck-Institut fuer Kernphysik, Saupfercheckweg 1, 69117 Heidelberg (Germany)

    2016-07-01

    Gamma-ray lines from dark matter annihilation are commonly seen as a ''smoking gun'' for the particle nature of dark matter. However, in many dark matter models the continuum background from tree-level annihilations makes such a line invisible. I present two simple extensions of the Standard Model where the continuum contributions are suppressed and the gamma-ray lines are easily visible over the continuum background.

  18. The Characterization of the Gamma-Ray Signal from the Central Milky Way: A Compelling Case for Annihilating Dark Matter

    CERN Document Server

    Daylan, Tansu; Hooper, Dan; Linden, Tim; Portillo, Stephen K N; Rodd, Nicholas L; Slatyer, Tracy R

    2014-01-01

    Past studies have identified a spatially extended excess of ~1-3 GeV gamma rays from the region surrounding the Galactic Center, consistent with the emission expected from annihilating dark matter. We revisit and scrutinize this signal with the intention of further constraining its characteristics and origin. By applying cuts to the Fermi event parameter CTBCORE, we suppress the tails of the point spread function and generate high resolution gamma-ray maps, enabling us to more easily separate the various gamma-ray components. Within these maps, we find the GeV excess to be robust and highly statistically significant, with a spectrum, angular distribution, and overall normalization that is in good agreement with that predicted by simple annihilating dark matter models. For example, the signal is very well fit by a 31-40 GeV dark matter particle annihilating to b quarks with an annihilation cross section of sigma v = (1.4-2.0) x 10^-26 cm^3/s (normalized to a local dark matter density of 0.3 GeV/cm^3). Furtherm...

  19. The Characterization of the Gamma-Ray Signal from the Central Milky Way: A Compelling Case for Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Daylan, Tansu [Harvard Univ., Cambridge, MA (United States); Finkbeiner, Douglas P. [Harvard-Smithsonian Center, Cambridge, MA (United States); Hooper, Dan [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Linden, Tim [Univ. of Illinois at Chicago, Chicago, IL (United States); Portillo, Stephen K. N. [Harvard-Smithsonian Center, Cambridge, MA (United States); Rodd, Nicholas L. [Massachusetts Institute of Technology, Boston, MA (United States); Slatyer, Tracy R. [Institute for Advanced Study, Princeton, NJ (United States)

    2014-02-26

    Past studies have identified a spatially extended excess of ~1-3 GeV gamma rays from the region surrounding the Galactic Center, consistent with the emission expected from annihilating dark matter. We revisit and scrutinize this signal with the intention of further constraining its characteristics and origin. By applying cuts to the Fermi event parameter CTBCORE, we suppress the tails of the point spread function and generate high resolution gamma-ray maps, enabling us to more easily separate the various gamma-ray components. Within these maps, we find the GeV excess to be robust and highly statistically significant, with a spectrum, angular distribution, and overall normalization that is in good agreement with that predicted by simple annihilating dark matter models. For example, the signal is very well fit by a 31-40 GeV dark matter particle annihilating to b quarks with an annihilation cross section of sigma v = (1.4-2.0) x 10^-26 cm^3/s (normalized to a local dark matter density of 0.3 GeV/cm^3). Furthermore, we confirm that the angular distribution of the excess is approximately spherically symmetric and centered around the dynamical center of the Milky Way (within ~0.05 degrees of Sgr A*), showing no sign of elongation along or perpendicular to the Galactic Plane. The signal is observed to extend to at least 10 degrees from the Galactic Center, disfavoring the possibility that this emission originates from millisecond pulsars.
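
    For context, the quoted normalization enters through the standard prompt-annihilation flux formula (a widely used schematic form; conventions and prefactors vary between analyses),

        $d\Phi_\gamma/dE = \frac{\langle \sigma v \rangle}{8\pi m_\chi^2}\, \frac{dN_\gamma}{dE} \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho^2(\ell)\, d\ell$

    so the inferred cross section scales inversely with the square of the assumed local density normalization (here 0.3 GeV/cm^3) through the line-of-sight integral over the Galactic-center profile.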

  20. Search for Dark Matter Annihilation Signals from the Fornax Galaxy Cluster with H.E.S.S

    CERN Document Server

    Abramowski, A; Aharonian, F; Akhperjanian, A G; Anton, G; Balzer, A; Barnacka, A; de Almeida, U Barres; Becherini, Y; Becker, J; Behera, B; Bernlöhr, K; Birsin, E; Biteau, J; Bochow, A; Boisson, C; Bolmont, J; Bordas, P; Brucker, J; Brun, F; Brun, P; Bulik, T; Büsching, I; Carrigan, S; Casanova, S; Cerruti, M; Chadwick, P M; Charbonnier, A; Chaves, R C G; Cheesebrough, A; Clapson, A C; Coignet, G; Cologna, G; Conrad, J; Dalton, M; Daniel, M K; Davids, I D; Degrange, B; Deil, C; Dickinson, H J; Djannati-Ataï, A; Domainko, W; Drury, L O'C; Dubus, G; Dutson, K; Dyks, J; Dyrda, M; Egberts, K; Eger, P; Espigat, P; Fallon, L; Farnier, C; Fegan, S; Feinstein, F; Fernandes, M V; Fiasson, A; Fontaine, G; Förster, A; Füßling, M; Gallant, Y A; Gast, H; Gérard, L; Gerbig, D; Giebels, B; Glicenstein, J F; Glück, B; Goret, P; Göring, D; Häffner, S; Hague, J D; Hampf, D; Hauser, M; Heinz, S; Heinzelmann, G; Henri, G; Hermann, G; Hinton, J A; Hoffmann, A; Hofmann, W; Hofverberg, P; Holler, M; Horns, D; Jacholkowska, A; de Jager, O C; Jahn, C; Jamrozy, M; Jung, I; Kastendieck, M A; Katarzyński, K; Katz, U; Kaufmann, S; Keogh, D; Khangulyan, D; Khélifi, B; Klochkov, D; Kluźniak, W; Kneiske, T; Komin, Nu; Kosack, K; Kossakowski, R; Laffon, H; Lamanna, G; Lennarz, D; Lohse, T; Lopatin, A; Lu, C -C; Marandon, V; Marcowith, A; Masbou, J; Maurin, D; Maxted, N; Mayer, M; McComb, T J L; Medina, M C; Méhault, J; Moderski, R; Moulin, E; Naumann, C L; Naumann-Godo, M; de Naurois, M; Nedbal, D; Nekrassov, D; Nguyen, N; Nicholas, B; Niemiec, J; Nolan, S J; Ohm, S; Wilhelmi, E de Oña; Opitz, B; Ostrowski, M; Oya, I; Panter, M; Arribas, M Paz; Pedaletti1, G; Pelletier, G; Petrucci, P -O; Pita, S; Pühlhofer, G; Punch, M; Quirrenbach, A; Raue, M; Rayner, S M; Reimer, A; Reimer, O; Renaud, M; Reyes, R de los; Rieger, F; Ripken, J; Rob, L; Rosier-Lees, S; Rowell, G; Rudak, B; Rulten, C B; Ruppel, J; Sahakian, V; Sanchez, D A; Santangelo, A; Schlickeiser, R; Schöck, F M; Schulz, A; Schwanke, U; Schwarzburg, S; Schwemmer, S; Sheidaei, F; Skilton, J L; Sol, H; Spengler, G; Stawarz, Ł; Steenkamp, R; Stegmann, C; Stinzing, F; Stycz, K; Sushch, I; Szostek, A; Tavernet, J -P; Terrier, R; Tluczykont, M; Valerius, K; van Eldik, C; Vasileiadis, G; Venter, C; Vialle, J P; Viana, A; Vincent, P; Völk, H J; Volpe, F; Vorobiov, S; Vorster, M; Wagner, S J; Ward, M; White, R; Wierzcholska, A; Zacharias, M; Zajczyk, A; Zdziarski, A A; Zech, A; Zechlin, H -S

    2012-01-01

    The Fornax galaxy cluster was observed with the High Energy Stereoscopic System (H.E.S.S.) for a total live time of 14.5 hours, searching for very-high-energy (VHE, E>100 GeV) gamma-rays from dark matter (DM) annihilation. No significant signal was found in searches for point-like and extended emissions. Using several models of the DM density distribution, upper limits on the DM velocity-weighted annihilation cross-section as a function of the DM particle mass are derived. Constraints are derived for different DM particle models, such as those arising from Kaluza-Klein and supersymmetric models. Various annihilation final states are considered. Possible enhancements of the DM annihilation gamma-ray flux, due to DM substructures of the DM host halo, or from the Sommerfeld effect, are studied. Additional gamma-ray contributions from internal bremsstrahlung and inverse Compton radiation are also discussed. For a DM particle mass of 1 TeV, the exclusion limits at 95% confidence level reach values of ~ 10^-23...

  1. Raman Cooling of Solids through Photonic Density of States Engineering

    CERN Document Server

    Chen, Yin-Chung

    2015-01-01

    The laser cooling of vibrational states of solids has been achieved through photoluminescence in rare-earth elements, optical forces in optomechanics, and the Brillouin scattering light-sound interaction. The net cooling of solids through spontaneous Raman scattering, and laser refrigeration of indirect band gap semiconductors, both remain unsolved challenges. Here, we analytically show that photonic density of states (DoS) engineering can address the two fundamental requirements for achieving spontaneous Raman cooling: suppressing the dominance of Stokes (heating) transitions, and the enhancement of anti-Stokes (cooling) efficiency beyond the natural optical absorption of the material. We develop a general model for the DoS modification to spontaneous Raman scattering probabilities, and elucidate the necessary and minimum condition required for achieving net Raman cooling. With a suitably engineered DoS, we establish the enticing possibility of refrigeration of intrinsic silicon by annihilating phonons from ...

  2. Positronium in Solids: Computer Simulation of Pick-Off and Self-Annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Bug, A; Muluneh, M; Waldman, J; Sterne, P

    2003-08-01

    Positronium (Ps) is simulated using Path Integral Monte Carlo (PIMC). This method can reproduce the results of previous simple theories in which a single quantum particle is used to represent Ps within an idealized pore. In addition, the calculations treat the e^- and e^+ of Ps exactly and realistically model interactions with solid atoms, thereby correcting and extending the simpler theory. The pick-off lifetime of o-Ps and the internal contact density, κ, which controls the self-annihilation behavior, are studied for Ps in model voids (spherical pores), defects in a solid (argon), and microporous solids (zeolites).

  3. Sommerfeld enhancement of DM annihilation: resonance structure, freeze-out and CMB spectral bound

    DEFF Research Database (Denmark)

    Hannestad, Steen; Bülow, Thomas Tram

    2011-01-01

    In the last few years there has been some interest in WIMP Dark Matter models featuring a velocity dependent cross section through the Sommerfeld enhancement mechanism, which is a non-relativistic effect due to massive bosons in the dark sector. In the first part of this article, we find analytic... In the second part of the article we perform a detailed computation of the Dark Matter relic density for models having Sommerfeld enhancement by solving the Boltzmann equation numerically. We calculate the expected distortions of the CMB blackbody spectrum from WIMP annihilations and compare these to the bounds...
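
    The relic-density computation referred to here solves the standard Boltzmann equation for the dark matter number density n (a sketch in the usual notation, with the Sommerfeld effect entering through a velocity-dependent thermally averaged cross section; not copied from the record):

        \frac{dn}{dt} + 3 H n \;=\; -\,\langle\sigma v\rangle \left( n^{2} - n_{\rm eq}^{2} \right),

    where H is the Hubble rate and n_eq the equilibrium number density; freeze-out occurs roughly when the annihilation rate n⟨σv⟩ drops below H.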

  4. Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation

    OpenAIRE

    Beneke, M.; Hellmann, C.; Ruiz-Femenia, P.

    2012-01-01

    We compute analytically the tree-level annihilation rates of a collection of non-relativistic neutralino and chargino two-particle states in the general MSSM, including the previously unknown off-diagonal rates. The results are prerequisites to the calculation of the Sommerfeld enhancement in the MSSM, which will be presented in subsequent work. They can also be used to obtain concise analytic expressions for MSSM dark matter pair annihilation in the present Universe for a large number of exc...

  5. Antimatter annihilation detection with AEgIS

    CERN Document Server

    Gligorova, Angela

    2015-01-01

    AEgIS (Antimatter Experiment: Gravity, Interferometry, Spectroscopy) is an antimatter experiment based at CERN, whose primary goal is to carry out the first direct measurement of the Earth's gravitational acceleration on antimatter. A precise measurement of antimatter gravity would be the first precision test of the Weak Equivalence Principle for antimatter. The principle of the experiment is based on the formation of antihydrogen through a charge exchange reaction between laser excited (Rydberg) positronium and ultra-cold antiprotons. The antihydrogen atoms will be accelerated by an inhomogeneous electric field (Stark acceleration) to form a pulsed cold beam. The free fall of the antihydrogen due to Earth's gravity will be measured using a moiré deflectometer and a hybrid position detector. This detector is foreseen to consist of an active silicon part, where the annihilation of antihydrogen takes place, followed by an emulsion part coupled to a fiber time-of-flight detector. This overview prese...

  6. On the Annihilation Rate of WIMPs

    CERN Document Server

    Baumgart, Matthew; Vaidya, Varun

    2014-01-01

    We develop a formalism that allows one to systematically calculate the WIMP annihilation rate into gamma rays whose energy far exceeds the weak scale. A factorization theorem is presented which separates the radiative corrections stemming from initial state potential interactions from loops involving the final state. This separation allows us to go beyond the fixed-order calculation, which is polluted by large infrared logarithms. For the case of Majorana WIMPs transforming in the adjoint representation of SU(2), we present the result for the resummed rate at leading double log accuracy in terms of two initial state partial wave matrix elements and one hard matching coefficient. For a given model, one may calculate the cross section by calculating the tree level matching coefficient and determining the value of a local four fermion operator. We find that the effects of resummation can be as large as 100% for a 20 TeV WIMP. The generalization of the formalism to other types of WIMPs is discussed.

  7. Discrete and Continuous Methods for Modeling the Probability Density of the Stochastic Volatility of Financial Series Returns

    Directory of Open Access Journals (Sweden)

    Carlos Alexánder Grajales Correa

    2007-07-01

    This work considers the daily returns of a financial asset in order to model and compare the probability density of the stochastic volatility of the returns. To that end, ARCH models and their extensions, which are formulated in discrete time, are proposed, as well as an empirical stochastic volatility model developed by Paul Wilmott. For the discrete case, models that estimate the heteroscedastic conditional volatility at a time t, t∈[1,T], are presented. In the continuous case, an Itô diffusion process is associated with the stochastic volatility of the financial series, which makes it possible to discretize the process and simulate it to obtain empirical probability densities of the volatility. Finally, the results obtained with these methodologies are illustrated and compared for the S&P 500 series of the USA, the Index of Prices and Quotations of the Mexican stock exchange (IPC), and the IGBC of Colombia.
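
    As an illustration of the continuous-time step described in this record (discretizing an Itô diffusion for the volatility and simulating it to build an empirical density), the sketch below applies a plain Euler-Maruyama scheme to a generic mean-reverting log-volatility model. The model form and all parameter values are illustrative assumptions, not the specific empirical model of Wilmott fitted in the paper.

        import numpy as np

        # Euler-Maruyama sketch: simulate a generic mean-reverting log-volatility
        # diffusion d(ln v) = kappa*(theta - ln v) dt + xi dW and histogram the
        # result to obtain an empirical probability density of the volatility.
        # Parameters below are assumed for illustration, not fitted to any series.
        rng = np.random.default_rng(0)
        kappa, theta, xi = 5.0, np.log(0.2), 0.8       # reversion speed, long-run log-vol, vol-of-vol
        dt, n_steps, n_paths = 1.0 / 252, 252, 10_000  # daily steps over one year, 10k paths

        log_v = np.full(n_paths, theta)                # start every path at the long-run level
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)
            log_v += kappa * (theta - log_v) * dt + xi * dW

        vol = np.exp(log_v)
        density, edges = np.histogram(vol, bins=60, density=True)  # empirical density of volatility
        print(f"mean volatility ~ {vol.mean():.3f}, std ~ {vol.std():.3f}")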

  8. Seismic Analysis of Large-scale Aqueduct Structures Based on the Probability Density Evolution Method

    Institute of Scientific and Technical Information of China (English)

    曾波; 邢彦富; 刘章军

    2014-01-01

    Using the orthogonal expansion method for random processes, the non-stationary seismic acceleration process is represented as a linear combination of standard orthogonal basis functions and standard orthogonal random variables. Then, using the random function approach, these standard orthogonal random variables in the orthogonal expansion are expressed as an orthogonal function of a basic random variable, so that the original ground-motion process is described by a single basic random variable. The orthogonal expansion-random function approach was used to generate 126 representative earthquake samples, and each representative sample was assigned a given probability. The 126 representative earthquake samples were combined with the probability density evolution method of stochastic dynamical systems, and the random seismic responses of large-scale aqueduct structures were investigated. Four cases were considered: aqueduct without water, aqueduct with water in the central trough, aqueduct with water in two side troughs, and aqueduct with water in all three troughs; probability information of the seismic responses was obtained for each case. Moreover, using the proposed method, the seismic reliability of the aqueduct structures was efficiently calculated. This method provides a new and effective means for precise seismic analysis of large-scale aqueduct structures.
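
    The sketch below illustrates the generic first step described here: representing a zero-mean random process as a linear combination of orthogonal basis functions with uncorrelated standard random coefficients and drawing representative samples from it. The sine basis and the decaying modal variances are assumptions made for illustration; the paper's specific orthogonal expansion-random function model, and its assignment of probabilities to 126 representative samples, is not reproduced.

        import numpy as np

        # Generic orthogonal-expansion sketch: a zero-mean process on [0, T] is written as
        #   a(t) = sum_k sqrt(lam_k) * xi_k * phi_k(t),
        # with orthonormal sine basis functions phi_k, assumed modal variances lam_k,
        # and standard (zero-mean, unit-variance, uncorrelated) random variables xi_k.
        rng = np.random.default_rng(1)
        T, n_t, n_modes, n_samples = 20.0, 2000, 100, 16
        t = np.linspace(0.0, T, n_t)

        k = np.arange(1, n_modes + 1)
        phi = np.sqrt(2.0 / T) * np.sin(np.pi * np.outer(k, t) / T)  # basis functions, shape (K, Nt)
        lam = 1.0 / k**2                                             # assumed decaying modal variances

        xi = rng.standard_normal((n_samples, n_modes))               # standard random coefficients
        samples = (xi * np.sqrt(lam)) @ phi                          # representative realizations
        print(samples.shape)                                         # (16, 2000)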

  9. A New Track-Before-Detect Algorithm Based on the Cardinalized Probability Hypothesis Density Filter

    Institute of Scientific and Technical Information of China (English)

    林再平; 周一宇; 安玮

    2013-01-01

    The track-before-detect (TBD) algorithm based on the cardinalized probability hypothesis density (CPHD) filter can effectively detect and track weak, dim targets when the number of targets is unknown. This paper studies the CPHD algorithm in depth. Starting from the particle weight update of the standard CPHD filter and taking the practical requirements of track-before-detect into account, the particle weight update expression of the CPHD-TBD algorithm is derived; the physical meaning of the target cardinality distribution in CPHD filtering is analyzed, and the update of the cardinality distribution is applied to track-before-detect. By combining the CPHD filter with TBD, a track-before-detect algorithm based on the cardinalized probability hypothesis density filter is proposed and its detailed implementation steps are given. Simulation experiments show that, compared with the existing probability hypothesis density track-before-detect (PHD-TBD) algorithm, the proposed CPHD-TBD algorithm conveys more detailed information about the target distribution, fundamentally changes the way the number of targets is estimated, estimates the target number more accurately and stably, and achieves target detection and accurate state estimation with clearly better performance.

  10. Charmed meson production by e^+e^- annihilation [branching ratios, angular distributions]

    Energy Technology Data Exchange (ETDEWEB)

    Wiss, J.E.

    1977-08-01

    Compelling evidence is presented for the production of the lowest-lying (D^0, D^+) isodoublet of charmed mesons by e^+e^- annihilation. A study of the recoil mass spectra against these mesons reveals the presence of more massive charmed states, the D*^0 and D*^+, produced in association with the D isodoublet. Mass values and upper limits on the width of the D and D* are established, and the branching fractions for several D* decay modes are obtained. An analysis of the production and decay angular distributions shows that the D is probably a pseudoscalar state and the D* is probably a vector. Finally, upper limits are obtained for D^0-anti-D^0 mixing.

  11. Predicting the neutralino relic density in the MSSM more precisely

    CERN Document Server

    Harz, Julia; Klasen, Michael; Kovařík, Karol; Steppeler, Patrick

    2016-01-01

    The dark matter relic density being a powerful observable to constrain models of new physics, the recent experimental progress calls for more precise theoretical predictions. On the particle physics side, improvements are to be made in the calculation of the (co)annihilation cross-section of the dark matter particle. We present the project DM@NLO which aims at calculating the neutralino (co)annihilation cross-section in the MSSM including radiative corrections in QCD. In the present document, we briefly review selected results for different (co)annihilation processes. We then discuss the estimation of the associated theory uncertainty obtained by varying the renormalization scale. Finally, perspectives are discussed.

  12. Dark Matter annihilation in Draco: new considerations of the expected gamma flux

    CERN Document Server

    Sanchez-Conde, M A; Lokas, E L; Gómez, M E; Wojtak, R; Moles, M

    2007-01-01

    A new revision of the gamma flux that we expect to detect with Imaging Atmospheric Cherenkov Telescopes (IACTs) from SUSY dark matter annihilation in the Draco dSph is presented, using dark matter density profiles compatible with the latest observations. This revision also takes into account the important effect of the Point Spread Function (PSF) of the telescope. We show that this effect is crucial to the way we will observe and interpret a possible signal detection. In particular, it could be impossible to discriminate between a cuspy and a cored dark matter density profile, because both density profiles may yield very similar flux profiles as observed by the telescope. Finally, we discuss the prospects to detect a possible gamma signal from Draco for current or planned experiments, i.e. MAGIC, GLAST and GAW.

  13. Search for Dark Matter Annihilation Signals from the Fornax Galaxy Cluster with H.E.S.S.

    Science.gov (United States)

    Abramowski, A.; Acero, F.; Aharonian, F.; Akhperjanian, A. G.; Anton, G.; Balzer, A.; Barnacka, A.; Barres de Almeida, U.; Becherini, Y.; Becker, J.; Behera, B.; Bernlöhr, K.; Birsin, E.; Biteau, J.; Bochow, A.; Boisson, C.; Bolmont, J.; Bordas, P.; Brucker, J.; Brun, F.; Brun, P.; Bulik, T.; Büsching, I.; Carrigan, S.; Casanova, S.; Cerruti, M.; Chadwick, P. M.; Charbonnier, A.; Chaves, R. C. G.; Cheesebrough, A.; Clapson, A. C.; Coignet, G.; Cologna, G.; Conrad, J.; Dalton, M.; Daniel, M. K.; Davids, I. D.; Degrange, B.; Deil, C.; Dickinson, H. J.; Djannati-Ataï, A.; Domainko, W.; Drury, L. O'C.; Dubus, G.; Dutson, K.; Dyks, J.; Dyrda, M.; Egberts, K.; Eger, P.; Espigat, P.; Fallon, L.; Farnier, C.; Fegan, S.; Feinstein, F.; Fernandes, M. V.; Fiasson, A.; Fontaine, G.; Förster, A.; Füßling, M.; Gallant, Y. A.; Gast, H.; Gérard, L.; Gerbig, D.; Giebels, B.; Glicenstein, J. F.; Glück, B.; Goret, P.; Göring, D.; Häffner, S.; Hague, J. D.; Hampf, D.; Hauser, M.; Heinz, S.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hinton, J. A.; Hoffmann, A.; Hofmann, W.; Hofverberg, P.; Holler, M.; Horns, D.; Jacholkowska, A.; de Jager, O. C.; Jahn, C.; Jamrozy, M.; Jung, I.; Kastendieck, M. A.; Katarzyński, K.; Katz, U.; Kaufmann, S.; Keogh, D.; Khangulyan, D.; Khélifi, B.; Klochkov, D.; Kluźniak, W.; Kneiske, T.; Komin, Nu.; Kosack, K.; Kossakowski, R.; Laffon, H.; Lamanna, G.; Lennarz, D.; Lohse, T.; Lopatin, A.; Lu, C.-C.; Marandon, V.; Marcowith, A.; Masbou, J.; Maurin, D.; Maxted, N.; Mayer, M.; McComb, T. J. L.; Medina, M. C.; Méhault, J.; Moderski, R.; Moulin, E.; Naumann, C. L.; Naumann-Godo, M.; de Naurois, M.; Nedbal, D.; Nekrassov, D.; Nguyen, N.; Nicholas, B.; Niemiec, J.; Nolan, S. J.; Ohm, S.; de Oña Wilhelmi, E.; Opitz, B.; Ostrowski, M.; Oya, I.; Panter, M.; Paz Arribas, M.; Pedaletti, G.; Pelletier, G.; Petrucci, P.-O.; Pita, S.; Pühlhofer, G.; Punch, M.; Quirrenbach, A.; Raue, M.; Rayner, S. M.; Reimer, A.; Reimer, O.; Renaud, M.; de los Reyes, R.; Rieger, F.; Ripken, J.; Rob, L.; Rosier-Lees, S.; Rowell, G.; Rudak, B.; Rulten, C. B.; Ruppel, J.; Sahakian, V.; Sanchez, D. A.; Santangelo, A.; Schlickeiser, R.; Schöck, F. M.; Schulz, A.; Schwanke, U.; Schwarzburg, S.; Schwemmer, S.; Sheidaei, F.; Skilton, J. L.; Sol, H.; Spengler, G.; Stawarz, Ł.; Steenkamp, R.; Stegmann, C.; Stinzing, F.; Stycz, K.; Sushch, I.; Szostek, A.; Tavernet, J.-P.; Terrier, R.; Tluczykont, M.; Valerius, K.; van Eldik, C.; Vasileiadis, G.; Venter, C.; Vialle, J. P.; Viana, A.; Vincent, P.; Völk, H. J.; Volpe, F.; Vorobiov, S.; Vorster, M.; Wagner, S. J.; Ward, M.; White, R.; Wierzcholska, A.; Zacharias, M.; Zajczyk, A.; Zdziarski, A. A.; Zech, A.; Zechlin, H.-S.; H. E. S. S. Collaboration

    2012-05-01

    The Fornax galaxy cluster was observed with the High Energy Stereoscopic System for a total live time of 14.5 hr, searching for very high energy (VHE; E > 100 GeV) γ-rays from dark matter (DM) annihilation. No significant signal was found in searches for point-like and extended emissions. Using several models of the DM density distribution, upper limits on the DM velocity-weighted annihilation cross-section ⟨σv⟩ as a function of the DM particle mass are derived. Constraints are derived for different DM particle models, such as those arising from Kaluza-Klein and supersymmetric models. Various annihilation final states are considered. Possible enhancements of the DM annihilation γ-ray flux, due to DM substructures of the DM host halo, or from the Sommerfeld effect, are studied. Additional γ-ray contributions from internal bremsstrahlung and inverse Compton radiation are also discussed. For a DM particle mass of 1 TeV, the exclusion limits at 95% confidence level reach values of ⟨σv⟩ ~ 10^-23 cm^3 s^-1, depending on the DM particle model and halo properties. Additional contribution from DM substructures can improve the upper limits on ⟨σv⟩ by more than two orders of magnitude. At masses around 4.5 TeV, the enhancement by substructures and the Sommerfeld resonance effect results in a velocity-weighted annihilation cross-section upper limit at the level of ⟨σv⟩_95% C.L. ~ 10^-26 cm^3 s^-1.

  14. Evaluating probability forecasts

    CERN Document Server

    Lai, Tze Leung; Shen, David Bo; 10.1214/11-AOS902

    2012-01-01

    Probability forecasts of events are routinely used in climate predictions, in forecasting default probabilities on bank loans or in estimating the probability of a patient's positive response to treatment. Scoring rules have long been used to assess the efficacy of the forecast probabilities after observing the occurrence, or nonoccurrence, of the predicted events. We develop herein a statistical theory for scoring rules and propose an alternative approach to the evaluation of probability forecasts. This approach uses loss functions relating the predicted to the actual probabilities of the events and applies martingale theory to exploit the temporal structure between the forecast and the subsequent occurrence or nonoccurrence of the event.
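
    As a concrete example of the scoring rules mentioned in this record, the snippet below computes the Brier score of a set of probability forecasts against observed binary outcomes. It is a standard textbook score shown only for orientation; the martingale-based evaluation developed in the paper is not reproduced here.

        import numpy as np

        def brier_score(forecast_probs, outcomes):
            """Mean squared difference between forecast probabilities and 0/1 outcomes
            (a standard proper scoring rule; lower is better)."""
            p = np.asarray(forecast_probs, dtype=float)
            y = np.asarray(outcomes, dtype=float)
            return float(np.mean((p - y) ** 2))

        # Illustrative forecasts for five events and what actually happened.
        print(brier_score([0.9, 0.7, 0.2, 0.5, 0.1], [1, 1, 0, 0, 0]))  # 0.08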

  15. Elements of probability theory

    CERN Document Server

    Rumshiskii, L Z

    1965-01-01

    Elements of Probability Theory presents the methods of the theory of probability. This book is divided into seven chapters that discuss the general rule for the multiplication of probabilities, the fundamental properties of the subject matter, and the classical definition of probability. The introductory chapters deal with the functions of random variables; continuous random variables; numerical characteristics of probability distributions; center of the probability distribution of a random variable; definition of the law of large numbers; stability of the sample mean and the method of moments

  16. Partial wave analyses of antiproton-proton annihilations in flight

    Energy Technology Data Exchange (ETDEWEB)

    Pychy, Julian; Koch, Helmut; Kopf, Bertram; Wiedner, Ulrich [Institut fuer Experimentalphysik I, Ruhr-Universitaet Bochum (Germany)

    2015-07-01

    To investigate important aspects for the upcoming PANDA experiment, partial wave analyses (PWA) of p̄p annihilation processes are carried out using data from the Crystal Barrel (LEAR) experiment. A coupled channel analysis of the three reactions resulting in the final states K^+K^-π^0, π^0π^0η and π^0ηη at a beam momentum of 900 MeV/c is currently in progress. Preliminary results on the determination of resonance contributions and of the spin density matrix (SDM) of different light mesons are presented. The elements of the SDM provide important information about the production process. Furthermore, results of the analyses of the channels ωπ^0, ωπ^0η and π^+π^-π^0π^0 are discussed. These studies are focused on the determination of the contributing angular momenta of the p̄p system as well as of the SDM of the ω meson. Significant spin-alignment effects depending on the production angle are visible here. These results are compared with those for the φ(1020) in the K^+K^-π^0 channel. All analyses have been performed using PAWIAN, a common, object-oriented and easy-to-use PWA software that is being developed at the Ruhr-Universitaet Bochum. This presentation summarizes recent activities of the Crystal Barrel (LEAR) Collaboration.

  17. CMB Constraints On The Thermal WIMP Annihilation Cross Section

    CERN Document Server

    Steigman, Gary

    2015-01-01

    A thermal relic, often referred to as a weakly interacting massive particle (WIMP), is a particle produced during the early evolution of the Universe whose relic abundance (e.g., at present) depends only on its mass and its thermally averaged annihilation cross section (annihilation rate factor) sigma*v_ann. Late time WIMP annihilation has the potential to affect the cosmic microwave background (CMB) power spectrum. Current observational constraints on the absence of such effects provide bounds on the mass and the annihilation cross section of relic particles that may, but need not, be dark matter candidates. For a WIMP that is a dark matter candidate, the CMB constraint sets an upper bound to the annihilation cross section, leading to a lower bound to their mass that depends on whether or not the WIMP is its own antiparticle. For a self-conjugate WIMP, m_min = 50f GeV, where f is an electromagnetic energy efficiency factor. For a non self-conjugate WIMP, the minimum mass is a factor of two larger. For a WIMP t...

  18. The Isotropic Radio Background and Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Hooper, Dan [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Belikov, Alexander V. [Institut d'Astrophysique (France); Jeltema, Tesla E. [Univ. of California, Santa Cruz, CA (United States); Linden, Tim [Univ. of California, Santa Cruz, CA (United States); Profumo, Stefano [Univ. of California, Santa Cruz, CA (United States); Slatyer, Tracy R. [Princeton Univ., Princeton, NJ (United States)

    2012-11-01

    Observations by ARCADE-2 and other telescopes sensitive to low frequency radiation have revealed the presence of an isotropic radio background with a hard spectral index. The intensity of this observed background is found to exceed the flux predicted from astrophysical sources by a factor of approximately 5-6. In this article, we consider the possibility that annihilating dark matter particles provide the primary contribution to the observed isotropic radio background through the emission of synchrotron radiation from electron and positron annihilation products. For reasonable estimates of the magnetic fields present in clusters and galaxies, we find that dark matter could potentially account for the observed radio excess, but only if it annihilates mostly to electrons and/or muons, and only if it possesses a mass in the range of approximately 5-50 GeV. For such models, the annihilation cross section required to normalize the synchrotron signal to the observed excess is sigma v ~ (0.4-30) x 10^-26 cm^3/s, similar to the value predicted for a simple thermal relic (sigma v ~ 3 x 10^-26 cm^3/s). We find that in any scenario in which dark matter annihilations are responsible for the observed excess radio emission, a significant fraction of the isotropic gamma ray background observed by Fermi must result from dark matter as well.

  19. Contributions to cosmic reionization from dark matter annihilation and decay

    Science.gov (United States)

    Liu, Hongwan; Slatyer, Tracy R.; Zavala, Jesús

    2016-09-01

    Dark matter annihilation or decay could have a significant impact on the ionization and thermal history of the universe. In this paper, we study the potential contribution of dark matter annihilation (s-wave- or p-wave-dominated) or decay to cosmic reionization, via the production of electrons, positrons and photons. We map out the possible perturbations to the ionization and thermal histories of the universe due to dark matter processes, over a broad range of velocity-averaged annihilation cross sections/decay lifetimes and dark matter masses. We have employed recent numerical studies of the efficiency with which annihilation/decay products induce heating and ionization in the intergalactic medium, and in this work extended them down to a redshift of 1+z = 4 for two different reionization scenarios. We also improve on earlier studies by using the results of detailed structure formation models of dark matter haloes and subhaloes that are consistent with up-to-date N-body simulations, with estimates on the uncertainties that originate from the smallest scales. We find that for dark matter models that are consistent with experimental constraints, a contribution of more than 10% to the ionization fraction at reionization is disallowed for all annihilation scenarios. Such a contribution is possible only for decays into electron/positron pairs, for light dark matter with mass m_χ ≲ 100 MeV, and a decay lifetime τ_χ ~ 10^24-10^25 s.

  20. The Effects of Dark Matter Annihilation on Cosmic Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Kaurov, Alexander A. [Chicago U., Astron. Astrophys. Ctr.; Hooper, Dan [Chicago U., EFI; Gnedin, Nickolay Y. [Chicago U., KICP

    2015-12-01

    We revisit the possibility of constraining the properties of dark matter (DM) by studying the epoch of cosmic reionization. Previous studies have shown that DM annihilation was unlikely to have provided a large fraction of the photons that ionized the universe, but instead played a subdominant role relative to stars and quasars. The DM, however, begins to efficiently annihilate with the formation of primordial microhalos at z ~ 100-200, much earlier than the formation of the first stars. Therefore, if DM annihilation ionized the universe at even the percent level over the interval z ~ 20-100, it can leave a significant imprint on the global optical depth, τ. Moreover, we show that cosmic microwave background (CMB) polarization data and future 21 cm measurements will enable us to more directly probe the DM contribution to the optical depth. In order to compute the annihilation rate throughout the epoch of reionization, we adopt the latest results from structure formation studies and explore the impact of various free parameters on our results. We show that future measurements could make it possible to place constraints on the dark matter's annihilation cross section that are at a level comparable to those obtained from the observations of dwarf galaxies, cosmic ray measurements, and studies of recombination.

  1. Memory annihilation of structured maps in bidirectional associative memories.

    Science.gov (United States)

    Kumar, S

    2000-01-01

    Structured sets comprise Boolean vectors with equal pair-wise Hamming distances, h. An external vector, if it exists at an equidistance of h/2 from each vector of the structured set, is called the centroid of the set. A structured map is a one-one onto mapping between structured sets. It is a set of associations between Boolean vectors, where both domain and range vectors are drawn from structured sets. Associations between centroids are called centroidal associations. In this paper we show that when structured maps are encoded into bidirectional associative memories using outer-product correlation encoding, the memory of these associations annihilates under certain mild conditions. When annihilation occurs, the centroidal association emerges as a stable association, and we call it an alien attractor. For the special case of maps where h=2, self-annihilation can take place when either the domain or range dimensions are greater than five. In fact, we show that for dimensions greater than eight, as few as three associations suffice for self-annihilation. As an example shows, annihilation occurs even for the case of bipolar decoding which is well known for its improved error correction capability in such associative memory models.

  2. Photon from the annihilation process with CGC in the pA collision

    CERN Document Server

    Benic, Sanjin

    2016-01-01

    We discuss the photon production in the pA collision in a framework of the color glass condensate (CGC). We work in a regime where the color density ρ_A of the nucleus is large enough to justify the CGC treatment, while soft gluons in the proton are dominant over quarks but do not yet belong to the CGC regime. In this semi-CGC regime for the proton, we can still perform a systematic expansion in powers of the color density ρ_p of the proton. The leading-order contributions to the photon production appear from the Bremsstrahlung and the annihilation processes involving quarks from a gluon sourced by ρ_p. We analytically derive an expression for the annihilation contribution to the photon production rate and numerically find that a thermal exponential form gives the best fit with an effective temperature ~0.5 Q_s, where Q_s is the saturation momentum of the nucleus.

  3. Probability on real Lie algebras

    CERN Document Server

    Franz, Uwe

    2016-01-01

    This monograph is a progressive introduction to non-commutativity in probability theory, summarizing and synthesizing recent results about classical and quantum stochastic processes on Lie algebras. In the early chapters, focus is placed on concrete examples of the links between algebraic relations and the moments of probability distributions. The subsequent chapters are more advanced and deal with Wigner densities for non-commutative couples of random variables, non-commutative stochastic processes with independent increments (quantum Lévy processes), and the quantum Malliavin calculus. This book will appeal to advanced undergraduate and graduate students interested in the relations between algebra, probability, and quantum theory. It also addresses a more advanced audience by covering other topics related to non-commutativity in stochastic calculus, Lévy processes, and the Malliavin calculus.

  4. Introduction to probability

    CERN Document Server

    Roussas, George G

    2006-01-01

    Roussas's Introduction to Probability features exceptionally clear explanations of the mathematics of probability theory and explores its diverse applications through numerous interesting and motivational examples. It provides a thorough introduction to the subject for professionals and advanced students taking their first course in probability. The content is based on the introductory chapters of Roussas's book, An Intoduction to Probability and Statistical Inference, with additional chapters and revisions. Written by a well-respected author known for great exposition an

  5. Non-Archimedean Probability

    NARCIS (Netherlands)

    Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

    2013-01-01

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned prob

  6. Interpretations of probability

    CERN Document Server

    Khrennikov, Andrei

    2009-01-01

    This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.

  7. Dependent Probability Spaces

    Science.gov (United States)

    Edwards, William F.; Shiflett, Ray C.; Shultz, Harris

    2008-01-01

    The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…

  8. Laboratory-Tutorial activities for teaching probability

    CERN Document Server

    Wittmann, M C; Morgan, J T; Feeley, Roger E.; Morgan, Jeffrey T.; Wittmann, Michael C.

    2006-01-01

    We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called Intuitive Quantum Physics. Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We describe a set of activities used to teach concepts of probability and probability density. Rudimentary knowledge of mechanics is needed for one activity, but otherwise the material requires no additional preparation. Extensions of the activities include relating probability density to potential energy graphs for certain "touchstone" examples. Students have difficulties learning the target concepts, such as comparing the ratio of time in a region to total time in all regions. Instead, they often focus on edge effects, pattern match to previously studied situations, reason about necessary but incomplete macroscopic elements of the system, use the gambler's fallacy, and use expectati...
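
    One "touchstone" relation behind activities of this kind connects a classical probability density to the fraction of time spent in a region: for periodic motion with period T, the probability of finding the particle in [x, x+dx] is 2 dt / T. As a hedged worked example (the standard classical harmonic oscillator, not necessarily an example used in the course itself):

        P(x)\,dx \;=\; \frac{2\,dt}{T} \;=\; \frac{2\,dx}{T\,|v(x)|},
        \qquad
        P_{\rm HO}(x) \;=\; \frac{1}{\pi\sqrt{A^{2}-x^{2}}} \quad (|x|<A),

    so the density peaks near the turning points x = ±A, where the potential energy is largest and the speed is smallest, which is the link to potential energy graphs mentioned above.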

  9. Models of pair annihilation in 1E 1740.7-2942 and the HEAO 1 A-4 annihilation source

    Science.gov (United States)

    Maciolek-Niedzwiecki, Andrzej; Zdziarski, Andrzej

    1994-01-01

    We study possible models of two Galactic sources of transient pair annihilation radiation, 1E 1740.7-2942 and a source observed by High Energy Astronomy Observatory (HEAO) 1 A-4. We fit the observed spectral features by thermal annihilation spectra and find that the redshifts obtained by us are much larger than those obtained from fitting Gaussian lines centered on 511 keV. This effect, which is due to the net blueshift (with respect to 511 keV) of the annihilation spectrum due to the thermal energies of pairs, puts strong constraints on models of sources. We consider those constraints first without considering the mechanism of positron production. From the shape of the observed spectra, we are able to rule out both spherical clouds and layers above cold matter as possible source geometries. The observed spectra are compatible with two source geometries: (1) a nearly face-on disk in the Kerr metric and (2) a jet close to a black hole. We consider, then, the origin of the pairs. Theories of both thermal and nonthermal pair equilibria predict that photon-pair production is unable to produce annihilation features that contain as much as half of the bolometric luminosity, which is observed. A possible solution to this problem is obscuration of a nonthermal source (in which pairs are produced by photon-photon collisions) and an outflow of pairs to an unobscured region. This makes annihilation in a jet the most likely model of the considered sources.

  10. A critical reevaluation of radio constraints on annihilating dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Cholis, Ilias; Hooper, Dan; Linden, Tim

    2015-04-01

    A number of groups have employed radio observations of the Galactic center to derive stringent constraints on the annihilation cross section of weakly interacting dark matter. In this paper, we show that electron energy losses in this region are likely to be dominated by inverse Compton scattering on the interstellar radiation field, rather than by synchrotron, considerably relaxing the constraints on the dark matter annihilation cross section compared to previous works. Strong convective winds, which are well motivated by recent observations, may also significantly weaken synchrotron constraints. After taking these factors into account, we find that radio constraints on annihilating dark matter are orders of magnitude less stringent than previously reported, and are generally weaker than those derived from current gamma-ray observations.
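
    The competition invoked here can be summarized by the textbook Thomson-limit energy-loss rates for relativistic electrons (quoted for orientation, not taken from the paper): both synchrotron and inverse Compton losses scale with the square of the electron energy, and their ratio is fixed by the magnetic versus radiation energy densities,

        b_{\rm syn} \;=\; \tfrac{4}{3}\,\sigma_{T}\,c\,\gamma^{2}\,\frac{B^{2}}{8\pi},
        \qquad
        b_{\rm IC} \;=\; \tfrac{4}{3}\,\sigma_{T}\,c\,\gamma^{2}\,U_{\rm rad},
        \qquad
        \frac{b_{\rm syn}}{b_{\rm IC}} \;=\; \frac{B^{2}/8\pi}{U_{\rm rad}},

    so where the interstellar radiation field energy density dominates over B^2/8π, only a correspondingly small fraction of the injected electron energy emerges as synchrotron radio emission (Klein-Nishina corrections are neglected in this sketch).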

  11. Consequences of dark matter self-annihilation for galaxy formation

    CERN Document Server

    Natarajan, Priyamvada; Bertone, Gianfranco

    2007-01-01

    Galaxy formation requires a process that continually heats gas and quenches star formation in order to reproduce the observed shape of the luminosity function of bright galaxies. To accomplish this, current models invoke heating from supernovae, and energy injection from active galactic nuclei. However, observations of radio-loud active galactic nuclei suggest that their feedback is likely to not be as efficient as required, signaling the need for additional heating processes. We propose the self-annihilation of weakly interacting massive particles that constitute dark matter as a steady source of heating. In this paper, we explore the circumstances under which this process may provide the required energy input. To do so, dark matter annihilations are incorporated into a galaxy formation model within the Millennium cosmological simulation. Energy input from self-annihilation can compensate for all the required gas cooling and reproduce the observed galaxy luminosity function only for what appear to be extreme...

  12. Generalized creation and annihilation operators via complex nonlinear Riccati equations

    Science.gov (United States)

    Schuch, Dieter; Castaños, Octavio; Rosas-Ortiz, Oscar

    2013-06-01

    Based on Gaussian wave packet solutions of the time-dependent Schrödinger equation, a generalization of the conventional creation and annihilation operators and the corresponding coherent states can be obtained. This generalization includes systems where also the width of the coherent states is time-dependent as they occur for harmonic oscillators with time-dependent frequency or systems in contact with a dissipative environment. The key point is the replacement of the frequency ω0 that occurs in the usual definition of the creation/annihilation operator by a complex time-dependent function that fulfils a nonlinear Riccati equation. This equation and its solutions depend on the system under consideration and on the (complex) initial conditions. Formal similarities also exist with supersymmetric quantum mechanics. The generalized creation and annihilation operators also allow one to construct exact analytic solutions of the free motion Schrödinger equation in terms of Hermite polynomials with time-dependent variable.
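
    Schematically, and with conventions that may differ from the paper's own, the construction replaces the constant frequency of the usual operator a = sqrt(m omega_0 / 2 hbar) (x + i p / (m omega_0)) by a complex time-dependent quantity C(t) obeying the nonlinear Riccati equation

        \dot{C}(t) + C^{2}(t) + \omega^{2}(t) \;=\; 0,
        \qquad
        C = i\,\omega_{0} \ \ \text{for}\ \ \omega(t) = \omega_{0} = \text{const},

    where the constant solution C = i omega_0 reproduces the conventional creation and annihilation operators, while genuinely time-dependent solutions (for time-dependent frequency or dissipative dynamics) yield coherent states with time-dependent width.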

  13. Dark matter annihilation with s-channel internal Higgsstrahlung

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Jason; Liao, Jiajun, E-mail: liaoj@hawaii.edu; Marfatia, Danny

    2016-08-10

    We study the scenario of fermionic dark matter that annihilates to standard model fermions through an s-channel axial vector mediator. We point out that the well-known chirality suppression of the annihilation cross section can be alleviated by s-channel internal Higgsstrahlung. The shapes of the cosmic ray spectra are identical to that of t-channel internal Higgsstrahlung in the limit of a heavy mediating particle. Unlike the general case of t-channel bremsstrahlung, s-channel Higgsstrahlung can be the dominant annihilation process even for Dirac dark matter. Since the s-channel mediator can be a standard model singlet, collider searches for the mediator are easily circumvented.

  14. Notes on symmetric and exterior depth and annihilator numbers

    Directory of Open Access Journals (Sweden)

    Gesa Kampf

    2008-11-01

    We survey and compare invariants of modules over the polynomial ring and the exterior algebra. In our considerations, we focus on the depth. The exterior analogue of depth was first introduced by Aramova, Avramov and Herzog. We state similarities between the two notions of depth and exhibit their relation in the case of squarefree modules. Work of Conca, Herzog and Hibi and Trung, respectively, shows that annihilator numbers are a meaningful generalization of depth over the polynomial ring. We introduce and study annihilator numbers over the exterior algebra. Despite some minor differences in the definition, those invariants show common behavior. In both situations a positive linear combination of the annihilator numbers can be used to bound the symmetric and exterior graded Betti numbers, respectively, from above.

  15. A Critical Reevaluation of Radio Constraints on Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Cholis, Ilias [Fermilab; Hooper, Dan [Fermilab; Linden, Tim [Chicago U., KICP

    2015-04-03

    A number of groups have employed radio observations of the Galactic center to derive stringent constraints on the annihilation cross section of weakly interacting dark matter. In this paper, we show that electron energy losses in this region are likely to be dominated by inverse Compton scattering on the interstellar radiation field, rather than by synchrotron, considerably relaxing the constraints on the dark matter annihilation cross section compared to previous works. Strong convective winds, which are well motivated by recent observations, may also significantly weaken synchrotron constraints. After taking these factors into account, we find that radio constraints on annihilating dark matter are orders of magnitude less stringent than previously reported, and are generally weaker than those derived from current gamma-ray observations.

  16. Molecular model for annihilation rates in positron complexes

    Energy Technology Data Exchange (ETDEWEB)

    Assafrao, Denise [Laboratorio de Atomos e Moleculas Especiais, Departamento de Fisica, ICEx, Universidade Federal de Minas Gerais, P.O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom); Walters, H.R. James [Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom); Mohallem, Jose R. [Laboratorio de Atomos e Moleculas Especiais, Departamento de Fisica, ICEx, Universidade Federal de Minas Gerais, P.O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom)], E-mail: rachid@fisica.ufmg.br

    2008-02-15

    The molecular approach for positron interaction with atoms is developed further. Potential energy curves for positron motion are obtained. Two procedures accounting for the nonadiabatic effective positron mass are introduced for calculating annihilation rate constants. The first one takes the bound-state energy eigenvalue as an input parameter. The second is a self-contained and self-consistent procedure. The methods are tested with quite different states of the small complexes HPs, e^+He (electronic triplet) and e^+Be (electronic singlet and triplet). For states yielding the positronium cluster, the annihilation rates are quite stable, irrespective of the accuracy in binding energies. For the e^+Be states, annihilation rates are larger and more consistent with qualitative predictions than previously reported ones.

  17. Positron Annihilation Induced Auger Electron Spectroscopy of Inner Shell Transitions Using Time-Of-Flight Technique

    Science.gov (United States)

    Xie, Shuping; Jiang, Neng; Weiss, A. H.

    2003-03-01

    Positron annihilation induced Auger electron spectroscopy (PAES) has been shown to have unique advantages over conventional electron collision induced Auger techniques, including the ability to eliminate the secondary electron background and selectively probe the top-most atomic layer on the sample surface. Here we report on the development of a new time-of-flight (TOF) spectrometer which combines high-efficiency magnetic transport and parallel energy measurement with high resolution by using an innovative timing method. The new TOF-PAES system was used to make the first quantitative comparative measurements of the Auger intensities associated with the annihilation of positrons with the deep core levels (1s) of S KLL (180 eV), C KLL (270 eV), N KLL (360 eV), and O KLL (510 eV). Experimental results for Auger probabilities at the outer core levels (3s, 3p) of Cu M2,3VV (60 eV) and M1VV (105 eV) are compared with the theoretical values of Jensen and Weiss. A quantitative study of the adsorbate process on the Cu surface is performed and concentration changes of surface components are obtained. These results demonstrate that TOF-PAES can be used to obtain quantitative, top-layer-specific information from chemically important elements, including those with relatively deep core levels (e.g. C and O).

  18. Philosophy and probability

    CERN Document Server

    Childers, Timothy

    2013-01-01

    Probability is increasingly important for our understanding of the world. What is probability? How do we model it, and how do we use it? Timothy Childers presents a lively introduction to the foundations of probability and to philosophical issues it raises. He keeps technicalities to a minimum, and assumes no prior knowledge of the subject. He explains the main interpretations of probability - frequentist, propensity, classical, Bayesian, and objective Bayesian - and uses stimulating examples to bring the subject to life. All students of philosophy will benefit from an understanding of probability,

  19. Dynamical Simulation of Probabilities

    Science.gov (United States)

    Zak, Michail

    1996-01-01

    It has been demonstrated that classical probabilities, and in particular, a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities and their link to quantum computations are discussed. Special attention was focused upon coupled stochastic processes, defined in terms of conditional probabilities, for which joint probability does not exist. Simulations of quantum probabilities are also discussed.

  20. Sensitivity of HAWC to high-mass dark matter annihilations

    Science.gov (United States)

    Abeysekara, A. U.; Alfaro, R.; Alvarez, C.; Álvarez, J. D.; Arceo, R.; Arteaga-Velázquez, J. C.; Ayala Solares, H. A.; Barber, A. S.; Baughman, B. M.; Bautista-Elivar, N.; Becerra Gonzalez, J.; Belmont, E.; BenZvi, S. Y.; Berley, D.; Bonilla Rosales, M.; Braun, J.; Caballero-Lopez, R. A.; Caballero-Mora, K. S.; Carramiñana, A.; Castillo, M.; Cotti, U.; Cotzomi, J.; de la Fuente, E.; De León, C.; DeYoung, T.; Diaz Hernandez, R.; Diaz-Cruz, L.; Díaz-Vélez, J. C.; Dingus, B. L.; DuVernois, M. A.; Ellsworth, R. W.; Fiorino, D. W.; Fraija, N.; Galindo, A.; Garfias, F.; González, M. M.; Goodman, J. A.; Grabski, V.; Gussert, M.; Hampel-Arias, Z.; Harding, J. P.; Hui, C. M.; Hüntemeyer, P.; Imran, A.; Iriarte, A.; Karn, P.; Kieda, D.; Kunde, G. J.; Lara, A.; Lauer, R. J.; Lee, W. H.; Lennarz, D.; León Vargas, H.; Linares, E. C.; Linnemann, J. T.; Longo, M.; Luna-Garcia, R.; Marinelli, A.; Martinez, H.; Martinez, O.; Martínez-Castro, J.; Matthews, J. A. J.; McEnery, J.; Mendoza Torres, E.; Miranda-Romagnoli, P.; Moreno, E.; Mostafá, M.; Nellen, L.; Newbold, M.; Noriega-Papaqui, R.; Oceguera-Becerra, T.; Patricelli, B.; Pelayo, R.; Pérez-Pérez, E. G.; Pretz, J.; Rivière, C.; Rosa-González, D.; Ryan, J.; Salazar, H.; Salesa, F.; Sanchez, F. E.; Sandoval, A.; Schneider, M.; Silich, S.; Sinnis, G.; Smith, A. J.; Sparks Woodle, K.; Springer, R. W.; Taboada, I.; Toale, P. A.; Tollefson, K.; Torres, I.; Ukwatta, T. N.; Villaseñor, L.; Weisgarber, T.; Westerhoff, S.; Wisher, I. G.; Wood, J.; Yodh, G. B.; Younk, P. W.; Zaborov, D.; Zepeda, A.; Zhou, H.; Abazajian, K. N.; Milagro Collaboration

    2014-12-01

    The High Altitude Water Cherenkov (HAWC) observatory is a wide field-of-view detector sensitive to gamma rays of 100 GeV to a few hundred TeV. Located in central Mexico at 19° North latitude and 4100 m above sea level, HAWC will observe gamma rays and cosmic rays with an array of water Cherenkov detectors. The full HAWC array is scheduled to be operational in Spring 2015. In this paper, we study the HAWC sensitivity to the gamma-ray signatures of high-mass (multi-TeV) dark matter annihilation. The HAWC observatory will be sensitive to diverse searches for dark matter annihilation, including annihilation from extended dark matter sources, the diffuse gamma-ray emission from dark matter annihilation, and gamma-ray emission from nonluminous dark matter subhalos. Here we consider the HAWC sensitivity to a subset of these sources, including dwarf galaxies, the M31 galaxy, the Virgo cluster, and the Galactic center. We simulate the HAWC response to gamma rays from these sources in several well-motivated dark matter annihilation channels. If no gamma-ray excess is observed, we show the limits HAWC can place on the dark matter cross section from these sources. In particular, in the case of dark matter annihilation into gauge bosons, HAWC will be able to detect a narrow range of dark matter masses to cross sections below thermal. HAWC should also be sensitive to nonthermal cross sections for masses up to nearly 1000 TeV. The constraints placed by HAWC on the dark matter cross section from known sources should be competitive with current limits in the mass range where HAWC has similar sensitivity. HAWC can additionally explore higher dark matter masses than are currently constrained.

  1. Positron annihilation study of vacancy-type defects in Al single crystal foils with the tweed structures across the surface

    Energy Technology Data Exchange (ETDEWEB)

    Kuznetsov, Pavel, E-mail: kpv@ispms.tsc.ru [National Research Tomsk Polytechnic University, Tomsk, 634050 (Russian Federation); Institute of Strength Physics and Materials Science SB RAS, Tomsk, 634055 (Russian Federation); Cizek, Jacub, E-mail: jcizek@mbox.troja.mff.cuni.cz; Hruska, Petr [Charles University in Prague, Praha, CZ-18000 Czech Republic (Czech Republic); Anwad, Wolfgang [Institut für Strahlenphysik, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, D-01314 Germany (Germany); Bordulev, Yuri; Lider, Andrei; Laptev, Roman [National Research Tomsk Polytechnic University, Tomsk, 634050 (Russian Federation); Mironov, Yuri [Institute of Strength Physics and Materials Science SB RAS, Tomsk, 634055 (Russian Federation)

    2015-10-27

    The vacancy-type defects in aluminum single crystal foils after a series of cyclic tension tests were studied using positron annihilation. Two components were identified in the positron lifetime spectra, associated with the annihilation of free positrons and of positrons trapped at dislocations. With increasing number of cycles the dislocation density first increases and reaches a maximum value at N = 10 000 cycles, but then it gradually decreases and at N = 70 000 cycles falls to the level typical of the virgin samples. Direct evidence for the formation of a two-phase system “defective near-surface layer/base Al crystal” in aluminum foils under cyclic tension was obtained using a variable-energy positron beam.

  2. Possible complex annihilation and B -> K pi direct CP asymmetry

    CERN Document Server

    Chay, Junegone; Mishima, Satoshi

    2007-01-01

    We point out that a sizable strong phase could be generated from the penguin annihilation in the soft-collinear effective theory for B meson decays. Keeping a small scale suppressed by O(Lambda/m_b), Lambda being a hadronic scale and m_b the b quark mass, in the denominators of internal particle propagators without expansion, the resultant strong phase can accommodate the data of the B^0 -> K^± π^∓ direct CP asymmetry. Our study reconciles the opposite conclusions on the real or complex penguin annihilation amplitude drawn in the soft-collinear effective theory and in the perturbative QCD approach based on the k_T factorization theorem.

  3. Significant enhancement of neutralino dark matter annihilation from electroweak bremsstrahlung.

    Science.gov (United States)

    Bringmann, Torsten; Calore, Francesca

    2014-02-21

    Indirect searches for the cosmological dark matter have become ever more competitive during the past years. Here, we report the first full calculation of leading electroweak corrections to the annihilation rate of supersymmetric neutralino dark matter. We find that these corrections can be huge, partially due to contributions that have been overlooked so far. Our results imply a significantly enhanced discovery potential of this well motivated dark matter candidate with current and upcoming cosmic ray experiments, in particular for gamma rays and models with somewhat small annihilation rates at the tree level.

  4. Heavy dark matter annihilation from effective field theory.

    Science.gov (United States)

    Ovanesyan, Grigory; Slatyer, Tracy R; Stewart, Iain W

    2015-05-29

    We formulate an effective field theory description for SU(2)_{L} triplet fermionic dark matter by combining nonrelativistic dark matter with gauge bosons in the soft-collinear effective theory. For a given dark matter mass, the annihilation cross section to line photons is obtained with 5% precision by simultaneously including Sommerfeld enhancement and the resummation of electroweak Sudakov logarithms at next-to-leading logarithmic order. Using these results, we present more accurate and precise predictions for the gamma-ray line signal from annihilation, updating both existing constraints and the reach of future experiments.

  5. AMS-02 antiprotons from annihilating or decaying dark matter

    Directory of Open Access Journals (Sweden)

    Koichi Hamaguchi

    2015-07-01

    Recently the AMS-02 experiment reported an excess of cosmic ray antiprotons over the expected astrophysical background. We interpret the excess as a signal from annihilating or decaying dark matter and find that the observed spectrum is well fitted by adding contributions from the annihilation or decay of dark matter with mass of O(TeV) or larger. Interestingly, Wino dark matter with a mass of around 3 TeV, whose thermal relic abundance is consistent with the present dark matter abundance, can explain the antiproton excess. We also discuss the implications for decaying gravitino dark matter with R-parity violation.

  6. Annihilation physics of exotic galactic dark matter particles

    Science.gov (United States)

    Stecker, F. W.

    1990-01-01

    Various theoretical arguments make exotic heavy neutral weakly interacting fermions, particularly those predicted by supersymmetry theory, attractive candidates for making up the large amount of unseen gravitating mass in galactic halos. Such particles can annihilate with each other, producing secondary particles of cosmic-ray energies, among which are antiprotons, positrons, neutrinos, and gamma-rays. Spectra and fluxes of these annihilation products can be calculated, partly by making use of positron electron collider data and quantum chromodynamic models of particle production derived therefrom. These spectra may provide detectable signatures of exotic particle remnants of the big bang.

  7. Remote forcing annihilates barrier layer in southeastern Arabian Sea

    Digital Repository Service at National Institute of Oceanography (India)

    Shenoi, S.S.C.; Shankar, D.; Shetye, S.R.

    thick barrier layer (BL) exists during March–April owing to a surface layer of low-salinity waters advected earlier during December–January from the Bay of Bengal. The BL is almost annihilated by 7 April owing to upwelling. The relic BL that survives... is annihilated later in May by upwelling, by the inflow of high-salinity waters from the north, and by mixing due to stronger winds, which deepen the mixed layer. We present evidence from satellite data and arguments based on existing theories to show...

  8. Measuring electron-positron annihilation radiation from laser plasma interactions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hui; Tommasini, R. [Lawrence Livermore National Laboratory, Livermore, California 94551 (United States); Seely, J.; Szabo, C. I.; Feldman, U.; Pereira, N. [Artep Inc., Ellicott City, Maryland 21042 (United States); Gregori, G.; Falk, K.; Mithen, J.; Murphy, C. D. [Clarendon Laboratory, University of Oxford, Oxford OX1 3PU (United Kingdom)

    2012-10-15

    We investigated various diagnostic techniques to measure the 511 keV annihilation radiations. These include step-wedge filters, transmission crystal spectroscopy, single-hit CCD detectors, and streaked scintillating detection. While none of the diagnostics recorded conclusive results, the step-wedge filter that is sensitive to the energy range between 100 keV and 700 keV shows a signal around 500 keV that is clearly departing from a pure Bremsstrahlung spectrum and that we ascribe to annihilation radiation.

  9. On the effective operators for Dark Matter annihilations

    Energy Technology Data Exchange (ETDEWEB)

    Simone, Andrea De; Thamm, Andrea [CERN, Theory Division, CH-1211 Geneva 23 (Switzerland); Monin, Alexander [Institut de Théorie des Phénomènes Physiques, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne (Switzerland); Urbano, Alfredo, E-mail: andrea.desimone@sissa.it, E-mail: alexander.monin@epfl.ch, E-mail: andrea.thamm@cern.ch, E-mail: alfredo.urbano@sissa.it [SISSA, via Bonomea 265, I-34136 Trieste (Italy)

    2013-02-01

    We consider effective operators describing Dark Matter (DM) interactions with Standard Model fermions. In the non-relativistic limit of the DM field, the operators can be organized according to their mass dimension and their velocity behaviour, i.e. whether they describe s- or p-wave annihilations. The analysis is carried out for self-conjugate DM (real scalar or Majorana fermion). In this case, the helicity suppression at work in the annihilation into fermions is lifted by electroweak bremsstrahlung. We construct and study all dimension-8 operators encoding such an effect. These results are of interest in indirect DM searches.

  10. On the effective operators for Dark Matter annihilations

    CERN Document Server

    De Simone, Andrea; Thamm, Andrea; Urbano, Alfredo

    2013-01-01

    We consider effective operators describing Dark Matter (DM) interactions with Standard Model fermions. In the non-relativistic limit of the DM field, the operators can be organized according to their mass dimension and their velocity behaviour, i.e. whether they describe s- or p-wave annihilations. The analysis is carried out for self-conjugate DM (real scalar or Majorana fermion). In this case, the helicity suppression at work in the annihilation into fermions is lifted by electroweak bremsstrahlung. We construct and study all dimension-8 operators encoding such an effect. These results are of interest in indirect DM searches.

  11. Bremsstrahlung signatures of dark matter annihilation in the Sun

    CERN Document Server

    Fukushima, Keita; Kumar, Jason; Marfatia, Danny

    2012-01-01

    The nonrelativistic annihilation of Majorana dark matter in the Sun to a pair of light fermions is chirality-suppressed. Annihilation to 3-body final states $\ell^+ f^- V$, where $V = W, Z, \gamma$, and $\ell$ and $f$ are light fermions (that may be the same), becomes dominant since bremsstrahlung relaxes the chirality suppression. We evaluate the neutrino spectra at the source, including spin and helicity dependent effects, and assess the detectability of each significant bremsstrahlung channel at IceCube/DeepCore. We also show how to combine the sensitivities to the dark matter-nucleon scattering cross section in individual channels, since typically several channels contribute in models.

  12. Recent Developments in Applied Probability and Statistics

    CERN Document Server

    Devroye, Luc; Kohler, Michael; Korn, Ralf

    2010-01-01

    This book presents surveys on recent developments in applied probability and statistics. The contributions include topics such as nonparametric regression and density estimation, option pricing, probabilistic methods for multivariate interpolation, robust graphical modelling and stochastic differential equations. Due to its broad coverage of different topics the book offers an excellent overview of recent developments in applied probability and statistics.

  13. Effect of Voids on Angular Correlation of Positron Annihilation Photons in Molybdenum

    DEFF Research Database (Denmark)

    Mogensen, O. E.; Petersen, K.; Cotterill, R. M. J.

    1972-01-01

    POSITRON annihilation investigations of defects in crystals have shown that for sufficiently high defect concentrations (typically above about 10−6) all positrons become trapped in the defects before annihilation, thus changing the characteristics of the annihilation process. For example, trappin...

  14. Probability and radical behaviorism

    Science.gov (United States)

    Espinosa, James M.

    1992-01-01

    The concept of probability appears to be very important in the radical behaviorism of Skinner. Yet, it seems that this probability has not been accurately defined and is still ambiguous. I give a strict, relative frequency interpretation of probability and its applicability to the data from the science of behavior as supplied by cumulative records. Two examples of stochastic processes are given that may model the data from cumulative records that result under conditions of continuous reinforcement and extinction, respectively. PMID:22478114

  15. Probability and radical behaviorism

    OpenAIRE

    Espinosa, James M.

    1992-01-01

    The concept of probability appears to be very important in the radical behaviorism of Skinner. Yet, it seems that this probability has not been accurately defined and is still ambiguous. I give a strict, relative frequency interpretation of probability and its applicability to the data from the science of behavior as supplied by cumulative records. Two examples of stochastic processes are given that may model the data from cumulative records that result under conditions of continuous reinforc...

  16. PROBABILITY AND STATISTICS.

    Science.gov (United States)

    (*STATISTICAL ANALYSIS, REPORTS), (*PROBABILITY, REPORTS), INFORMATION THEORY, DIFFERENTIAL EQUATIONS, STATISTICAL PROCESSES, STOCHASTIC PROCESSES, MULTIVARIATE ANALYSIS, DISTRIBUTION THEORY, DECISION THEORY, MEASURE THEORY, OPTIMIZATION

  17. Real analysis and probability

    CERN Document Server

    Ash, Robert B; Lukacs, E

    1972-01-01

    Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var

  18. Spatially resolved positron annihilation spectroscopy on friction stir weld induced defects.

    Science.gov (United States)

    Hain, Karin; Hugenschmidt, Christoph; Pikart, Philip; Böni, Peter

    2010-04-01

    A friction stir welded (FSW) Al alloy sample was investigated by Doppler broadening spectroscopy (DBS) of the positron annihilation line. The spatially resolved defect distribution showed that the material in the joint zone becomes completely annealed during the welding process at the shoulder of the FSW tool, whereas at the tip, annealing is outweighed by the deterioration of the material due to the tool movement. This might be responsible for the increased probability of cracking in the heat affected zone of friction stir welds. Examination of a material pairing of steel S235 and the Al alloy Silafont36 by coincident Doppler broadening spectroscopy (CDBS) indicates the formation of annealed steel clusters in the Al alloy component of the sample. The clear visibility of Fe in the CDB spectra is explained by the very efficient trapping at the interface between steel cluster and bulk.

  19. Spatially resolved positron annihilation spectroscopy on friction stir weld induced defects

    Directory of Open Access Journals (Sweden)

    Karin Hain, Christoph Hugenschmidt, Philip Pikart and Peter Böni

    2010-01-01

    A friction stir welded (FSW) Al alloy sample was investigated by Doppler broadening spectroscopy (DBS) of the positron annihilation line. The spatially resolved defect distribution showed that the material in the joint zone becomes completely annealed during the welding process at the shoulder of the FSW tool, whereas at the tip, annealing is outweighed by the deterioration of the material due to the tool movement. This might be responsible for the increased probability of cracking in the heat affected zone of friction stir welds. Examination of a material pairing of steel S235 and the Al alloy Silafont36 by coincident Doppler broadening spectroscopy (CDBS) indicates the formation of annealed steel clusters in the Al alloy component of the sample. The clear visibility of Fe in the CDB spectra is explained by the very efficient trapping at the interface between steel cluster and bulk.

  20. Annihilation amplitudes and factorization in B to phi Kstar

    CERN Document Server

    Epele, L N; Szynkman, A

    2003-01-01

    We study the decay $B^\pm \to \phi K^{\ast\pm}$, followed by the decay of the outgoing vector mesons into two pseudoscalars. The analysis of angular distributions of the decay products is shown to provide useful information about the annihilation contributions and possible tests of factorization.

  1. The Isotropic Radio Background and Annihilating Dark Matter

    CERN Document Server

    Hooper, Dan; Jeltema, Tesla E; Linden, Tim; Profumo, Stefano; Slatyer, Tracy R

    2012-01-01

    Observations by ARCADE-2 and other telescopes sensitive to low frequency radiation have revealed the presence of an isotropic radio background with a hard spectral index. The intensity of this observed background is found to exceed the flux predicted from astrophysical sources by a factor of approximately 5-6. In this article, we consider the possibility that annihilating dark matter particles provide the primary contribution to the observed isotropic radio background through the emission of synchrotron radiation from electron and positron annihilation products. For reasonable estimates of the magnetic fields present in clusters and galaxies, we find that dark matter could potentially account for the observed radio excess, but only if it annihilates mostly to electrons and/or muons, and only if it possesses a mass in the range of approximately 5-50 GeV. For such models, the annihilation cross section required to normalize the synchrotron signal to the observed excess is sigma v ~ (0.4-30) x 10^-26 cm^3/s, sim...

  2. On the Direct Detection of Dark Matter Annihilation

    CERN Document Server

    Cherry, John F; Shoemaker, Ian M

    2015-01-01

    We investigate the direct detection phenomenology of a class of dark matter (DM) models in which DM does not directly interact with nuclei, but rather the products of its annihilation do. When these annihilation products are very light compared to the DM mass, the scattering in direct detection experiments is controlled by relativistic kinematics. This results in a distinctive recoil spectrum, a non-standard or even absent annual modulation, and the ability to probe DM masses as low as $\sim$10 MeV. We use current LUX data to show that experimental sensitivity to thermal relic annihilation cross sections has already been reached in a class of models. Moreover, the compatibility of dark matter direct detection experiments can be compared directly in $E_{min}$ space without making assumptions about DM astrophysics. Lastly, when DM has direct couplings to nuclei, the limit from annihilation to relativistic particles in the Sun can be stronger than that of conventional non-relativistic direct detect...

  3. The HAWC Sensitivity to Dark Matter Annihilation and Decay

    Science.gov (United States)

    Yapici, Tolga; HAWC Collaboration

    2016-03-01

    The High Altitude Water Cherenkov (HAWC) Observatory is an extensive air shower array in the state of Puebla, Mexico, at an altitude of 4100 m. The HAWC observatory will perform an indirect search for dark matter via GeV-TeV photons resulting from dark matter annihilation and decay, including annihilation from extended dark matter sources. We consider the HAWC sensitivity to a subset of the sources, including the M31 galaxy, the Virgo cluster, and the Galactic center. We simulate the HAWC response to gamma rays from the sources in well-motivated dark matter annihilation channels. We show the limits HAWC can place on the dark matter cross-section or lifetime from these sources if no gamma-ray excess is observed. In particular, for dark matter annihilating into gauge bosons, HAWC will be able to probe, over a narrow range of dark matter masses, cross-sections below that expected for a thermal relic. HAWC should also be sensitive to cross-sections higher than thermal for masses up to nearly 1000 TeV. HAWC will be sensitive to decaying dark matter for these masses as well. HAWC can explore higher dark matter masses than are currently constrained.

  4. Interference fragmentation functions in electron-positron annihilation

    NARCIS (Netherlands)

    Boer, D; Jakob, R; Radici, M

    2003-01-01

    We study the process of electron-positron annihilation into back-to-back jets, where in each jet a pair of hadrons is detected. The orientation of these two pairs with respect to each other can be used to extract the interference fragmentation functions in a clean way: for instance, from BELLE or BA

  5. On the Direct Detection of Dark Matter Annihilation

    DEFF Research Database (Denmark)

    Cherry, John F.; Frandsen, Mads T.; Shoemaker, Ian M.

    2015-01-01

    experiments is controlled by relativistic kinematics. This results in a distinctive recoil spectrum, a non-standard or even absent annual modulation, and the ability to probe DM masses as low as $\sim$10 MeV. We use current LUX data to show that experimental sensitivity to thermal relic annihilation...

  6. Positron Annihilation in a Rubber Modified Epoxy Resin

    DEFF Research Database (Denmark)

    Mogensen, O. E.; Jacobsen, F. M.; Pethrick, R. A.

    1979-01-01

    Positron annihilation data is reported on a rubber-modified epoxy resin. Studies of the temperature dependence of the o-positronium lifetime indicated the existence of three distinct regions; the associated transition temperatures by comparison with dilatometric data can be ascribed respectively...

  7. $\overline{p}p$ annihilation into four charged pions at rest and in flight

    CERN Document Server

    Salvini, P; Filippini, V; Fontana, A; Montagna, P; Panzarasa, A; Rotondi, A; Bargiotti, M; Bertin, A; Bruschi, M; Capponi, M; Carbone, A; De Castro, S; Fabbri, Franco Luigi; Faccioli, P; Galli, D; Giacobbe, B; Grimaldi, F; Marconi, U; Massa, I; Piccinini, M; Cesari, N S; Spighi, R; Vecchi, S; Villa, M; Vitale, A; Zoccoli, A; Bianconi, A; Corradini, M; Donzella, A; Lodi-Rizzini, E; Venturelli, L; Zenoni, A; Cicalò, C; De Falco, A; Masoni, A; Puddu, G; Serci, S; Usai, G L; Gorchakov, O E; Prakhov, S N; Rozhdestvensky, A M; Sapozhnikov, M G; Poli, M; Gianotti, P; Guaraldo, C; Lanaro, A; Lucherini, V; Petrascu, C; Panzieri, D; Balestra, F; Bussa, M P; Busso, L; Cerello, P; Denisov, O; Ferrero, L; Grasso, A; Maggiora, A; Tosello, F; Botta, E; Bressani, Tullio; Calvo, D; De Mori, F; Feliciello, A; Filippi, A; Marcello, S; Agnello, M; Iazzi, F

    2004-01-01

    The spin-parity analysis of the data on the p̄p → 2π⁺2π⁻ annihilation reaction at rest in liquid and in gaseous hydrogen at 3 bar pressure and in flight at p̄ momentum of approximately 50 MeV/c, collected by the Obelix spectrometer at the LEAR complex of CERN, is presented. The relative branching ratios (a₁(1260) → σπ)/(a₁(1260) → ρπ) = 0.06 ± 0.05 and (π(1300) → σπ)/(π(1300) → ρπ) = 2.2 ± 0.4 are obtained. It is also shown that the inclusion of the exotic meson π₁(1400), J^PC = 1⁻⁺, with mass M = 1.384 ± 0.028 GeV/c² and width Γ = 0.378 ± 0.058 GeV/c², in its decay to ρπ, improves the fit, and some implications of these results are briefly discussed. The relative S- and P-wave annihilation percentages into four charged pions at the two target densities are obtained. (53 refs.)

  8. Photon from the annihilation process with CGC in the pA collision

    Science.gov (United States)

    Benić, Sanjin; Fukushima, Kenji

    2017-02-01

    We discuss the photon production in the pA collision in the framework of the color glass condensate (CGC) with expansion in terms of the proton color source ρp. We work in a regime where the color density ρA of the nucleus is large enough to justify the CGC treatment, while soft gluons in the proton could be dominant over quark components but do not yet belong to the CGC regime, so that we can still expand the amplitude in powers of ρp. The zeroth-order contribution to the photon production is known to appear from the Bremsstrahlung process, and the first-order corrections consist of the Bremsstrahlung diagrams with pair-produced quarks and the annihilation diagrams of quarks involving a gluon sourced by ρp. Because the final states are different, there is no interference between these two processes. In this work we elucidate the calculation procedures in detail, focusing on the annihilation diagrams only. Using the McLerran-Venugopalan model for the color average we numerically calculate the photon production rate and discuss functional forms that fit the numerical results.

  9. A New Signature of Dark Matter Annihilations: Gamma-Rays from Intermediate-Mass Black Holes

    CERN Document Server

    Bertone, Gianfranco; Silk, J; Bertone, Gianfranco; Zentner, Andrew R.; Silk, Joseph

    2005-01-01

    We study the prospects for detecting gamma-rays from Dark Matter (DM) annihilations in enhancements of the DM density (mini-spikes) around intermediate-mass black holes with masses in the range $10^2 \lesssim M/M_\odot \lesssim 10^6$. Focusing on two different IMBH formation scenarios, we show that, for typical values of mass and cross section of common DM candidates, mini-spikes, produced by the adiabatic growth of DM around pregalactic IMBHs, would be bright sources of gamma-rays, which could be easily detected with large field-of-view gamma-ray experiments such as GLAST, and further studied with smaller field-of-view, larger-area experiments like the Air Cherenkov Telescopes CANGAROO, HESS, MAGIC and VERITAS. The detection of many gamma-ray sources not associated with a luminous component of the Local Group, and with identical cut-offs in their energy spectra at the mass of the DM particle, would provide a potential smoking-gun signature of DM annihilations and shed new light on the nature of intermediate and superm...

  10. Constraints on dark matter annihilations from diffuse gamma-ray emission in the Galaxy

    CERN Document Server

    Tavakoli, Maryam; Evoli, Carmelo; Ullio, Piero

    2014-01-01

    Recent advances in gamma-ray, cosmic-ray, infrared and radio astronomy have allowed us to develop a significantly better understanding of the galactic medium properties in the last few years. In this work, using the DRAGON code, which numerically solves the CR propagation equation, and calculating gamma-ray emissivities on a 2-dimensional grid enclosing the Galaxy, we study in a self-consistent manner models for the galactic diffuse gamma-ray emission. Our models are cross-checked against both the available CR and gamma-ray data. We address the extent to which dark matter annihilations in the Galaxy can contribute to the diffuse gamma-ray flux towards different directions on the sky. Moreover we discuss the impact that astrophysical uncertainties of non-DM nature have on the derived gamma-ray limits. Such uncertainties are related to the diffusion properties in the Galaxy, the interstellar gas and the interstellar radiation field energy densities. Light ~10 GeV dark matter annihilating dominantly to hadrons is more s...

  11. Detection of SUSY Signals in Stau Neutralino Co-annihilation Region at the LHC

    CERN Document Server

    Arnowitt, R; Dutta, B; Kamon, T; Korev, N; Simeon, P; Toback, D; Wagner, P

    2007-01-01

    We study the prospects of detecting the signal in the stau neutralino co-annihilation region at the LHC using tau leptons. The co-annihilation signal is characterized by a stau-neutralino mass difference (dM) of 5-15 GeV, required for consistency with the WMAP measurement of the cold dark matter relic density as well as all other experimental bounds within the minimal supergravity model. Focusing on tau's from neutralino_2 --> tau stau --> tau tau neutralino_1 decays in gluino and squark production, we consider inclusive MET+jet+3tau production, with two tau's above a high E_T threshold and a third tau above a lower threshold. Two observables, the number of opposite-signed tau pairs minus the number of like-signed tau pairs and the peak position of the di-tau invariant mass distribution, allow for the simultaneous determination of dM and M_gluino. For dM = 9 GeV and M_gluino = 850 GeV with 30 fb^-1 of data, we can measure dM to 15% and M_gluino to 6%.

  12. Detection of SUSY Signals in Stau Neutralino Co-Annihilation Region at the LHC

    Science.gov (United States)

    Arnowitt, R.; Aurisano, A.; Dutta, B.; Kamon, T.; Kolev, N.; Simeon, P.; Toback, D.; Wagner, P.

    2007-04-01

    We study the prospects of detecting the signal in the stau neutralino co-annihilation region at the LHC using tau leptons. The co-annihilation signal is characterized by a stau-neutralino mass difference (dM) of 5-15 GeV, required for consistency with the WMAP measurement of the cold dark matter relic density as well as all other experimental bounds within the minimal supergravity model. Focusing on tau's from neutralino_2 --> tau stau --> tau tau neutralino_1 decays in gluino and squark production, we consider inclusive MET+jet+3tau production, with two tau's above a high E_T threshold and a third tau above a lower threshold. Two observables, the number of opposite-signed tau pairs minus the number of like-signed tau pairs and the peak position of the di-tau invariant mass distribution, allow for the simultaneous determination of dM and M_gluino. For dM = 9 GeV and M_gluino = 850 GeV with 30 fb^-1 of data, we can measure dM to 15% and M_gluino to 6%.
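
    The two observables used above are straightforward to form from reconstructed tau candidates. A minimal Python sketch (with a hypothetical event format; illustrative only, not the authors' analysis code) of the opposite-sign-minus-like-sign count and the di-tau invariant mass:

        import math

        def ditau_mass(p1, p2):
            """Invariant mass (GeV) of two tau candidates given (E, px, py, pz) four-vectors."""
            E = p1[0] + p2[0]
            px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
            return math.sqrt(max(E * E - (px * px + py * py + pz * pz), 0.0))

        def os_minus_ls(tau_pairs):
            """N(opposite-sign) - N(like-sign) for an iterable of ((q1, p4_1), (q2, p4_2)) pairs."""
            n_os = sum(1 for (q1, _), (q2, _) in tau_pairs if q1 * q2 < 0)
            n_ls = sum(1 for (q1, _), (q2, _) in tau_pairs if q1 * q2 > 0)
            return n_os - n_ls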

  13. Dark matter annihilation radiation in hydrodynamic simulations of Milky Way haloes

    CERN Document Server

    Schaller, Matthieu; Theuns, Tom; Calore, Francesca; Bertone, Gianfranco; Bozorgnia, Nassim; Crain, Robert A; Fattahi, Azadeh; Navarro, Julio F; Sawala, Till; Schaye, Joop

    2015-01-01

    We obtain predictions for the properties of cold dark matter annihilation radiation using high resolution hydrodynamic zoom-in cosmological simulations of Milky Way-like galaxies carried out as part of the "Evolution and Assembly of GaLaxies and their Environments" (EAGLE) programme. Galactic halos in the simulation have significantly different properties from those assumed by the "standard halo model" often used in dark matter detection studies. The formation of the galaxy causes a contraction of the dark matter halo, whose density profile develops a steeper slope than the Navarro-Frenk-White profile between $r\\approx1.5~\\rm{kpc}$ and $r\\approx10~\\rm{kpc}$, and a flatter slope at smaller radii. The inner regions of the halos are almost perfectly spherical (axis ratios $b/a > 0.96$ within $r=500~\\rm{pc}$) and there is no offset larger than $45~\\rm{pc}$ between the centre of the stellar distribution and the centre of the dark halo. The morphology of the predicted dark matter annihilation radiation signal is in...

  14. Search for Photon-Linelike Signatures from Dark Matter Annihilations with H.E.S.S.

    Science.gov (United States)

    Abramowski, A.; Acero, F.; Aharonian, F.; Akhperjanian, A. G.; Anton, G.; Balenderan, S.; Balzer, A.; Barnacka, A.; Becherini, Y.; Becker Tjus, J.; Bernlöhr, K.; Birsin, E.; Biteau, J.; Bochow, A.; Boisson, C.; Bolmont, J.; Bordas, P.; Brucker, J.; Brun, F.; Brun, P.; Bulik, T.; Carrigan, S.; Casanova, S.; Cerruti, M.; Chadwick, P. M.; Chaves, R. C. G.; Cheesebrough, A.; Colafrancesco, S.; Cologna, G.; Conrad, J.; Couturier, C.; Dalton, M.; Daniel, M. K.; Davids, I. D.; Degrange, B.; Deil, C.; deWilt, P.; Dickinson, H. J.; Djannati-Ataï, A.; Domainko, W.; Drury, L. O.'C.; Dubus, G.; Dutson, K.; Dyks, J.; Dyrda, M.; Egberts, K.; Eger, P.; Espigat, P.; Fallon, L.; Farnier, C.; Fegan, S.; Feinstein, F.; Fernandes, M. V.; Fernandez, D.; Fiasson, A.; Fontaine, G.; Förster, A.; Füßling, M.; Gajdus, M.; Gallant, Y. A.; Garrigoux, T.; Gast, H.; Giebels, B.; Glicenstein, J. F.; Glück, B.; Göring, D.; Grondin, M.-H.; Häffner, S.; Hague, J. D.; Hahn, J.; Hampf, D.; Harris, J.; Heinz, S.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hillert, A.; Hinton, J. A.; Hofmann, W.; Hofverberg, P.; Holler, M.; Horns, D.; Jacholkowska, A.; Jahn, C.; Jamrozy, M.; Jung, I.; Kastendieck, M. A.; Katarzyński, K.; Katz, U.; Kaufmann, S.; Khélifi, B.; Klepser, S.; Klochkov, D.; Kluźniak, W.; Kneiske, T.; Komin, Nu.; Kosack, K.; Kossakowski, R.; Krayzel, F.; Krüger, P. P.; Laffon, H.; Lamanna, G.; Lefaucheur, J.; Lemoine-Goumard, M.; Lenain, J.-P.; Lennarz, D.; Lohse, T.; Lopatin, A.; Lu, C.-C.; Marandon, V.; Marcowith, A.; Masbou, J.; Maurin, G.; Maxted, N.; Mayer, M.; McComb, T. J. L.; Medina, M. C.; Méhault, J.; Menzler, U.; Moderski, R.; Mohamed, M.; Moulin, E.; Naumann, C. L.; Naumann-Godo, M.; de Naurois, M.; Nedbal, D.; Nekrassov, D.; Nguyen, N.; Niemiec, J.; Nolan, S. J.; Ohm, S.; de Oña Wilhelmi, E.; Opitz, B.; Ostrowski, M.; Oya, I.; Panter, M.; Parsons, R. D.; Paz Arribas, M.; Pekeur, N. W.; Pelletier, G.; Perez, J.; Petrucci, P.-O.; Peyaud, B.; Pita, S.; Pühlhofer, G.; Punch, M.; Quirrenbach, A.; Raue, M.; Reimer, A.; Reimer, O.; Renaud, M.; de los Reyes, R.; Rieger, F.; Ripken, J.; Rob, L.; Rosier-Lees, S.; Rowell, G.; Rudak, B.; Rulten, C. B.; Sahakian, V.; Sanchez, D. A.; Santangelo, A.; Schlickeiser, R.; Schulz, A.; Schwanke, U.; Schwarzburg, S.; Schwemmer, S.; Sheidaei, F.; Skilton, J. L.; Sol, H.; Spengler, G.; Stawarz, Ł.; Steenkamp, R.; Stegmann, C.; Stinzing, F.; Stycz, K.; Sushch, I.; Szostek, A.; Tavernet, J.-P.; Terrier, R.; Tluczykont, M.; Trichard, C.; Valerius, K.; van Eldik, C.; Vasileiadis, G.; Venter, C.; Viana, A.; Vincent, P.; Völk, H. J.; Volpe, F.; Vorobiov, S.; Vorster, M.; Wagner, S. J.; Ward, M.; White, R.; Wierzcholska, A.; Wouters, D.; Zacharias, M.; Zajczyk, A.; Zdziarski, A. A.; Zech, A.; Zechlin, H.-S.

    2013-01-01

    Gamma-ray line signatures can be expected in the very-high-energy (Eγ > 100 GeV) domain due to self-annihilation or decay of dark matter (DM) particles in space. Such a signal would be readily distinguishable from astrophysical γ-ray sources that in most cases produce continuous spectra that span over several orders of magnitude in energy. Using data collected with the H.E.S.S. γ-ray instrument, upper limits on linelike emission are obtained in the energy range between ∼500 GeV and ∼25 TeV for the central part of the Milky Way halo and for extragalactic observations, complementing recent limits obtained with the Fermi-LAT instrument at lower energies. No statistically significant signal could be found. For monochromatic γ-ray line emission, flux limits of (2×10⁻⁷–2×10⁻⁵) m⁻² s⁻¹ sr⁻¹ and (1×10⁻⁸–2×10⁻⁶) m⁻² s⁻¹ sr⁻¹ are obtained for the central part of the Milky Way halo and extragalactic observations, respectively. For a DM particle mass of 1 TeV, limits on the velocity-averaged DM annihilation cross section ⟨σv⟩(χχ→γγ) reach ∼10⁻²⁷ cm³ s⁻¹, based on the Einasto parametrization of the Galactic DM halo density profile.

  15. Cosmological and astrophysical signatures of dark matter annihilations into pseudo-Goldstone bosons

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Cely, Camilo; Ibarra, Alejandro; Molinaro, Emiliano, E-mail: camilo.garcia@tum.de, E-mail: alejandro.ibarra@ph.tum.de, E-mail: emiliano.molinaro@tum.de [Physik-Department T30d, Technische Universität München, James-Franck-Straße, Garching, 85748 (Germany)

    2014-02-01

    We investigate a model where the dark matter particle is a chiral fermion field charged under a global U(1) symmetry which is assumed to be spontaneously broken, leading to a pseudo-Goldstone boson (PGB). We argue that the dark matter annihilation into PGBs determines the dark matter relic abundance. Besides, we also note that experimental searches for PGBs allow either for a very long lived PGB, with a lifetime much longer than the age of the Universe, or a relatively short lived PGB, with a lifetime shorter than one minute. Hence, two different scenarios arise, producing very different signatures. In the long lived PGB scenario, the PGB might contribute significantly to the radiation energy density of the Universe. On the other hand, in the short lived PGB scenario, and since the decay length is shorter than one parsec, the s-wave annihilation into a PGB and a CP-even dark scalar in the Galactic center might lead to an intense box feature in the gamma-ray energy spectrum, provided the PGB decay branching ratio into two photons is sizable. We also analyze the constraints on these two scenarios from thermal production, the Higgs invisible decay width and direct dark matter searches.

  16. Search for photon-linelike signatures from dark matter annihilations with H.E.S.S.

    Science.gov (United States)

    Abramowski, A; Acero, F; Aharonian, F; Akhperjanian, A G; Anton, G; Balenderan, S; Balzer, A; Barnacka, A; Becherini, Y; Becker Tjus, J; Bernlöhr, K; Birsin, E; Biteau, J; Bochow, A; Boisson, C; Bolmont, J; Bordas, P; Brucker, J; Brun, F; Brun, P; Bulik, T; Carrigan, S; Casanova, S; Cerruti, M; Chadwick, P M; Chaves, R C G; Cheesebrough, A; Colafrancesco, S; Cologna, G; Conrad, J; Couturier, C; Dalton, M; Daniel, M K; Davids, I D; Degrange, B; Deil, C; deWilt, P; Dickinson, H J; Djannati-Ataï, A; Domainko, W; Drury, L O'C; Dubus, G; Dutson, K; Dyks, J; Dyrda, M; Egberts, K; Eger, P; Espigat, P; Fallon, L; Farnier, C; Fegan, S; Feinstein, F; Fernandes, M V; Fernandez, D; Fiasson, A; Fontaine, G; Förster, A; Füßling, M; Gajdus, M; Gallant, Y A; Garrigoux, T; Gast, H; Giebels, B; Glicenstein, J F; Glück, B; Göring, D; Grondin, M-H; Häffner, S; Hague, J D; Hahn, J; Hampf, D; Harris, J; Heinz, S; Heinzelmann, G; Henri, G; Hermann, G; Hillert, A; Hinton, J A; Hofmann, W; Hofverberg, P; Holler, M; Horns, D; Jacholkowska, A; Jahn, C; Jamrozy, M; Jung, I; Kastendieck, M A; Katarzyński, K; Katz, U; Kaufmann, S; Khélifi, B; Klepser, S; Klochkov, D; Kluźniak, W; Kneiske, T; Komin, Nu; Kosack, K; Kossakowski, R; Krayzel, F; Krüger, P P; Laffon, H; Lamanna, G; Lefaucheur, J; Lemoine-Goumard, M; Lenain, J-P; Lennarz, D; Lohse, T; Lopatin, A; Lu, C-C; Marandon, V; Marcowith, A; Masbou, J; Maurin, G; Maxted, N; Mayer, M; McComb, T J L; Medina, M C; Méhault, J; Menzler, U; Moderski, R; Mohamed, M; Moulin, E; Naumann, C L; Naumann-Godo, M; de Naurois, M; Nedbal, D; Nekrassov, D; Nguyen, N; Niemiec, J; Nolan, S J; Ohm, S; de Oña Wilhelmi, E; Opitz, B; Ostrowski, M; Oya, I; Panter, M; Parsons, R D; Paz Arribas, M; Pekeur, N W; Pelletier, G; Perez, J; Petrucci, P-O; Peyaud, B; Pita, S; Pühlhofer, G; Punch, M; Quirrenbach, A; Raue, M; Reimer, A; Reimer, O; Renaud, M; de Los Reyes, R; Rieger, F; Ripken, J; Rob, L; Rosier-Lees, S; Rowell, G; Rudak, B; Rulten, C B; Sahakian, V; Sanchez, D A; Santangelo, A; Schlickeiser, R; Schulz, A; Schwanke, U; Schwarzburg, S; Schwemmer, S; Sheidaei, F; Skilton, J L; Sol, H; Spengler, G; Stawarz, L; Steenkamp, R; Stegmann, C; Stinzing, F; Stycz, K; Sushch, I; Szostek, A; Tavernet, J-P; Terrier, R; Tluczykont, M; Trichard, C; Valerius, K; van Eldik, C; Vasileiadis, G; Venter, C; Viana, A; Vincent, P; Völk, H J; Volpe, F; Vorobiov, S; Vorster, M; Wagner, S J; Ward, M; White, R; Wierzcholska, A; Wouters, D; Zacharias, M; Zajczyk, A; Zdziarski, A A; Zech, A; Zechlin, H-S

    2013-01-25

    Gamma-ray line signatures can be expected in the very-high-energy (Eγ > 100 GeV) domain due to self-annihilation or decay of dark matter (DM) particles in space. Such a signal would be readily distinguishable from astrophysical γ-ray sources that in most cases produce continuous spectra that span over several orders of magnitude in energy. Using data collected with the H.E.S.S. γ-ray instrument, upper limits on linelike emission are obtained in the energy range between ∼500 GeV and ∼25 TeV for the central part of the Milky Way halo and for extragalactic observations, complementing recent limits obtained with the Fermi-LAT instrument at lower energies. No statistically significant signal could be found. For monochromatic γ-ray line emission, flux limits of (2 × 10⁻⁷ – 2 × 10⁻⁵) m⁻² s⁻¹ sr⁻¹ and (1 × 10⁻⁸ – 2 × 10⁻⁶) m⁻² s⁻¹ sr⁻¹ are obtained for the central part of the Milky Way halo and extragalactic observations, respectively. For a DM particle mass of 1 TeV, limits on the velocity-averaged DM annihilation cross section ⟨σv⟩(χχ → γγ) reach ∼10⁻²⁷ cm³ s⁻¹, based on the Einasto parametrization of the Galactic DM halo density profile.

  17. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2013-01-01

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice prob...

  18. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  19. On Quantum Conditional Probability

    Directory of Open Access Journals (Sweden)

    Isabel Guerra Bobo

    2013-02-01

    We argue that quantum theory does not allow for a generalization of the notion of classical conditional probability by showing that the probability defined by the Lüders rule, standardly interpreted in the literature as the quantum-mechanical conditionalization rule, cannot be interpreted as such.

  20. Choice Probability Generating Functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice...

  1. Probability, Nondeterminism and Concurrency

    DEFF Research Database (Denmark)

    Varacca, Daniele

    Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particula...

  2. Exact Probability Distribution versus Entropy

    Directory of Open Access Journals (Sweden)

    Kerstin Andersson

    2014-10-01

    The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
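
    The quantity approximated in this work, the average number of successive guesses when outcomes are tried in decreasing order of probability, can be evaluated exactly for small vocabularies. A minimal Python sketch (with a made-up toy distribution, for illustration only):

        def average_guesses(probabilities):
            """Average number of guesses when outcomes are guessed in decreasing order of probability."""
            ranked = sorted(probabilities, reverse=True)
            return sum(rank * p for rank, p in enumerate(ranked, start=1))

        # Toy 4-word "language": expected value is 1*0.5 + 2*0.25 + 3*0.15 + 4*0.10 = 1.85 guesses
        print(average_guesses([0.5, 0.25, 0.15, 0.10]))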

  3. Spatial correlation properties and phase singularity annihilation of Gaussian Schell-model beams in the focal region

    Institute of Scientific and Technical Information of China (English)

    Liu Pu-Sheng; Pan Liu-Zhan; Lü Bai-Da

    2008-01-01

    By using the generalized Debye diffraction integral, this paper studies the spatial correlation properties and phase singularity annihilation of apertured Gaussian Schell-model (GSM) beams in the focal region. It is shown that the width of the spectral degree of coherence can be larger than, less than or equal to the corresponding width of the spectral density, which depends not only on the scalar coherence length of the beams, but also on the truncation parameter. With a gradual increase of the truncation parameter, a pair of phase singularities of the spectral degree of coherence in the focal plane approach each other, resulting in subwavelength structures. Finally, the annihilation of pairs of phase singularities takes place at a certain value of the truncation parameter. With increasing scalar coherence length, the annihilation occurs at a larger truncation parameter. However, the creation process of phase singularities outside the focal plane is not found for GSM beams.

  4. Does the gamma-ray signal from the central Milky Way indicate Sommerfeld enhancement of dark matter annihilation?

    Science.gov (United States)

    Chan, Man-Ho

    2016-10-01

    Recently, some studies showed that the GeV gamma-ray excess signal from the central Milky Way can be explained by the annihilation of ∼ 40 GeV dark matter through the bb¯ channel. Based on the morphology of the gamma-ray flux, the best-fit inner slope of the dark matter density profile is γ = 1.26. However, recent analyses of the Milky Way dark matter profile favor γ = 0.6 – 0.8. In this article, we show that the GeV gamma-ray excess can also be explained by the Sommerfeld-enhanced dark matter annihilation through the bb¯ channel with γ = 0.85 – 1.05. We constrain the parameters of the Sommerfeld-enhanced annihilation by using data from Fermi-LAT. We also show that the predicted gamma-ray fluxes emitted from dwarf galaxies generally satisfy recent upper limits on gamma-ray fluxes detected by Fermi-LAT.

  5. Does the gamma-ray signal from the central Milky Way indicate Sommerfeld enhancement of dark matter annihilation?

    Science.gov (United States)

    Chan, Man-Ho

    2016-10-01

    Recently, some studies showed that the GeV gamma-ray excess signal from the central Milky Way can be explained by the annihilation of ˜ 40 GeV dark matter through the bb¯ channel. Based on the morphology of the gamma-ray flux, the best-fit inner slope of the dark matter density profile is γ = 1.26. However, recent analyses of the Milky Way dark matter profile favor γ = 0.6 – 0.8. In this article, we show that the GeV gamma-ray excess can also be explained by the Sommerfeld-enhanced dark matter annihilation through the bb¯ channel with γ = 0.85 – 1.05. We constrain the parameters of the Sommerfeld-enhanced annihilation by using data from Fermi-LAT. We also show that the predicted gamma-ray fluxes emitted from dwarf galaxies generally satisfy recent upper limits on gamma-ray fluxes detected by Fermi-LAT.
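
    For orientation, the inner slope γ discussed above refers to a generalized NFW parametrization of the halo, and in the Coulomb-like limit the Sommerfeld enhancement has a simple closed form. Both expressions below are standard in the literature (one common convention) rather than formulas quoted from this record:

        \rho(r) \;=\; \frac{\rho_s}{(r/r_s)^{\gamma}\,\left(1+r/r_s\right)^{3-\gamma}},
        \qquad
        S(v) \;\simeq\; \frac{\pi\alpha_X/v}{1-e^{-\pi\alpha_X/v}},

    where r_s and \rho_s are scale parameters and \alpha_X is the coupling to the light mediator responsible for the enhancement.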

  6. Unified optical-model approach to low-energy antiproton annihilation on nuclei and to antiprotonic atoms

    Science.gov (United States)

    Batty, C. J.; Friedman, E.; Gal, A.

    2001-07-01

    A successful unified description of p̄ nuclear interactions near E = 0 is achieved using a p̄ optical potential within a folding model, V_opt ∼ v̄ ∗ ρ, where a p̄p potential v̄ is folded with the nuclear density ρ. The potential v̄ fits very well the measured p̄p-annihilation cross sections at low energies ( p L10, as well as the few low-energy p̄-annihilation cross sections measured on Ne. Both v̄ and V_opt are found to be highly absorptive, which leads to a saturation of reaction cross sections in hydrogen and on nuclei. Predictions are made for p̄-annihilation cross sections over the entire periodic table at these very low energies, and the systematics of the calculated cross sections as a function of A, Z and E is discussed and explained in terms of a Coulomb-modified strong-absorption model. Finally, optical potentials which fit simultaneously low-energy p̄-⁴He observables for E ≷ 0 are used to assess the reliability of extracting Coulomb-modified p̄ nuclear scattering lengths directly from the data. The relationship between different kinds of scattering lengths is discussed and previously published systematics of the p̄ nuclear scattering lengths is updated.
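
    A hedged note on the folding notation above: V_opt ∼ v̄ ∗ ρ stands for the convolution of an effective p̄p interaction with the nuclear density,

        V_{\rm opt}(\mathbf{r}) \;\sim\; (\bar v * \rho)(\mathbf{r}) \;=\; \int d^{3}r'\, \bar v\!\left(|\mathbf{r}-\mathbf{r}'|\right)\rho(\mathbf{r}'),

    so a strongly absorptive (complex) v̄ directly translates into a strongly absorptive optical potential, consistent with the saturation of reaction cross sections described in the abstract.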

  7. Neutrino flavor ratios as diagnostic of solar WIMP annihilation

    Science.gov (United States)

    Lehnert, Ralf; Weiler, Thomas J.

    2008-06-01

    We consider the neutrino (and antineutrino) flavors arriving at the Earth for neutrinos produced in the annihilation of weakly interacting massive particles (WIMPs) in the sun’s core. Solar-matter effects on the flavor propagation of the resulting ≳GeV neutrinos are studied analytically within a density-matrix formalism. Matter effects, including mass-state level crossings, influence the flavor fluxes considerably. The exposition herein is somewhat pedagogical, in that it starts with adiabatic evolution of single flavors from the sun’s center, with θ13 set to zero, and progresses to fully realistic processing of the flavor ratios expected in WIMP decay, from the sun’s core to the Earth. In the fully realistic calculation, nonadiabatic level crossing is included, as are possible nonzero values for θ13 and the CP-violating phase δ. Because of resonance enhancement in matter, nonzero values of θ13 even smaller than a degree can noticeably affect flavor propagation. Both normal and inverted neutrino-mass hierarchies are considered. Our main conclusion is that measuring flavor ratios (in addition to energy spectra) of ≳GeV solar neutrinos can provide discrimination between WIMP models. In particular, we demonstrate the flavor differences at the Earth for neutrinos from the two main classes of WIMP final states, namely W+W- and 95%bb¯+5%τ+τ-. Conversely, if WIMP properties were to be learned from production in future accelerators, then the flavor ratios of ≳GeV solar neutrinos might be useful for inferring θ13 and the mass hierarchy. From the full calculations, we find (and prove) some general features: a flavor-democratic flux produced at the sun’s core arrives at the Earth still flavor democratic; for maximal θ32 but arbitrary θ21 and θ13, the replacement δ→π-δ leaves the νe flavor spectra unaltered but interchanges νμ and ντ spectra at the Earth; and, only for neutrinos in the inverted hierarchy and antineutrinos in the normal
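
    In the standard treatment, the density-matrix formalism referred to here amounts to evolving the 3×3 flavor density matrix with the vacuum Hamiltonian plus the charged-current matter potential along the trajectory out of the Sun; schematically (a standard form, not specific to this paper),

        i\,\frac{d\rho}{dt} \;=\; \left[H,\rho\right],
        \qquad
        H \;=\; \frac{1}{2E}\,U\,{\rm diag}\!\left(0,\;\Delta m^{2}_{21},\;\Delta m^{2}_{31}\right)U^{\dagger}
        \;\pm\; {\rm diag}\!\left(\sqrt{2}\,G_F\, n_e(r),\,0,\,0\right),

    with the upper (lower) sign for neutrinos (antineutrinos), U the PMNS matrix and n_e(r) the solar electron density.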

  8. Probability distributions for multimeric systems.

    Science.gov (United States)

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous; and the probability density functions (pdf) are well-approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package on Mathematica, we minimize a Euclidean distance function comprising a sum of the squared differences between the left and the right hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
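
    As a toy illustration of the moment-matching step (fitting skew-normal parameters so that low-order moments agree with target values), here is a minimal univariate sketch in Python/SciPy; it only conveys the idea of the optimization and is not the authors' multivariate Mathematica implementation:

        from scipy.stats import skewnorm
        from scipy.optimize import minimize

        # Target moments, e.g. obtained from moment equations derived from a master equation
        target_mean, target_var, target_skew = 50.0, 40.0, 0.6

        def moment_mismatch(params):
            """Squared mismatch between skew-normal moments and the target moments."""
            shape, loc, scale = params
            if scale <= 0:
                return 1e12  # penalize invalid scale parameters
            mean, var, skew = skewnorm.stats(shape, loc=loc, scale=scale, moments='mvs')
            return (mean - target_mean) ** 2 + (var - target_var) ** 2 + (skew - target_skew) ** 2

        result = minimize(moment_mismatch, x0=[1.0, 45.0, 6.0], method='Nelder-Mead')
        print("fitted (shape, loc, scale):", result.x)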

  9. Probability and Measure

    CERN Document Server

    Billingsley, Patrick

    2012-01-01

    Praise for the Third Edition "It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this

  10. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially by the relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows growing importance of probability as coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  11. Probability in physics

    CERN Document Server

    Hemmo, Meir

    2012-01-01

    What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton‘s laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world‘s foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their  explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive. 

  12. Probability an introduction

    CERN Document Server

    Grimmett, Geoffrey

    2014-01-01

    Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...

  13. Probabilities in physics

    CERN Document Server

    Hartmann, Stephan

    2011-01-01

    Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...

  14. Probability and Statistical Inference

    OpenAIRE

    Prosper, Harrison B.

    2006-01-01

    These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

  15. Quantum computing and probability.

    Science.gov (United States)

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  16. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  17. POSITRON ANNIHILATION AND CONDUCTIVITY MEASUREMENTS ON POLYANILINE

    Institute of Scientific and Technical Information of China (English)

    彭治林; 刘皓; et al.

    1994-01-01

    The positron lifetime spectra and electrical conductivities have been measured for polyaniline as a function of protonation level ([H+] from 10^-7 to 10^0.8 mol/L). We observed that (1) the short lifetime τ1, which is related to the electron density in the bulk, decreased with the protonation level; (2) the intermediate lifetime τ2 ≈ 360 ps remained almost constant, whereas its intensity I2 increased with increasing protonation level, which is related to the conductivity of the material. These results are discussed in terms of a conducting-island model.

  18. Integrated statistical modelling of spatial landslide probability

    Science.gov (United States)

    Mergili, M.; Chu, H.-J.

    2015-09-01

    Statistical methods are commonly employed to estimate spatial probabilities of landslide release at the catchment or regional scale. Travel distances and impact areas are often computed by means of conceptual mass point models. The present work introduces a fully automated procedure extending and combining both concepts to compute an integrated spatial landslide probability: (i) the landslide inventory is subset into release and deposition zones. (ii) We employ a simple statistical approach to estimate the pixel-based landslide release probability. (iii) We use the cumulative probability density function of the angle of reach of the observed landslide pixels to assign an impact probability to each pixel. (iv) We introduce the zonal probability, i.e. the spatial probability that at least one landslide pixel occurs within a zone of defined size. We quantify this relationship by a set of empirical curves. (v) The integrated spatial landslide probability is defined as the maximum of the release probability and the product of the impact probability and the zonal release probability relevant for each pixel. We demonstrate the approach with a 637 km2 study area in southern Taiwan, using an inventory of 1399 landslides triggered by the typhoon Morakot in 2009. We observe that (i) the average integrated spatial landslide probability over the entire study area corresponds reasonably well to the fraction of the observed landslide area; (ii) the model performs moderately well in predicting the observed spatial landslide distribution; (iii) the size of the release zone (or any other zone of spatial aggregation) influences the integrated spatial landslide probability to a much higher degree than the pixel-based release probability; (iv) removing the largest landslides from the analysis leads to an enhanced model performance.
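
    Step (v) of the procedure reduces to a pixel-wise raster operation; a minimal NumPy sketch (with hypothetical toy arrays, not the authors' implementation):

        import numpy as np

        def integrated_landslide_probability(p_release, p_impact, p_zonal_release):
            """Pixel-wise integrated probability: max(release, impact * zonal release).

            All inputs are 2-D arrays of probabilities defined on the same grid.
            """
            return np.maximum(p_release, p_impact * p_zonal_release)

        # Toy 2x2 grid
        p_rel = np.array([[0.10, 0.02], [0.00, 0.05]])
        p_imp = np.array([[0.00, 0.60], [0.80, 0.10]])
        p_zon = np.array([[0.30, 0.30], [0.40, 0.40]])
        print(integrated_landslide_probability(p_rel, p_imp, p_zon))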

  19. Long range hops and the pair annihilation reaction A+A-->0: renormalization group and simulation.

    Science.gov (United States)

    Vernon, Daniel C

    2003-10-01

    A simple example of a nonequilibrium system for which fluctuations are important is a system of particles which diffuse and may annihilate in pairs on contact. The renormalization group can be used to calculate the time dependence of the density of particles, and provides both an exact value for the exponent governing the decay of particles and an epsilon expansion for the amplitude of this power law. When the diffusion is anomalous, as when the particles perform Lévy flights, the critical dimension depends continuously on the control parameter for the Lévy distribution. The epsilon expansion can then become an expansion in a small parameter. We present the renormalization group calculation and compare these results with those of a simulation.
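
    As an illustration of the process studied here, a minimal sketch of a lattice Monte Carlo for A + A -> 0 with ordinary nearest-neighbour hops (not the Lévy flights analysed in the paper); lattice size, initial density and step count are arbitrary:

    import random

    def simulate_pair_annihilation(L=10000, density0=0.2, steps=200, seed=1):
        """A + A -> 0 on a 1D periodic lattice: every particle hops to a random
        neighbour each sweep, and two particles landing on the same site annihilate.
        Returns the particle density after each sweep."""
        random.seed(seed)
        occupied = set(random.sample(range(L), int(density0 * L)))
        densities = []
        for _ in range(steps):
            new_occupied = set()
            for site in occupied:
                target = (site + random.choice((-1, 1))) % L
                if target in new_occupied:
                    new_occupied.remove(target)   # two particles meet: pair annihilation
                else:
                    new_occupied.add(target)
            occupied = new_occupied
            densities.append(len(occupied) / L)
        return densities

    rho = simulate_pair_annihilation()
    print(rho[9], rho[99], rho[199])   # density decays roughly as a power law in time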

  20. Singlet-triplet annihilation limits exciton yield in poly(3-hexylthiophene)

    CERN Document Server

    Steiner, Florian; Lupton, John M

    2014-01-01

    Control of chain length and morphology in combination with single-molecule spectroscopy techniques provide a comprehensive photophysical picture of excited-state losses in the prototypical conjugated polymer poly(3-hexylthiophene) (P3HT). A universal self-quenching mechanism is revealed, based on singlet-triplet exciton annihilation, which accounts for the dramatic loss in fluorescence quantum yield of a single P3HT chain between its solution (unfolded) and bulk-like (folded) state. Triplet excitons fundamentally limit the fluorescence of organic photovoltaic materials, which impacts on the conversion of singlet excitons to separated charge carriers, decreasing the efficiency of energy harvesting at high excitation densities. Interexcitonic interactions are so effective that a single P3HT chain of >100 kDa weight behaves like a two-level system, exhibiting perfect photon-antibunching.

  1. Positron annihilation study of microvoids in centrifugally atomized 304 stainless steel

    Science.gov (United States)

    Kim, J. Y.; Byrne, J. G.

    1993-03-01

    Positron trapping in microvoids was studied by positron-lifetime and positron Doppler line-shape measurements of centrifugally atomized 304 stainless-steel powder, which was hot-isostatically-press consolidated. This material contained a concentration of several times 10^23/m^3 of 1.5-nm-diam microvoids. Positron annihilation was strongly influenced by the microvoids in that a very long lifetime component τ3 of about 600 ps resulted. The intensity of the τ3 component decreased with decreasing number density of 1.5 nm microvoids. The Doppler peak shape was found to be much more strongly influenced by microvoids than by any other defects such as precipitates or grain boundaries. In particular, microvoids produced significant narrowing of the Doppler distribution shape.

  2. Positron annihilation study of Fe-ion irradiated reactor pressure vessel model alloys

    Science.gov (United States)

    Chen, L.; Li, Z. C.; Schut, H.; Sekimura, N.

    2016-01-01

    The degradation of reactor pressure vessel steels under irradiation, which results from the hardening and embrittlement caused by a high number density of nanometer-scale damage, is of increasingly crucial concern for safe nuclear power plant operation and possible reactor lifetime prolongation. In this paper, the radiation damage in model alloys with increasing chemical complexity (Fe, Fe-Cu, Fe-Cu-Si, Fe-Cu-Ni and Fe-Cu-Ni-Mn) has been studied by Positron Annihilation Doppler Broadening spectroscopy after 1.5 MeV Fe-ion implantation at room temperature or high temperature (290 °C). It is found that the room temperature irradiation generally leads to the formation of vacancy-type defects in the Fe matrix. The high temperature irradiation exhibits an additional annealing effect for the radiation damage. Besides the Cu-rich clusters observed by the positron probe, the results show formation of vacancy-Mn complexes for implantation at low temperatures.

  3. DM rate at NLO and the impact of SUSY-QCD-corrections to (co-)annihilation-processes on neutralino dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Harz, Julia [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Herrmann, Bjoern [Laboratoire d' Annecy de Physique Theorique, Annecy-le-Vieux (France); Klasen, Michael; Meinecke, Moritz; Steppeler, Patrick [Institute of Theoretical Physics Muenster (Germany); Kovarik, Karol [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany); Le Boulc' h, Quentin [Laboratoire de Physique Subatomique et de Cosmologie, Grenoble (France)

    2013-07-01

    A powerful method to constrain the parameter space of theories beyond the Standard Model is to compare the predicted dark matter relic density with cosmological precision measurements, in particular with WMAP and the upcoming Planck data. On the particle physics side, the main uncertainty on the relic density arises from the (co-)annihilation cross sections of the dark matter particle. After a motivation for including higher order corrections in the prediction of the relic density, the DM rate at NLO project will be presented: a software package that allows for the computation of the neutralino (co-)annihilation cross sections including SUSY-QCD corrections at the one-loop level and the evaluation of their effect on the relic density using a link to the public codes MicrOMEGAs and DarkSUSY. Recent results on the impact of SUSY-QCD corrections on the neutralino (co-)annihilation cross section, as well as further ongoing projects in the context of the DM rate at NLO project, are discussed.

  4. The perception of probability.

    Science.gov (United States)

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.
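
    For concreteness, a small sketch of the experimental stimulus described above, a stepwise nonstationary Bernoulli process, together with a naive sliding-window estimate for comparison; this is not the authors' change-point model, and all parameter values are illustrative:

    import random

    def stepwise_bernoulli(n_trials=1000, change_prob=0.005, seed=0):
        """Generate outcomes from a Bernoulli process whose hidden parameter
        jumps to a new random value at random change points."""
        random.seed(seed)
        p, outcomes, hidden = random.random(), [], []
        for _ in range(n_trials):
            if random.random() < change_prob:
                p = random.random()          # hidden parameter steps to a new value
            hidden.append(p)
            outcomes.append(1 if random.random() < p else 0)
        return outcomes, hidden

    def sliding_window_estimate(outcomes, window=50):
        """Naive trial-by-trial estimate: the mean of the last `window` outcomes."""
        return [sum(outcomes[max(0, t - window + 1):t + 1]) / min(t + 1, window)
                for t in range(len(outcomes))]

    outcomes, hidden = stepwise_bernoulli()
    estimates = sliding_window_estimate(outcomes)
    print(hidden[-1], estimates[-1])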

  5. On thermal corrections to near-threshold co-annihilation

    CERN Document Server

    Kim, Seyong

    2016-01-01

    We consider non-relativistic "dark" particles interacting through gauge boson exchange. At finite temperature, gauge exchange is modified in many ways: virtual corrections lead to Debye screening; real corrections amount to frequent scatterings of the heavy particles on light plasma constituents; mixing angles change. In a certain temperature and energy range, these effects are of order unity. Taking them into account in a resummed form, we estimate the near-threshold spectrum of kinetically equilibrated co-annihilating TeV scale particles. Weakly bound states are shown to "melt" below freeze-out, whereas with attractive strong interactions, relevant e.g. for gluinos, bound states boost the co-annihilation rate by a factor 4...80 with respect to the Sommerfeld estimate, thereby perhaps helping to avoid overclosure of the universe. Modestly non-degenerate dark sector masses and a way to combine the contributions of channels with different gauge and spin structures are also discussed.

  6. Positron annihilation lifetime spectroscopy at a superconducting electron accelerator

    Science.gov (United States)

    Wagner, A.; Anwand, W.; Attallah, A. G.; Dornberg, G.; Elsayed, M.; Enke, D.; Hussein, A. E. M.; Krause-Rehberg, R.; Liedke, M. O.; Potzger, K.; Trinh, T. T.

    2017-01-01

    The Helmholtz-Zentrum Dresden-Rossendorf operates a superconducting linear accelerator for electrons with energies up to 35 MeV and average beam currents up to 1.6 mA. The electron beam is employed for production of several secondary beams including X-rays from bremsstrahlung production, neutrons, and positrons. The secondary positron beam after moderation feeds the Monoenergetic Positron Source (MePS) where positron annihilation lifetime (PALS) and positron annihilation Doppler-broadening experiments in materials science are performed in parallel. The adjustable repetition rate of the continuous-wave electron beams allows matching of the pulse separation to the positron lifetime in the sample under study. The energy of the positron beam can be set between 0.5 keV and 20 keV to perform depth resolved defect spectroscopy and porosity studies especially for thin films.

  7. Directional Dependence for Dark Matter Annihilation in Extreme Astrophysical Environments

    Science.gov (United States)

    Valadie, O. Grahm; Tinsley, Todd

    2017-01-01

    This research explores the directional dependence that extreme magnetic fields have on the annihilation of dark matter into electron-positron pairs. We take the neutralino of the Minimal Supersymmetric Standard Model (MSSM) as our dark matter candidate and assume magnetic field strengths on the order of the critical field (Bc ~ 10^13 G). This is characteristic of extreme astrophysical environments in which dark matter may accumulate. We will present the results for the annihilation cross section at varying incoming particle direction. In addition, we will present how these results differ with neutralino mass and energy, as well as with the magnetic field strength. Our goal is to demonstrate the ways that the direction of the magnetic field affects the states of the final electron and positron. This work is supported by NASA/Arkansas Space Grant Consortium and the Hendrix Odyssey Program.

  8. Rapid thermal co-annihilation through bound states

    CERN Document Server

    Kim, Seyong

    2016-01-01

    The co-annihilation rate of heavy particles close to thermal equilibrium, which plays a role in many classic dark matter scenarios, can be "simulated" in QCD by considering the pair annihilation rate of a heavy quark and antiquark at a temperature of a few hundred MeV. We show that the so-called Sommerfeld factors, parameterizing the rate, can be defined and measured non-perturbatively within the NRQCD framework. Lattice measurements indicate a modest suppression in the octet channel, in reasonable agreement with perturbation theory, and a large enhancement in the singlet channel, much above the perturbative prediction. We suggest that the additional enhancement originates from bound state formation and subsequent decay, omitted in previous estimates of thermal Sommerfeld factors, which were based on Boltzmann equations governing single-particle phase space distributions.

  9. Neutrino pair annihilation near accreting, stellar-mass black holes

    CERN Document Server

    Birkl, R; Janka, H T; Müller, E

    2006-01-01

    We investigate the energy-momentum deposition due to neutrino-antineutrino annihilation in the vicinity of axisymmetric, accreting black holes (BHs) by numerically ray-tracing neutrino trajectories in a Kerr space-time. Hyperaccreting stellar-mass BHs are widely considered as energy sources that can drive ultrarelativistic outflows with the potential to produce gamma-ray bursts. In contrast to earlier works, we provide an extensive and detailed parameter study of the influence of general relativistic (GR) effects and of different neutrinosphere geometries. These include idealized thin disks, tori, and spheres, or are constructed as non-selfgravitating equilibrium matter distributions for varied BH rotation. Considering isothermal neutrinospheres with the same temperature and surface area, we confirm previous results that compared to Newtonian calculations, GR effects increase the annihilation rate measured by an observer at infinity by a factor of 2 when the neutrinosphere is a disk. However, in case of a tor...

  10. On thermal corrections to near-threshold annihilation

    Science.gov (United States)

    Kim, Seyong; Laine, M.

    2017-01-01

    We consider non-relativistic ``dark'' particles interacting through gauge boson exchange. At finite temperature, gauge exchange is modified in many ways: virtual corrections lead to Debye screening; real corrections amount to frequent scatterings of the heavy particles on light plasma constituents; mixing angles change. In a certain temperature and energy range, these effects are of order unity. Taking them into account in a resummed form, we estimate the near-threshold spectrum of kinetically equilibrated annihilating TeV scale particles. Weakly bound states are shown to "melt" below freeze-out, whereas with attractive strong interactions, relevant e.g. for gluinos, bound states boost the annihilation rate by a factor 4...80 with respect to the Sommerfeld estimate, thereby perhaps helping to avoid overclosure of the universe. Modestly non-degenerate dark sector masses and a way to combine the contributions of channels with different gauge and spin structures are also discussed.

  11. Positron annihilation studies of some charge transfer molecular complexes

    CERN Document Server

    El-Sayed, A; Boraei, A A A

    2000-01-01

    Positron annihilation lifetimes were measured for some solid charge transfer (CT) molecular complexes of quinoline compounds (2,6-dimethylquinoline, 6-methoxyquinoline, quinoline, 6-methylquinoline, 3-bromoquinoline and 2-chloro-4-methylquinoline) as electron donor and picric acid as an electron acceptor. The infrared spectra (IR) of the solid complexes clearly indicated the formation of the hydrogen-bonding CT-complexes. The annihilation spectra were analyzed into two lifetime components using PATFIT program. The values of the average and bulk lifetimes divide the complexes into two groups according to the non-bonding ionization potential of the donor (electron donating power) and the molecular weight of the complexes. Also, it is found that the ionization potential of the donors and molecular weight of the complexes have a conspicuous effect on the average and bulk lifetime values. The bulk lifetime values of the complexes are consistent with the formation of stable hydrogen-bonding CT-complexes as inferred...

  12. Dark matter annihilation bound from the diffuse gamma ray flux

    Energy Technology Data Exchange (ETDEWEB)

    Kachelriess, M.; /Norwegian U. Sci. Tech.; Serpico, P.D.; /Fermilab

    2007-07-01

    An upper limit on the total annihilation rate of dark matter (DM) has been recently derived from the observed atmospheric neutrino background. It is a very conservative upper bound based on the sole hypothesis that the DM annihilation products are the least detectable final states in the Standard Model (SM), neutrinos. Any other decay channel into SM particles would lead to stronger constraints. We show that comparable bounds are obtained for DM masses around the TeV scale by observations of the diffuse gamma ray flux by EGRET, because electroweak bremsstrahlung leads to non-negligible electromagnetic branching ratios, even if DM particles only couple to neutrinos at tree level. A better mapping and the partial resolution of the diffuse gamma-ray background into astrophysical sources by the GLAST satellite will improve this bound in the near future.

  13. Antiprotons from dark matter annihilation in the Galaxy: astrophysical uncertainties

    CERN Document Server

    Evoli, Carmelo; Grasso, Dario; Maccione, Luca; Ullio, Piero

    2011-01-01

    The latest years have seen steady progress in WIMP dark matter (DM) searches, with hints of possible signals suggested by both direct and indirect detection experiments. Antiprotons can play a key role in validating those interpretations since they are copiously produced by WIMP annihilations in the Galactic halo, and the secondary antiproton background produced by Cosmic Ray (CR) interactions is predicted with fair accuracy and matches the observed spectrum very well. Using the publicly available numerical DRAGON code, we reconsider antiprotons as a tool to constrain DM models, discussing its power and limitations. We provide updated constraints on a wide class of annihilating DM models by comparing our predictions against the most up-to-date p̄ measurements, taking also into account the latest spectral information on the p, He and other CR nuclei fluxes. Doing that, we probe carefully the uncertainties associated with both secondary and DM-originated antiprotons, by using a variety of distinctively different as...

  14. Positron annihilation lifetime spectroscopy source correction determination: A simulation study

    Science.gov (United States)

    Kanda, Gurmeet S.; Keeble, David J.

    2016-02-01

    Positron annihilation lifetime spectroscopy (PALS) can provide sensitive detection and identification of vacancy-related point defects in materials. These measurements are normally performed using a positron source supported, and enclosed by, a thin foil. Annihilation events from this source arrangement must be quantified and are normally subtracted from the spectrum before analysis of the material lifetime components proceeds. Here simulated PALS spectra reproducing source correction evaluation experiments have been systematically fitted and analysed using the packages PALSfit and MELT. Simulations were performed assuming a single lifetime material, and for a material with two lifetime components. Source correction terms representing a directly deposited source and various foil supported sources were added. It is shown that in principle these source terms can be extracted from suitably designed experiments, but that fitting a number of independent, nominally identical, spectra is recommended.
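
    A minimal sketch of how synthetic lifetime spectra of this kind can be generated, as a mixture of exponential decay components sampled event by event; the instrument resolution function and background are omitted, and the component intensities and lifetimes below are illustrative rather than those used in the study:

    import random

    def simulate_pals_spectrum(n_events=200000, bin_ps=25, n_bins=400, seed=7):
        """Draw annihilation times from a mixture of exponential lifetime
        components (intensity, lifetime in ps) and histogram them."""
        # Illustrative values: two material components plus a 'source' component.
        components = [(0.70, 180.0), (0.20, 380.0), (0.10, 450.0)]
        random.seed(seed)
        intensities = [c[0] for c in components]
        lifetimes = [c[1] for c in components]
        spectrum = [0] * n_bins
        for _ in range(n_events):
            tau = random.choices(lifetimes, weights=intensities)[0]
            t = random.expovariate(1.0 / tau)       # annihilation time in ps
            b = int(t // bin_ps)
            if b < n_bins:
                spectrum[b] += 1
        return spectrum

    spec = simulate_pals_spectrum()
    print(spec[:5])   # counts in the first few time bins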

  15. Positron annihilation Doppler broadening study of Xe-implanted aluminum

    Energy Technology Data Exchange (ETDEWEB)

    Yu, R.S., E-mail: yursh@ihep.ac.cn [Key Laboratory of Nuclear Analysis Techniques, Institute of High Energy Physics, Chinese Academy of Sciences, No. 19 Yuquan Lu, Beijing 100049 (China); Maekawa, M.; Kawasuso, A. [Japan Atomic Energy Agency, Advanced Science Research Center, Watanuki 1233, Takasaki, Gunma 370-1292 (Japan); Wang, B.Y.; Wei, L. [Key Laboratory of Nuclear Analysis Techniques, Institute of High Energy Physics, Chinese Academy of Sciences, No. 19 Yuquan Lu, Beijing 100049 (China)

    2013-10-01

    Positron annihilation Doppler broadening measurements were conducted to characterize information of defects in 380 keV Xe{sup +}-implanted aluminum upon thermal annealing at temperatures ranging from 100 to 600 °C. The results suggest a broad distribution in the depth of vacancy-type defects in all the as-implanted samples. Meanwhile, with an increase in implantation dose the defect-rich region shifts toward the sample surface. It was found that increasing the annealing temperature triggers surface-directed migration and coalescence of vacancy and Xe{sub n}V{sub m} clusters in samples with implantation doses of 1E15 and 1E16 Xe{sup +}cm{sup −2}. In the sample implanted with a high dose of 1E17 Xe{sup +}cm{sup −2}, positron annihilation revealed a decomposition and even elimination of such defects under post-implantation annealing treatment.

  16. Dark matter annihilation radiation in hydrodynamic simulations of Milky Way haloes

    Science.gov (United States)

    Schaller, Matthieu; Frenk, Carlos S.; Theuns, Tom; Calore, Francesca; Bertone, Gianfranco; Bozorgnia, Nassim; Crain, Robert A.; Fattahi, Azadeh; Navarro, Julio F.; Sawala, Till; Schaye, Joop

    2016-02-01

    We obtain predictions for the properties of cold dark matter annihilation radiation using high-resolution hydrodynamic zoom-in cosmological simulations of Milky Way-like galaxies (APOSTLE project) carried out as part of the `Evolution and Assembly of GaLaxies and their Environments' (EAGLE) programme. Galactic haloes in the simulation have significantly different properties from those assumed in the `standard halo model' often used in dark matter detection studies. The formation of the galaxy causes a contraction of the dark matter halo, whose density profile develops a steeper slope than the Navarro-Frenk-White (NFW) profile between r ≈ 1.5 kpc and r ≈ 10 kpc. At smaller radii, r ≲ 1.5 kpc, the haloes develop a flatter than NFW slope. This unexpected feature may be specific to our particular choice of subgrid physics model but nevertheless the dark matter density profiles agree within 30 per cent as the mass resolution is increased by a factor 150. The inner regions of the haloes are almost perfectly spherical (axis ratios b/a > 0.97 within r = 1 kpc) and there is no offset larger than 45 pc between the centre of the stellar distribution and the centre of the dark halo. The morphology of the predicted dark matter annihilation radiation signal is in broad agreement with γ-ray observations at large Galactic latitudes (b ≳ 3°). At smaller angles, the inferred signal in one of our four galaxies is similar to that which is observed but it is significantly weaker in the other three.
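
    Because the annihilation emissivity scales as the square of the dark matter density, the effect of an inner slope steeper than NFW can be illustrated by comparing ρ² for the standard NFW profile with a generalised profile of inner slope −γ; the scale radius, normalisation and γ below are illustrative values, not those of the simulated haloes:

    def rho_nfw(r_kpc, rho_s=1.0, r_s=20.0):
        """Navarro-Frenk-White profile: rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
        x = r_kpc / r_s
        return rho_s / (x * (1.0 + x) ** 2)

    def rho_steepened(r_kpc, rho_s=1.0, r_s=20.0, gamma=1.3):
        """Generalised profile with a steeper inner slope -gamma (illustrative)."""
        x = r_kpc / r_s
        return rho_s / (x ** gamma * (1.0 + x) ** (3.0 - gamma))

    for r in (1.5, 5.0, 10.0):
        boost = (rho_steepened(r) / rho_nfw(r)) ** 2
        print(f"r = {r:5.1f} kpc: annihilation emissivity ratio (rho^2) = {boost:.2f}")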

  17. On Sommerfeld enhancement of Dark Matter Annihilation

    CERN Document Server

    Hannestad, Steen

    2010-01-01

    In the last few years there has been some interest in WIMP Dark Matter models featuring a velocity dependent cross section through the Sommerfeld enhancement mechanism. The idea is to have light bosons mediate a force between the WIMPs, which gives rise to a Yukawa potential. In the first part of this article, we analyse the Sommerfeld enhancement in detail. We find analytic expressions for the boost factor for three different model potentials, Coulomb, the spherical well and the spherical cone well, and compare with the numerical solution in the Yukawa case. In the second part of the article, we perform a detailed computation of the Dark Matter relic density for models having Sommerfeld enhancement by solving the Boltzmann equation numerically. As an application we compare the expected distortions of the CMB blackbody spectrum to the bounds set by FIRAS.
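
    For reference, the Coulomb-limit boost factor has the standard closed form S(v) = (πα/v)/(1 − e^(−πα/v)); a minimal sketch with an illustrative coupling follows (conventions differ by factors of two depending on whether v denotes the single-particle or the relative velocity):

    import math

    def sommerfeld_coulomb(v, alpha=0.01):
        """Coulomb-limit Sommerfeld factor S = (pi*alpha/v) / (1 - exp(-pi*alpha/v)),
        with v the velocity in units of c; alpha is an illustrative coupling."""
        x = math.pi * alpha / v
        return x / (1.0 - math.exp(-x))

    for v in (1e-1, 1e-2, 1e-3, 1e-4):
        print(f"v = {v:g} c  ->  S = {sommerfeld_coulomb(v):.1f}")   # S grows roughly as 1/v at small v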

  18. Precision Measurements in Electron-Positron Annihilation: Theory and Experiment

    CERN Document Server

    Chetyrkin, Konstantin

    2016-01-01

    Theory results on precision measurements in electron-positron annihilation at low and high energies are collected. These cover pure QCD calculations as well as mixed electroweak and QCD results, involving light and heavy quarks. The impact of QCD corrections on the $W$-boson mass is discussed and, last but not least, the status and the perspectives for the Higgs boson decay rate into $b\bar b$, $c\bar c$ and into two gluons.

  19. Neutralino annihilation into massive quarks with supersymmetric QCD corrections

    Science.gov (United States)

    Herrmann, Björn; Klasen, Michael; Kovařík, Karol

    2009-03-01

    We compute the full O(αs) supersymmetric (SUSY)-QCD corrections for neutralino annihilation into massive quarks through gauge or Higgs bosons and squarks in the minimal supersymmetric standard model, including the known resummation of logarithmically enhanced terms. The numerical impact of the corrections on the extraction of SUSY mass parameters from cosmological data is analyzed for gravity-mediated SUSY-breaking scenarios and shown to be sizable, so that these corrections must be included in common analysis tools.

  20. Neutralino Annihilation into Massive Quarks with SUSY-QCD Corrections

    CERN Document Server

    Herrmann, Björn; Kovarik, Karol

    2009-01-01

    We compute the full O(alpha_s) supersymmetric (SUSY) QCD corrections for neutralino annihilation into massive quarks through gauge or Higgs bosons and squarks in the Minimal Supersymmetric Standard Model (MSSM), including the known resummation of logarithmically enhanced terms. The numerical impact of the corrections on the extraction of SUSY mass parameters from cosmological data is analyzed for gravity-mediated SUSY breaking scenarios and shown to be sizable, so that these corrections must be included in common analysis tools.

  1. Antiproton-neon annihilation at 57 MeV/c

    CERN Document Server

    Bianconi, A; Bussa, M P; Lodi-Rizzini, E; Venturelli, L; Zenoni, A; Pontecorvo, G B; Guaraldo, C; Balestra, F; Busso, L; Colantoni, M L; Ferrero, A; Ferrero, L; Grasso, A; Maggiora, A; Maggiora, M G; Piragino, G; Tosello, F

    2000-01-01

    The p̄Ne annihilation cross section is measured for the first time in the momentum interval 53-63 MeV/c. About 9000 pictures collected by the Streamer Chamber Collaboration (PS179) at LEAR-CERN have been scanned. Four events are found, corresponding to σ_ann = 2210 ± 1105 mb. The result is compared to the set of measurements presently available in the region of low p̄ momentum. (18 refs).
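
    The quoted uncertainty is simply the Poisson counting error on four events, Δσ/σ = 1/√N; a quick check using the central value from the abstract:

    import math

    n_events = 4
    sigma_ann = 2210.0                              # mb, central value quoted above
    sigma_err = sigma_ann / math.sqrt(n_events)     # Poisson: relative error 1/sqrt(N)
    print(f"sigma_ann = {sigma_ann:.0f} +/- {sigma_err:.0f} mb")   # -> 2210 +/- 1105 mb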

  2. The three-jet rate in e+e- annihilation

    CERN Document Server

    Lovett-Turner, C N

    1994-01-01

    Progress has been made on the calculation of R_3, the three-jet rate in e+e- annihilation, in the k_⊥ (Durham) scheme. Using the coherent branching formalism, an explicit expression for R_3 is calculated, in which leading and next-to-leading large logarithms (LL and NLL) are resummed to all orders in QCD perturbation theory. In addition to exponentials, an error function is involved.

  3. Positron annihilation studies on Amberlite XAD-4 adsorbed with nitrobenzene

    Science.gov (United States)

    Sudarshan, K.; Pujari, P. K.; Goswami, A.

    2006-11-01

    XAD-4 resin was loaded with various amounts of nitrobenzene which is known to be a strong chemical quencher of positronium. The positron annihilation lifetime studies showed a continuous decrease in the o-Ps intensity with increasing nitrobenzene loading. No appreciable change in the o-Ps lifetime was observed. This suggests that the nitrobenzene acts as inhibitor rather than quencher in the current case. Possible reasons for the manifestations of such powerful quencher as an inhibitor are discussed.

  4. Generating X-ray lines from annihilating dark matter

    CERN Document Server

    Dudas, Emilian; Mambrini, Yann

    2014-01-01

    We propose different scenarios where keV-scale dark matter annihilates to produce a monochromatic signal. The process is generated through the exchange of a light scalar of mass of order 300 keV - 50 MeV coupling to photons through loops or higher dimensional operators. For natural values of the couplings and scales, the model can generate a gamma-ray line which can fit the recently identified 3.5 keV X-ray line.

  5. Saturation of low-energy antiproton annihilation on nuclei

    Science.gov (United States)

    Gal, A.; Friedman, E.; Batty, C. J.

    2000-10-01

    Recent measurements of very low-energy (p_L [...]) antiproton annihilation parallel the recent prediction, for E < 0, that the level widths of p̄ atoms saturate and, hence, that p̄ deeply bound atomic states are relatively narrow. Antiproton annihilation cross sections are calculated at p_L = 57 MeV/c across the periodic table, and their dependence on Z and A is classified and discussed with respect to the Coulomb focussing effect at very low energies.

  6. Experimental Probability in Elementary School

    Science.gov (United States)

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  7. The pleasures of probability

    CERN Document Server

    Isaac, Richard

    1995-01-01

    The ideas of probability are all around us. Lotteries, casino gambling, the almost non-stop polling which seems to mold public policy more and more: these are a few of the areas where principles of probability impinge in a direct way on the lives and fortunes of the general public. At a more removed level there is modern science which uses probability and its offshoots like statistics and the theory of random processes to build mathematical descriptions of the real world. In fact, twentieth-century physics, in embracing quantum mechanics, has a world view that is at its core probabilistic in nature, contrary to the deterministic one of classical physics. In addition to all this muscular evidence of the importance of probability ideas it should also be said that probability can be lots of fun. It is a subject where you can start thinking about amusing, interesting, and often difficult problems with very little mathematical background. In this book, I wanted to introduce a reader with at least a fairl...

  8. Defects in Si-Rich SiO2 Films Prepared by Radio-Frequency Magnetron Co-sputtering Using Variable Energy Positron Annihilation Spectroscopy

    Institute of Scientific and Technical Information of China (English)

    HAO Xiao-Peng; ZHOU Chun-Lan; WANG Bao-Yi; WEI Long

    2009-01-01

    Si-rich SiO2 films prepared by the rf magnetron co-sputtering method are studied by slow positron beams. The negatively charged point defects (probably Pb centres or peroxy radicals) at the silicon nanocluster (nc-Si)/SiO2 interface are observed by Doppler broadening spectra. Coincidence Doppler-broadening spectra show that positrons have a higher annihilation probability with core electrons near oxygen atoms than near silicon atoms. The formation of N-related bonds may be the reason for the prevention of the migration reaction of Si and O atoms; hence nc-Si formation is inhibited by annealing in nitrogen compared to annealing in vacuum.

  9. Dynamical effects of annihilation in pair-dominated winds

    Science.gov (United States)

    Becker, Peter A.; Begelman, Mitchell C.

    1990-01-01

    The steady, spherically symmetric flow of an ideal fluid dominated by photons and ultrarelativistic electron-positron pairs is analyzed. A new wind equation and a set of critical point conditions are obtained which describe the relativistic flow of an annihilation gas in which the flow velocity exceeds the diffusion velocity of the photons. Numerical results are reported which suggest the possible existence of trapped, pure-pair winds driven by the combined pressure of the pairs and the photons. Most of the annihilation occurs below the critical radius in trapped flows, and a substantial fraction of the total energy of the injected pairs is converted into kinetic energy and radiation. Accurate numerical solutions for the flow velocity and the positron loss rate in optically thin, Newtonian winds are obtained, and a useful approximate analytic relation between the positron loss rate and the flow velocity is derived which suggests that a large number of pairs may survive the annihilation region, ultimately escaping the potential well.

  10. Rapid thermal co-annihilation through bound states in QCD

    Science.gov (United States)

    Kim, Seyong; Laine, M.

    2016-07-01

    The co-annihilation rate of heavy particles close to thermal equilibrium, which plays a role in many classic dark matter scenarios, can be "simulated" in QCD by considering the pair annihilation rate of a heavy quark and antiquark at a temperature of a few hundred MeV. We show that the so-called Sommerfeld factors, parameterizing the rate, can be defined and measured non-perturbatively within the NRQCD framework. Lattice measurements indicate a modest suppression in the octet channel, in reasonable agreement with perturbation theory, and a large enhancement in the singlet channel, much above the perturbative prediction. The additional enhancement is suggested to originate from bound state formation and subsequent decay. Making use of a Green's function based method to incorporate thermal corrections in perturbative co-annihilation rate computations, we show that qualitative agreement with lattice data can be found once thermally broadened bound states are accounted for. We suggest that our formalism may also be applicable to specific dark matter models which have complicated bound state structures.

  11. Dark matter annihilation via Higgs and gamma-ray channels

    Science.gov (United States)

    Chan, Man Ho

    2016-09-01

    Recent studies show that the GeV gamma-ray excess signal from the Milky Way center can be best explained by ~40 GeV dark matter annihilating via the bb̄ channel. However, the recent observations of the nearby Milky Way dwarf spheroidal satellite galaxies by Fermi-LAT and the radio observations of the Milky Way center and the M31 galaxy tend to rule out this proposal. In this article, we discuss the possibility of the dark matter interpretation of the GeV gamma-ray excess by proposing 130 GeV dark matter annihilating via both Higgs and gamma-ray channels. Recent analyses show that dark matter annihilating via the Higgs channel can satisfactorily explain the Milky Way GeV gamma-ray excess observed. We show that this model can satisfy the upper limits of the gamma-ray constraint of the Milky Way dwarf spheroidal satellite galaxies and the constraint from the radio observations of the M31 galaxy.

  12. CALET's sensitivity to Dark Matter annihilation in the galactic halo

    Science.gov (United States)

    Motz, H.; Asaoka, Y.; Torii, S.; Bhattacharyya, S.

    2015-12-01

    CALET (Calorimetric Electron Telescope), installed on the ISS in August 2015, directly measures the electron+positron cosmic-ray flux up to 20 TeV. With its proton rejection capability of 1:10^5 and an aperture of 1200 cm^2 sr, it will provide good statistics even well above one TeV, while also featuring an energy resolution of 2%, which allows it to detect fine structures in the spectrum. Such structures may originate from Dark Matter annihilation or decay, making indirect Dark Matter search one of CALET's main science objectives among others such as identification of signatures from nearby supernova remnants, study of the heavy nuclei spectra and gamma astronomy. The latest results from AMS-02 on positron fraction and total electron+positron flux can be fitted with a parametrization including a single pulsar as an extra power law source with exponential cut-off, which emits an equal amount of electrons and positrons. This single pulsar scenario for the positron excess is extrapolated into the TeV region and the expected CALET data for this case are simulated. Based on this prediction for CALET data, the sensitivity of CALET to Dark Matter annihilation in the galactic halo has been calculated. It is shown that CALET could significantly improve the limits compared to current data, especially for those Dark Matter candidates that feature a large fraction of annihilation directly into e+ + e-, such as the LKP (Lightest Kaluza-Klein particle).

  13. Probabilities from Envariance

    CERN Document Server

    Zurek, W H

    2004-01-01

    I show how probabilities arise in quantum physics by exploring implications of environment-assisted invariance, or envariance, a recently discovered symmetry exhibited by entangled quantum systems. Envariance of perfectly entangled states can be used to rigorously justify complete ignorance of the observer about the outcome of any measurement on either of the members of the entangled pair. Envariance leads to Born's rule, $p_k \propto |\psi_k|^2$. Probabilities derived in this manner are an objective reflection of the underlying state of the system -- they reflect experimentally verifiable symmetries, and not just a subjective ``state of knowledge'' of the observer. The envariance-based approach is compared with and found superior to the key pre-quantum definitions of probability, including the standard definition based on the `principle of indifference' due to Laplace, and the relative frequency approach advocated by von Mises. Implications of envariance for the interpretation of quantu...

  14. Quantum beats in the 3{gamma} annihilation decay of Positronium observed by perturbed angular distribution

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Eugeniu [National Institute for Nuclear Physics and Engineering - Horia Hulubei, Bucharest - Magurele, CP MG 06, Atomistilor Street 407 (Romania); Center for Advanced Studies in Physics of the Roumanian Academy, Casa Academiei Romane, Calea 13 Septembrie No: 13, Bucharest (Romania); Vata, Ion [National Institute for Nuclear Physics and Engineering - Horia Hulubei, Bucharest - Magurele, CP MG 06, Atomistilor Street 407 (Romania)], E-mail: vata@ifin.nipne.ro; Dudu, Dorin; Rusen, Ion; Stefan, Nitisor [National Institute for Nuclear Physics and Engineering - Horia Hulubei, Bucharest - Magurele, CP MG 06, Atomistilor Street 407 (Romania)

    2008-10-31

    We have applied the conventional Time Differential Perturbed Angular Correlation (TDPAC) method to observe the anisotropy oscillations in the 3{gamma} annihilation decay of polarized Positronium in a weak magnetic field. The effect, as predicted theoretically and experimentally demonstrated by Barishevsky et al. [V.G. Barishevsky, O.N. Metelitsa, V.V. Tikhomirov, Oscillations of the positronium decay {gamma}-quantum angular distribution in a magnetic field, J. Phys. B: At. Mol. Opt. Phys. 22 (1989) 2835], is induced by the coherent admixture of the m = 0 states of ortho-Positronium (o-Ps) and para-Positronium (p-Ps) in interaction with the magnetic field. The following experimental characteristics are to be considered: (i) the oscillation frequency corresponds to the difference in energy of the Ps atom levels in the magnetic field and is proportional to H{sup 2}; (ii) in a fixed geometry the modulation depth (oscillation amplitude) depends on the mean positron polarization; (iii) privileged angles of the polarization vector, magnetic field and detectors are required for optimizing the observed oscillation amplitude. The normalized difference spectrum functions (R(t)) obtained from time spectra measured in vacuum and in different gaseous atmospheres (Ar, H{sub 2}, N{sub 2}) have a constant oscillation amplitude, and we conclude that the Ps atoms are not fully thermalized over a time interval of about 400 ns. The R(t) functions obtained for o-Ps annihilation decays, in dry air or an Ar-O mixture, have a time-dependent oscillation amplitude due, probably, to the paramagnetism of the oxygen molecules.
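
    As a rough illustration of the quadratic field dependence noted in (i), the standard textbook treatment of positronium Zeeman mixing gives a splitting Δν = (Δν_hfs/2)(√(1 + x²) − 1) with x = 4 μ_B B / (h Δν_hfs), which grows as B² for small x; this formula and the field values below are quoted as a generic illustration, not from this paper:

    import math

    # Standard constants for the sketch (values rounded).
    MU_B_GHZ_PER_T = 13.996          # Bohr magneton divided by h, in GHz/T
    HFS_GHZ = 203.4                  # o-Ps / p-Ps hyperfine splitting divided by h

    def beat_frequency_ghz(b_tesla):
        """Splitting between the perturbed m=0 and the m=+-1 triplet states:
        delta_nu = (HFS/2) * (sqrt(1 + x^2) - 1), with x = 4*mu_B*B/HFS.
        For small x this grows as B^2, consistent with point (i) above."""
        x = 4.0 * MU_B_GHZ_PER_T * b_tesla / HFS_GHZ
        return 0.5 * HFS_GHZ * (math.sqrt(1.0 + x * x) - 1.0)

    for b in (0.05, 0.10, 0.20):
        print(f"B = {b:.2f} T  ->  beat frequency ~ {1e3 * beat_frequency_ghz(b):.1f} MHz")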

  15. Introduction to imprecise probabilities

    CERN Document Server

    Augustin, Thomas; de Cooman, Gert; Troffaes, Matthias C M

    2014-01-01

    In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, includin

  16. Negative Probabilities and Contextuality

    CERN Document Server

    de Barros, J Acacio; Oas, Gary

    2015-01-01

    There has been a growing interest, both in physics and psychology, in understanding contextuality in experimentally observed quantities. Different approaches have been proposed to deal with contextual systems, and a promising one is contextuality-by-default, put forth by Dzhafarov and Kujala. The goal of this paper is to present a tutorial on a different approach: negative probabilities. We do so by presenting the overall theory of negative probabilities in a way that is consistent with contextuality-by-default and by examining with this theory some simple examples where contextuality appears, both in physics and psychology.

  17. Classic Problems of Probability

    CERN Document Server

    Gorroochurn, Prakash

    2012-01-01

    "A great book, one that I will certainly add to my personal library."—Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexin

  18. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2010-01-01

    This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications...
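
    A concrete special case of the gradient property (not specific to this paper): for the multinomial logit model, the log-sum-exp of the utilities plays the role of the generating function, and its gradient returns the logit choice probabilities:

    import math

    def logit_choice_probabilities(utilities):
        """Multinomial logit special case: the gradient of log-sum-exp(u)
        with respect to u_i is exp(u_i) / sum_j exp(u_j)."""
        m = max(utilities)                        # subtract the max for numerical stability
        exps = [math.exp(u - m) for u in utilities]
        total = sum(exps)
        return [e / total for e in exps]

    u = [1.0, 0.5, -0.2]
    probs = logit_choice_probabilities(u)
    print(probs, sum(probs))   # the probabilities sum to one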

  19. Indirect Probes of Dark Matter and Globular Cluster Properties From Dark Matter Annihilation within the Coolest White Dwarfs

    CERN Document Server

    Hurst, Travis J; Natarajan, Aravind; Badenes, Carles

    2014-01-01

    White Dwarfs (WD) capture Dark Matter (DM) as they orbit within their host halos. These captured particles may subsequently annihilate, heating the stellar core and preventing the WD from cooling. The potential wells of WDs are considerably deeper and core temperatures significantly cooler than those of main sequence stars. Consequently, DM evaporation is less important in WDs, and DM with masses M_χ ≳ 100 keV and annihilation cross-sections orders of magnitude below the canonical thermal cross-section (⟨σv⟩ ≳ 10^-46 cm^3/s) can significantly alter WD cooling in particular astrophysical environments. We consider WDs in globular clusters (GCs) and dwarf galaxies. If the parameters of the DM particle are known, then the temperature of the coolest WD in a GC can be used to constrain the DM density of the cluster's halo (potentially even ruling out the presence of a halo if the inferred density is of order the ambient Galactic density). Recently several direct detection experiments ...

  20. Search for a Dark Matter Annihilation Signal from the Galactic Center Halo with H.E.S.S.

    Science.gov (United States)

    Abramowski, A.; Acero, F.; Aharonian, F.; Akhperjanian, A. G.; Anton, G.; Barnacka, A.; Barres de Almeida, U.; Bazer-Bachi, A. R.; Becherini, Y.; Becker, J.; Behera, B.; Bernlöhr, K.; Bochow, A.; Boisson, C.; Bolmont, J.; Bordas, P.; Borrel, V.; Brucker, J.; Brun, F.; Brun, P.; Bulik, T.; Büsching, I.; Carrigan, S.; Casanova, S.; Cerruti, M.; Chadwick, P. M.; Charbonnier, A.; Chaves, R. C. G.; Cheesebrough, A.; Chounet, L.-M.; Clapson, A. C.; Coignet, G.; Conrad, J.; Dalton, M.; Daniel, M. K.; Davids, I. D.; Degrange, B.; Deil, C.; Dickinson, H. J.; Djannati-Ataï, A.; Domainko, W.; Drury, L. O.'C.; Dubois, F.; Dubus, G.; Dyks, J.; Dyrda, M.; Egberts, K.; Eger, P.; Espigat, P.; Fallon, L.; Farnier, C.; Fegan, S.; Feinstein, F.; Fernandes, M. V.; Fiasson, A.; Fontaine, G.; Förster, A.; Füßling, M.; Gallant, Y. A.; Gast, H.; Gérard, L.; Gerbig, D.; Giebels, B.; Glicenstein, J. F.; Glück, B.; Goret, P.; Göring, D.; Hague, J. D.; Hampf, D.; Hauser, M.; Heinz, S.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hinton, J. A.; Hoffmann, A.; Hofmann, W.; Hofverberg, P.; Horns, D.; Jacholkowska, A.; de Jager, O. C.; Jahn, C.; Jamrozy, M.; Jung, I.; Kastendieck, M. A.; Katarzyński, K.; Katz, U.; Kaufmann, S.; Keogh, D.; Kerschhaggl, M.; Khangulyan, D.; Khélifi, B.; Klochkov, D.; Kluźniak, W.; Kneiske, T.; Komin, Nu.; Kosack, K.; Kossakowski, R.; Laffon, H.; Lamanna, G.; Lennarz, D.; Lohse, T.; Lopatin, A.; Lu, C.-C.; Marandon, V.; Marcowith, A.; Masbou, J.; Maurin, D.; Maxted, N.; McComb, T. J. L.; Medina, M. C.; Méhault, J.; Moderski, R.; Moulin, E.; Naumann, C. L.; Naumann-Godo, M.; de Naurois, M.; Nedbal, D.; Nekrassov, D.; Nguyen, N.; Nicholas, B.; Niemiec, J.; Nolan, S. J.; Ohm, S.; Olive, J.-F.; de Oña Wilhelmi, E.; Opitz, B.; Ostrowski, M.; Panter, M.; Paz Arribas, M.; Pedaletti, G.; Pelletier, G.; Petrucci, P.-O.; Pita, S.; Pühlhofer, G.; Punch, M.; Quirrenbach, A.; Raue, M.; Rayner, S. M.; Reimer, A.; Reimer, O.; Renaud, M.; de Los Reyes, R.; Rieger, F.; Ripken, J.; Rob, L.; Rosier-Lees, S.; Rowell, G.; Rudak, B.; Rulten, C. B.; Ruppel, J.; Ryde, F.; Sahakian, V.; Santangelo, A.; Schlickeiser, R.; Schöck, F. M.; Schönwald, A.; Schwanke, U.; Schwarzburg, S.; Schwemmer, S.; Shalchi, A.; Sikora, M.; Skilton, J. L.; Sol, H.; Spengler, G.; Stawarz, Ł.; Steenkamp, R.; Stegmann, C.; Stinzing, F.; Sushch, I.; Szostek, A.; Tavernet, J.-P.; Terrier, R.; Tibolla, O.; Tluczykont, M.; Valerius, K.; van Eldik, C.; Vasileiadis, G.; Venter, C.; Vialle, J. P.; Viana, A.; Vincent, P.; Vivier, M.; Völk, H. J.; Volpe, F.; Vorobiov, S.; Vorster, M.; Wagner, S. J.; Ward, M.; Wierzcholska, A.; Zajczyk, A.; Zdziarski, A. A.; Zech, A.; Zechlin, H.-S.

    2011-04-01

    A search for a very-high-energy (VHE; ≥ 100 GeV) γ-ray signal from self-annihilating particle dark matter (DM) is performed towards a region of projected distance r ~ 45-150 pc from the Galactic center. The background-subtracted γ-ray spectrum measured with the High Energy Stereoscopic System (H.E.S.S.) γ-ray instrument in the energy range between 300 GeV and 30 TeV shows no hint of a residual γ-ray flux. Assuming conventional Navarro-Frenk-White and Einasto density profiles, limits are derived on the velocity-weighted annihilation cross section ⟨σv⟩ as a function of the DM particle mass. These are among the best reported so far for this energy range and in particular differ only little between the chosen density profile parametrizations. In particular, for the DM particle mass of ~1 TeV, values for ⟨σv⟩ above 3×10^-25 cm^3 s^-1 are excluded for the Einasto density profile.

  1. Search for a Dark Matter annihilation signal from the Galactic Center halo with H.E.S.S

    CERN Document Server

    Abramowski, A; Aharonian, F; Akhperjanian, A G; Anton, G; Barnacka, A; de Almeida, U Barres; Bazer-Bachi, A R; Becherini, Y; Becker, J; Behera, B; Bernlöhr, K; Bochow, A; Boisson, C; Bolmont, J; Bordas, P; Borrel, V; Brucker, J; Brun, F; Brun, P; Bulik, T; Büsching, I; Carrigan, S; Casanova, S; Cerruti, M; Chadwick, P M; Charbonnier, A; Chaves, R C G; Cheesebrough, A; Chounet, L -M; Clapson, A C; Coignet, G; Conrad, J; Dalton, M; Daniel, M K; Davids, I D; Degrange, B; Deil, C; Dickinson, H J; Djannati-Ataï, A; Domainko, W; Drury, L O'C; Dubois, F; Dubus, G; Dyks, J; Dyrda, M; Egberts, K; Eger, P; Espigat, P; Fallon, L; Farnier, C; Fegan, S; Feinstein, F; Fernandes, M V; Fiasson, A; Fontaine, G; Förster, A; Füßling, M; Gallant, Y A; Gast, H; Gérard, L; Gerbig, D; Giebels, B; Glicenstein, J F; Glück, B; Goret, P; Göring, D; Hague, J D; Hampf, D; Hauser, M; Heinz, S; Heinzelmann, G; Henri, G; Hermann, G; Hinton, J A; Hoffmann, A; Hofmann, W; Hofverberg, P; Horns, D; Jacholkowska, A; de Jager, O C; Jahn, C; Jamrozy, M; Jung, I; Kastendieck, M A; Katarzynski, K; Katz, U; Kaufmann, S; Keogh, D; Kerschhaggl, M; Khangulyan, D; Khélifi, B; Klochkov, D; Kluźniak, W; Kneiske, T; Komin, Nu; Kosack, K; Kossakowski, R; Laffon, H; Lamanna, G; Lennarz, D; Lohse, T; Lopatin, A; Lu, C -C; Marandon, V; Marcowith, A; Masbou, J; Maurin, D; Maxted, N; McComb, T J L; Medina, M C; Méhault, J; Moderski, R; Moulin, E; Naumann, C L; Naumann-Godo, M; de Naurois, M; Nedbal, D; Nekrassov, D; Nguyen, N; Nicholas, B; Niemiec, J; Nolan, S J; Ohm, S; Olive, J-F; Wilhelmi, E de Oña; Opitz, B; Ostrowski, M; Panter, M; Arribas, M Paz; Pedaletti, G; Pelletier, G; Petrucci, P -O; Pita, S; Pühlhofer, G; Punch, M; Quirrenbach, A; Raue, M; Rayner, S M; Reimer, A; Reimer, O; Renaud, M; Reyes, R de los; Rieger, F; Ripken, J; Rob, L; Rosier-Lees, S; Rowell, G; Rudak, B; Rulten, C B; Ruppel, J; Ryde, F; Sahakian, V; Santangelo, A; Schlickeiser, R; Schöck, F M; Schönwald, A; Schwanke, U; Schwarzburg, S; Schwemmer, S; Shalchi, A; Sikora, M; Skilton, J L; Sol, H; Spengler, G; Stawarz, Ł; Steenkamp, R; Stegmann, C; Stinzing, F; Sushch, I; Szostek, A; Tavernet, J -P; Terrier, R; Tibolla, O; Tluczykont, M; Valerius, K; van Eldik, C; Vasileiadis, G; Venter, C; Vialle, J P; Viana, A; Vincent, P; Vivier, M; Völk, H J; Volpe, F; Vorobiov, S; Vorster, M; Wagner, S J; Ward, M; Wierzcholska, A; Zajczyk, A; Zdziarski, A A; Zech, A; Zechlin, H -S

    2011-01-01

    A search for a very-high-energy (VHE; >= 100 GeV) gamma-ray signal from self-annihilating particle Dark Matter (DM) is performed towards a region of projected distance r ~ 45-150 pc from the Galactic Center. The background-subtracted gamma-ray spectrum measured with the High Energy Stereoscopic System (H.E.S.S.) gamma-ray instrument in the energy range between 300 GeV and 30 TeV shows no hint of a residual gamma-ray flux. Assuming conventional Navarro-Frenk-White (NFW) and Einasto density profiles, limits are derived on the velocity-weighted annihilation cross section <sigma v> as a function of the DM particle mass. These are among the best reported so far for this energy range. In particular, for the DM particle mass of ~1 TeV, values for <sigma v> above 3 * 10^(-25) cm^3 s^(-1) are excluded for the Einasto density profile. The limits derived here differ much less for the chosen density profile parametrizations, as opposed to limits from gamma-ray observations of dwarf galaxies or the very center of the Milky Way, where the d...

  2. Counterexamples in probability

    CERN Document Server

    Stoyanov, Jordan M

    2013-01-01

    While most mathematical examples illustrate the truth of a statement, counterexamples demonstrate a statement's falsity. Enjoyable topics of study, counterexamples are valuable tools for teaching and learning. The definitive book on the subject in regards to probability, this third edition features the author's revisions and corrections plus a substantial new appendix.

  3. Epistemology and Probability

    CERN Document Server

    Plotnitsky, Arkady

    2010-01-01

    Offers an exploration of the relationships between epistemology and probability in the work of Niels Bohr, Werner Heisenberg, and Erwin Schrodinger; in quantum mechanics; and in modern physics. This book considers the implications of these relationships and of quantum theory for our understanding of the nature of thinking and knowledge in general

  4. Varga: On Probability.

    Science.gov (United States)

    Varga, Tamas

    This booklet resulted from a 1980 visit by the author, a Hungarian mathematics educator, to the Teachers' Center Project at Southern Illinois University at Edwardsville. Included are activities and problems that make probability concepts accessible to young children. The topics considered are: two probability games; choosing two beads; matching…

  5. On Probability Domains

    Science.gov (United States)

    Frič, Roman; Papčo, Martin

    2010-12-01

    Motivated by IF-probability theory (intuitionistic fuzzy), we study n-component probability domains in which each event represents a body of competing components and the range of a state represents a simplex S_n of n-tuples of possible rewards; the sum of the rewards is a number from [0,1]. For n=1 we get fuzzy events, for example a bold algebra, and the corresponding fuzzy probability theory can be developed within the category ID of D-posets (equivalently, effect algebras) of fuzzy sets and sequentially continuous D-homomorphisms. For n=2 we get IF-events, i.e., pairs (μ, ν) of fuzzy sets μ, ν ∈ [0,1]^X such that μ(x) + ν(x) ≤ 1 for all x ∈ X, but we order our pairs (events) coordinatewise. Hence the structure of IF-events (where (μ_1, ν_1) ≤ (μ_2, ν_2) whenever μ_1 ≤ μ_2 and ν_2 ≤ ν_1) is different and, consequently, the resulting IF-probability theory models a different principle. The category ID is cogenerated by I = [0,1] (objects of ID are subobjects of powers I^X), has nice properties, and basic probabilistic notions and constructions are categorical. For example, states are morphisms. We introduce the category S_nD cogenerated by S_n = {(x_1, x_2, ..., x_n) ∈ I^n; Σ_{i=1}^n x_i ≤ 1}, carrying the coordinatewise partial order, difference, and sequential convergence, and we show how basic probability notions can be defined within S_nD.

  6. Quark Annihilation and Lepton Formation versus Pair Production and Neutrino Oscillation: The Fourth Generation of Leptons

    Directory of Open Access Journals (Sweden)

    Zhang T. X.

    2011-04-01

    The emergence or formation of leptons from particles composed of quarks remains poorly understood. In this paper, we propose that leptons are formed by quark-antiquark annihilations. There are two types of quark-antiquark annihilations. Type-I quark-antiquark annihilation annihilates only color charges, which is an incomplete annihilation and forms structureless and colorless but electrically charged leptons such as electron, muon, and tau particles. Type-II quark-antiquark annihilation annihilates both electric and color charges, which is a complete annihilation and forms structureless, colorless, and electrically neutral leptons such as electron, muon, and tau neutrinos. Analyzing these two types of annihilations between up and down quarks and antiquarks with an excited quantum state for each of them, we predict the fourth generation of leptons, named the lambda particle and neutrino. In the reverse process of quark-antiquark annihilation, a lepton or neutrino, when it collides, can disintegrate into a quark-antiquark pair. The disintegrated quark-antiquark pair, if it is excited and/or changed in flavor during the collision, will annihilate into another type of lepton particle or neutrino. This quark-antiquark annihilation and pair production scenario provides a unique understanding of the formation of leptons, predicts the fourth generation of leptons, and explains the oscillation of neutrinos without hurting the standard model of particle physics. With this scenario, we can understand the recent OPERA measurement of a tau particle in a muon neutrino beam as well as the early measurements of muon particles in electron neutrino beams.

  7. Negative probability in the framework of combined probability

    OpenAIRE

    Burgin, Mark

    2013-01-01

    Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...

  8. Deduction and Validation of an Eulerian-Eulerian Model for Turbulent Dilute Two-Phase Flows by Means of the Phase Indicator Function---Disperse Elements* Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A statistical formalism overcoming some conceptual and practical difficulties arising in existing two-phase flow (2PHF) mathematical modelling has been applied to propose a model for dilute 2PHF turbulent flows. Phase interaction terms with a clear physical meaning enter the equations, and the formalism provides some guidelines for the avoidance of closure assumptions or the rational approximation of these terms. Continuous phase averaged continuity, momentum, turbulent kinetic energy and turbulence dissipation rate equations have been rigorously and systematically obtained in a single step. These equations display a structure similar to that for single-phase flows. It is also assumed that dispersed phase dynamics is well described by a probability density function (pdf) equation, and Eulerian continuity, momentum and fluctuating kinetic energy equations for the dispersed phase are deduced. An extension of the standard k-ε turbulence model for the continuous phase is used. A gradient transport model is adopted for the dispersed phase fluctuating fluxes of momentum and kinetic energy at the non-colliding, large-inertia limit. This model is then used to predict the behaviour of three axisymmetric turbulent jets of air laden with solid particles varying in size and concentration. Qualitative and quantitative numerical predictions compare reasonably well with the three different sets of experimental results, studying the influence of particle size, loading ratio and flow confinement velocity.

  9. Application of positron annihilation lifetime technique for {gamma}-irradiation stresses study in chalcogenide vitreous semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Shpotyuk, O.; Golovchak, R.; Kovalskiy, A. [Scientific Research Company ' ' Carat' ' , Stryjska str. 20279031 Lviv (Ukraine); Filipecki, J.; Hyla, M. [Physics Institute, Pedagogical University, Al. Armii Krajowej 13/1542201 Czestochowa (Poland)

    2002-08-01

    The influence of {gamma}-irradiation on the positron annihilation lifetime spectra in chalcogenide vitreous semiconductors of As-Ge-S system has been analysed. The correlations between lifetime data, structural features and chemical compositions of glasses have been discussed. The observed lifetime components are connected with bulk positron annihilation and positron annihilation on various native and {gamma}-induced open volume defects. It is concluded that after {gamma}-irradiation of investigated materials the {gamma}-induced microvoids based on S{sub 1}{sup -}, As{sub 2}{sup -}, and Ge{sub 3}{sup -} coordination defects play the major role in positron annihilation processes. (Abstract Copyright[2002], Wiley Periodicals, Inc.)

  10. Simulation of the annihilation emission of galactic positrons; Modelisation de l'emission d'annihilation des positrons Galactiques

    Energy Technology Data Exchange (ETDEWEB)

    Gillard, W

    2008-01-15

    Positrons annihilate in the central region of our Galaxy. This has been known since the detection of a strong emission line centered on an energy of 511 keV in the direction of the Galactic center. This gamma-ray line is emitted during the annihilation of positrons with electrons from the interstellar medium. The spectrometer SPI, onboard the INTEGRAL observatory, performed spatial and spectral analyses of the positron annihilation emission. This thesis presents a study of the Galactic positron annihilation emission based on models of the different interactions undergone by positrons in the interstellar medium. The models rely on our present knowledge of the properties of the interstellar medium in the Galactic bulge, where most of the positrons annihilate, and of the physics of positrons (production, propagation and annihilation processes). In order to obtain constraints on the positron sources and the physical characteristics of the annihilation medium, we compared the results of the models to measurements provided by the SPI spectrometer. (author)

  11. Superpositions of probability distributions.

    Science.gov (United States)

    Jizba, Petr; Kleinert, Hagen

    2008-09-01

    Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v=σ² play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much easier than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
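
    As a loose numerical illustration of the superposition idea above (not code from the paper), the sketch below smears the variance v of a zero-mean Gaussian with an inverse-gamma distribution, an assumed choice, and compares the resulting scale mixture with the heavy-tailed Student-t density that such a mixture is known to produce.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Smear the variance v with an inverse-gamma distribution (an assumed choice);
# the resulting Gaussian scale mixture is a Student-t distribution.
nu = 4.0                                   # hypothetical "degrees of freedom" parameter
n = 200_000

v = stats.invgamma(a=nu / 2, scale=nu / 2).rvs(size=n, random_state=rng)
x = rng.normal(loc=0.0, scale=np.sqrt(v))  # draw from N(0, v) with a random variance v

# Compare the empirical mixture with the Student-t(nu) density on a coarse grid.
grid = np.linspace(-6, 6, 13)
hist, edges = np.histogram(x, bins=grid, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.round(hist, 3))
print(np.round(stats.t(df=nu).pdf(centers), 3))   # should be close to hist
```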

  12. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  13. Fractal probability laws.

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2008-06-01

    We explore six classes of fractal probability laws defined on the positive half-line: Weibull, Fréchet, Lévy, hyper Pareto, hyper beta, and hyper shot noise. Each of these classes admits a unique statistical power-law structure, and is uniquely associated with a certain operation of renormalization. All six classes turn out to be one-dimensional projections of underlying Poisson processes which, in turn, are the unique fixed points of Poissonian renormalizations. The first three classes correspond to linear Poissonian renormalizations and are intimately related to extreme value theory (Weibull, Fréchet) and to the central limit theorem (Lévy). The other three classes correspond to nonlinear Poissonian renormalizations. Pareto's law--commonly perceived as the "universal fractal probability distribution"--is merely a special case of the hyper Pareto class.
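
    As a standalone toy example of the power-law structure discussed above (not code from the paper; the exponent, cutoff and sample size are arbitrary), the sketch below draws samples from a Pareto law by inverse-CDF sampling and recovers the tail exponent with a simple Hill estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw from a Pareto law P(X > x) = (x_min / x)^alpha, a simple power-law example.
alpha, x_min, n = 1.5, 1.0, 100_000        # hypothetical parameters
x = x_min * (1.0 - rng.random(n)) ** (-1.0 / alpha)   # inverse-CDF sampling

# Hill estimator of the tail exponent from the k largest observations.
k = 2_000
tail = np.sort(x)[-k:]
alpha_hat = 1.0 / np.mean(np.log(tail / tail[0]))
print(round(alpha_hat, 2))                 # should be close to alpha = 1.5
```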

  14. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  15. Superpositions of probability distributions

    Science.gov (United States)

    Jizba, Petr; Kleinert, Hagen

    2008-09-01

    Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v=σ² play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much easier than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.

  16. Paradoxes in probability theory

    CERN Document Server

    Eckhardt, William

    2013-01-01

    Paradoxes provide a vehicle for exposing misinterpretations and misapplications of accepted principles. This book discusses seven paradoxes surrounding probability theory. Some remain the focus of controversy; others have allegedly been solved, but the accepted solutions are demonstrably incorrect. Each paradox is shown to rest on one or more fallacies. Instead of the esoteric, idiosyncratic, and untested methods that have been brought to bear on these problems, the book invokes uncontroversial probability principles, acceptable both to frequentists and subjectivists. The philosophical disputation inspired by these paradoxes is shown to be misguided and unnecessary; for instance, startling claims concerning human destiny and the nature of reality are directly related to fallacious reasoning in a betting paradox, and a problem analyzed in philosophy journals is resolved by means of a computer program.

  17. Contributions to quantum probability

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Tobias

    2010-06-25

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that, in general, none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic approaches. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a

  18. Probability via expectation

    CERN Document Server

    Whittle, Peter

    1992-01-01

    This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...

  19. Early annihilation and diffuse backgrounds in models of weakly interacting massive particles in which the cross section for pair annihilation is enhanced by 1/v.

    Science.gov (United States)

    Kamionkowski, Marc; Profumo, Stefano

    2008-12-31

    Recent studies have considered modifications to the standard weakly interacting massive particle scenario in which the pair annihilation cross section (times relative velocity v) is enhanced by a factor 1/v to approximately 10^(-3) in the Galaxy, enough to explain several puzzling Galactic radiation signals. We show that in these scenarios a burst of weakly interacting massive particle annihilation occurs in the first collapsed dark-matter halos. We show that severe constraints to the annihilation cross section derive from measurements of the diffuse extragalactic radiation and from ionization and heating of the intergalactic medium.
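
    To make the 1/v scaling above concrete, here is a small, purely illustrative Python sketch (not from the paper): it averages an assumed cross section sigma*v = S0/v over a Maxwell-Boltzmann relative-velocity distribution and shows that the effective annihilation rate grows as the velocity dispersion drops, which is why the first, coldest collapsed halos would host a burst of annihilation.

```python
import numpy as np

# Illustrative only: thermally average sigma*v = S0 / v over a
# Maxwell-Boltzmann speed distribution with per-component dispersion sigma_v.
S0 = 1.0                                   # arbitrary normalization (assumption)

def mean_sigma_v(sigma_v, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    vel = rng.normal(0.0, sigma_v, size=(n, 3))      # relative velocity components
    speed = np.linalg.norm(vel, axis=1)
    return np.mean(S0 / speed)                       # <sigma v> for sigma*v = S0/v

for disp in (1e-3, 1e-4, 1e-5):            # hypothetical velocity dispersions (units of c)
    print(disp, mean_sigma_v(disp))        # the averaged rate scales roughly as 1/disp
```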

  20. Laboratory-tutorial activities for teaching probability

    Directory of Open Access Journals (Sweden)

    Roger E. Feeley

    2006-08-01

    Full Text Available We report on the development of students’ ideas of probability and probability density in a University of Maine laboratory-based general education physics course called Intuitive Quantum Physics. Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We describe a set of activities used to teach concepts of probability and probability density. Rudimentary knowledge of mechanics is needed for one activity, but otherwise the material requires no additional preparation. Extensions of the activities include relating probability density to potential energy graphs for certain “touchstone” examples. Students have difficulties learning the target concepts, such as comparing the ratio of time in a region to total time in all regions. Instead, they often focus on edge effects, pattern match to previously studied situations, reason about necessary but incomplete macroscopic elements of the system, use the gambler’s fallacy, and use expectations about ensemble results rather than expectation values to predict future events. We map the development of their thinking to provide examples of problems rather than evidence of a curriculum’s success.
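
    The "ratio of time in a region to total time" idea mentioned above can be illustrated with a short Python sketch (mine, not part of the course materials; the harmonic oscillator is an assumed touchstone example): binning the time a classical oscillator spends at each position reproduces the classical probability density 1/(pi*sqrt(A^2 - x^2)).

```python
import numpy as np

# Classical probability density from "time spent in a region / total time",
# illustrated with a harmonic oscillator x(t) = A cos(t) (assumed example).
A = 1.0
t = np.linspace(0.0, 2 * np.pi, 2_000_001)      # one full period, fine time grid
x = A * np.cos(t)

# Fraction of time spent in each position bin = estimated probability density.
bins = np.linspace(-A, A, 21)
hist, edges = np.histogram(x, bins=bins, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

analytic = 1.0 / (np.pi * np.sqrt(A**2 - centers**2))   # classical density
print(np.round(hist, 2))
print(np.round(analytic, 2))     # near agreement, except in the outermost bins
```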

  1. The Reasonable Explanation of Annihilation (fanā in Mysticism

    Directory of Open Access Journals (Sweden)

    Abulfazel Kiashemshaki

    2012-06-01

    Full Text Available Abstract: Mystics pass through stages (degrees) of mystic perfection and esoteric abodes (spiritual stations) in their spiritual journey to Allah which are difficult for non-mystics to understand. Because of this, most reflective people try to rely on their intellectual and theoretical principles to obtain an understandable explanation of mystical experiences and ecstasy. However, the success of such an explanation lies entirely in the power and strength of the above-mentioned principles. Fanā, or annihilation, is one of the mystical stages or states which non-mystics find very difficult to reasonably understand; hence various different theories have been provided for its explanation. Various scientific and philosophical achievements offer explanations of annihilation (fanā), each valuable in its place; however, effort has been made in this article to prove that the only successful explanation is one based on the principles of transcendental theosophy and theoretical mysticism. Keywords: Fanā, Mysticism, Rational explanation, Theoretical mysticism, Transcendental philosophy

  2. Entanglement Mapping VS. Quantum Conditional Probability Operator

    Science.gov (United States)

    Chruściński, Dariusz; Kossakowski, Andrzej; Matsuoka, Takashi; Ohya, Masanori

    2011-01-01

    The relation between two methods which construct the density operator on a composite system is shown. One of them is called an entanglement mapping and the other is called a quantum conditional probability operator. On the basis of this relation we discuss quantum correlations by means of some types of quantum entropy.

  3. Improving Ranking Using Quantum Probability

    CERN Document Server

    Melucci, Massimo

    2011-01-01

    The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields a higher probability of detection than ranking by classical probability for a given probability of false alarm and the same parameter estimation data. As quantum probability has provided more effective detectors than classical probability within domains other than data management, we conjecture that a system that can implement subspace-based detectors will be more effective than a system that implements set-based detectors, the effectiveness being calculated as expected recall estimated over the probability of detection and expected fallout estimated over the probability of false alarm.

  4. Quark Annihilation and Lepton Formation versus Pair Production and Neutrino Oscillation: The Fourth Generation of Leptons

    Directory of Open Access Journals (Sweden)

    Zhang T. X.

    2011-04-01

    Full Text Available The emergence or formation of leptons from particles composed of quarks remains very poorly understood. In this paper, we propose that leptons are formed by quark-antiquark annihilations. There are two types of quark-antiquark annihilations. Type-I quark-antiquark annihilation annihilates only color charges; it is an incomplete annihilation and forms structureless and colorless but electrically charged leptons such as the electron, muon, and tau particles. Type-II quark-antiquark annihilation annihilates both electric and color charges; it is a complete annihilation and forms structureless, colorless, and electrically neutral leptons such as the electron, muon, and tau neutrinos. Analyzing these two types of annihilations between up and down quarks and antiquarks, with an excited quantum state for each of them, we predict the fourth generation of leptons, named the lambda particle and the lambda neutrino. Conversely to quark-antiquark annihilation, a lepton or neutrino, when it collides, can be disintegrated into a quark-antiquark pair. The disintegrated quark-antiquark pair, if it is excited and/or changed in flavor during the collision, will annihilate into another type of lepton or neutrino. This quark-antiquark annihilation and pair production scenario provides a unique understanding of the formation of leptons, predicts the fourth generation of leptons, and explains the oscillation of neutrinos without hurting the standard model of particle physics. With this scenario, we can understand the recent OPERA measurement of a tau particle in a muon neutrino beam as well as the early measurements of muon particles in electron neutrino beams.

  5. Direct photon production in e+e- annihilation

    Science.gov (United States)

    Fernandez, E.; Ford, W. T.; Qi, N.; Read, A. L.; Smith, J. G.; Camporesi, T.; de Sangro, R.; Marini, A.; Peruzzi, I.; Piccolo, M.; Ronga, F.; Blume, H. T.; Hurst, R. B.; Sleeman, J. C.; Venuti, J. P.; Wald, H. B.; Weinstein, Roy; Band, H. R.; Gettner, M. W.; Goderre, G. P.; Meyer, O. A.; Moromisato, J. H.; Shambroom, W. D.; von Goeler, E.; Ash, W. W.; Chadwick, G. B.; Clearwater, S. H.; Coombes, R. W.; Kaye, H. S.; Lau, K. H.; Leedy, R. E.; Lynch, H. L.; Messner, R. L.; Moss, L. J.; Muller, F.; Nelson, H. N.; Ritson, D. M.; Rosenberg, L. J.; Wiser, D. E.; Zdarko, R. W.; Groom, D. E.; Lee, H. Y.; Delfino, M. C.; Heltsley, B. K.; Johnson, J. R.; Lavine, T. L.; Maruyama, T.; Prepost, R.

    1985-01-01

    Direct photon production in hadronic events from e+e- annihilation has been studied at √s =29 GeV with use of the MAC detector at the PEP storage ring. A charge asymmetry A=(-12.3+/-3.5)% is observed in the final-state jets. The cross section and the charge asymmetry are in good agreement with the predictions of the fractionally charged quark-parton model. Both the charge asymmetry and total yield have been used to determine values of quark charges. Limits have been established for anomalous sources of direct photons.

  6. Improvements in determinations using the Cu-64 annihilation gamma rays.

    Science.gov (United States)

    Tomlin, Bryan E; Zeisler, Rolf

    2009-12-01

    The method of gamma-gamma coincidence counting has been applied to the determination of Cu via the (64)Cu annihilation gamma rays. Preliminary experiments show that at least an order of magnitude reduction in (24)Na interference may be obtained by employing the 511-511 keV coincidence peak rather than the singles 511-keV peak. The effect of the sample matrix on the yield of (24)Na pair-production events was investigated by a combination of experimental measurements and Monte Carlo calculations.
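
    As a rough illustration of why a coincidence requirement suppresses an interfering singles background (generic counting-statistics arithmetic, not the authors' analysis; every rate and efficiency below is invented), compare the true-coincidence rate for back-to-back annihilation quanta with the accidental-coincidence rate of uncorrelated counts.

```python
# Generic coincidence-counting arithmetic (illustrative numbers only).
eps1, eps2 = 0.05, 0.05   # assumed 511 keV detection efficiencies of the two detectors
S = 1.0e4                 # assumed rate of annihilation-photon pair emission (s^-1)
B1, B2 = 2.0e3, 2.0e3     # assumed uncorrelated background singles rates (s^-1)
tau = 50e-9               # assumed coincidence resolving time (s)

true_coinc = eps1 * eps2 * S               # correlated back-to-back 511-511 pairs
accidental = 2 * tau * B1 * B2             # chance coincidences of uncorrelated counts

print(true_coinc, accidental, true_coinc / accidental)   # coincidence gating wins
```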

  7. Geant4 Simulation of Annihilation and excitation of Positronium

    CERN Document Server

    Gad, Andreas

    2016-01-01

    The purpose of this report is to document the work done during the summer of 2016 (18/6-26/8) as part of the CERN Summer Student Programme. The work has been done at the AEgIS (Antihydrogen Experiment: Gravity, Interferometry, Spectroscopy) collaboration under the supervision of Lillian Smestad and Michael Doser. The goal of the project was to create a Monte Carlo simulation in Geant4 of Positronium annihilation and excitation in the positron test chamber of the AEgIS experiment.

  8. e+ e- annihilation into J/psi + J/psi.

    Science.gov (United States)

    Bodwin, Geoffrey T; Lee, Jungil; Braaten, Eric

    2003-04-25

    Recent measurements by the Belle Collaboration of the exclusive production of two charmonia in e+ e- annihilation differ substantially from theoretical predictions. We suggest that a significant part of the discrepancy can be explained by the process e+ e- --> J/psi + J/psi. Because the J/psi + J/psi production process can proceed through fragmentation of two virtual photons into two c cbar pairs, its cross section may be larger than that for J/psi + eta_c by about a factor of 3.7, in spite of a suppression factor alpha^2/alpha_s^2 that is associated with the QED and QCD coupling constants.
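
    As a quick back-of-the-envelope check of the alpha^2/alpha_s^2 suppression mentioned above (my arithmetic, with an assumed value of the strong coupling at the charmonium scale, not a number taken from the paper):

```python
# Rough size of the QED/QCD coupling suppression factor alpha^2 / alpha_s^2.
alpha = 1.0 / 137.0            # fine-structure constant
alpha_s = 0.25                 # assumed strong coupling at the charmonium scale
print((alpha / alpha_s) ** 2)  # ~ 8.5e-4: small, yet photon fragmentation can compensate
```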

  9. Two-Photon Total Annihilation of Molecular Positronium

    CERN Document Server

    Pérez-Ríos, Jesús; Greene, Chris H

    2014-01-01

    The rate for complete two-photon annihilation of molecular positronium Ps$_{2}$ is reported. This decay channel involves a four-body collision among the fermions forming Ps$_{2}$, with two photons of 1.022 MeV each as the final state. The quantum electrodynamics result for the rate of this process is found to be $\Gamma_{Ps_{2} \rightarrow \gamma\gamma} = 9.0 \times 10^{-12}$ s$^{-1}$. This decay channel completes the most comprehensive decay chart for Ps$_{2}$ to date.
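
    For scale, the quoted rate can be converted into a partial lifetime with a one-line calculation (my arithmetic, not a figure from the paper):

```python
# Partial lifetime corresponding to the quoted two-photon annihilation rate of Ps2.
gamma = 9.0e-12                      # s^-1, rate quoted in the abstract
tau = 1.0 / gamma                    # partial lifetime in seconds
print(tau, tau / 3.15e7)             # ~1.1e11 s, i.e. a few thousand years
```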

  10. Signals in the Co-annihilation Region of Supersymmetry

    Science.gov (United States)

    Arnowitt, Richard; Aurisano, Adam; Dutta, Bhaskar; Kamon, Teruki; Simeon, Paul; Toback, David; Wagner, Peter; Kolev, Nikolay

    2006-04-01

    An unanswered problem in physics is the identity of the cold dark matter (CDM) in the universe. One of the leading candidates is a supersymmetric (SUSY) particle, the lightest neutralino. Recent cosmological measurements by the WMAP experiment have tightly constrained the SUSY parameter space in the mSUGRA model to the so-called "co-annihilation" region, in which the lightest supersymmetric partner of the tau lepton and the lightest neutralino are nearly degenerate in mass. We examine the prospects of using LHC detectors to measure this mass difference and present preliminary results.

  11. Search for rare quark-annihilation decays, B --> Ds(*) Phi

    CERN Document Server

    Aubert, B; Abrams, G S; Adye, T; Ahmed, M; Ahmed, S; Alam, M S; Albert, J; Aleksan, Roy; Allen, M T; Allison, J; Allmendinger, T; Altenburg, D; Andreassen, R; Andreotti, M; Angelini, C; Anulli, F; Arnaud, N; Aston, D; Azzolini, V; Baak, M; Back, J J; Baldini-Ferroli, R; Band, H R; Banerjee, Sw; Barate, R; Bard, D J; Barlow, N R; Barlow, R J; Barrett, M; Bartoldus, R; Batignani, G; Battaglia, M; Bauer, J M; Beck, T W; Behera, P K; Bellini, F; Benayoun, M; Benelli, G; Berger, N; Bernard, D; Berryhill, J W; Best, D; Bettarini, S; Bettoni, D; Bevan, A J; Bhimji, W; Bhuyan, B; Bianchi, F; Biasini, M; Biesiada, J; Blanc, F; Blaylock, G; Blinov, A E; Blinov, V E; Bloom, P C; Blount, N L; Bomben, M; Bondioli, M; Bonneaud, G R; Bosisio, L; Boutigny, D; Bowerman, D A; Boyarski, A M; Boyd, J T; Bozzi, C; Brandenburg, G; Brandt, T; Brau, J E; Breon, A B; Brose, J; Brown, C L; Brown, C M; Brown, D N; Bruinsma, M; Brunet, S; Bucci, F; Buchanan, C; Buchmüller, O L; Bugg, W; Bukin, A D; Bula, R; Bulten, H; Burchat, P R; Burke, J P; Button-Shafer, J; Buzzo, A; Bóna, M; Cahn, R N; Calabrese, R; Calcaterra, A; Calderini, G; Campagnari, C; Capra, R; Carpinelli, M; Cartaro, C; Cavallo, N; Cavoto, G; Cenci, R; Chai, X; Chaisanguanthum, K S; Chao, M; Charles, E; Charles, M J; Chauveau, J; Chavez, C A; Chen, A; Chen, C; Chen, E; Chen, J C; Chen, S; Chen, X; Cheng, B; Cheng, C H; Chia, Y M; Cibinetto, G; Clark, P J; Claus, R; Cochran, J; Coleman, J P; Contri, R; Convery, M R; Cossutti, F; Cottingham, W N; Couderc, F; Covarelli, R; Cowan, G; Cowan, R; Crawley, H B; Cremaldi, L; Cristinziani, M; Cunha, A; Curry, S; Côté, D; D'Orazio, A; Dahmes, B; Dallapiccola, C; Danielson, N; Dasu, S; Datta, M; Dauncey, P D; David, P; Davier, M; Davis, C L; Day, C T; De Groot, N; De Nardo, Gallieno; De Sangro, R; Del Buono, L; Del Re, D; Della Ricca, G; Di Lodovico, F; Di Marco, E; Diberder; Dickopp, M; Dingfelder, J C; Dittongo, S; Dong, D; Dong, L; Dorfan, J; Druzhinin, V P; Dubitzky, R S; Dubois-Felsmann, G P; Dujmic, D; Dunwoodie, W M; Dvoretskii, A; Eckhart, E A; Eckmann, R; Edgar, C L; Edwards, A J; Egede, U; Eichenbaum, A M; Eigen, G; Eisner, A M; Elmer, P; Emery, S; Ernst, J A; Eschenburg, V; Eschrich, I; Eyges, V; Fabozzi, F; Faccini, R; Fan, S; Feltresi, E; Ferrarotto, F; Ferroni, F; Field, R C; Finocchiaro, G; Flacco, C J; Flack, R L; Flächer, H U; Flood, K T; Ford, K E; Ford, W T; Forster, I J; Forti, F; Fortin, D; Foulkes, S D; Franek, B; Frey, R; Fritsch, M; Fry, J R; Fulsom, B G; Gabathuler, E; Gaidot, A; Gaillard, J R; Galeazzi, F; Gallo, F; Gamba, D; Gamet, R; Gan, K K; Ganzhur, S F; Gary, J W; Gaspero, M; Gatto, C; George, K A; Gill, M S; Giorgi, M A; Giroux, X; Gladney, L; Glanzman, T; Godang, R; Goetzen, K; Golubev, V B; Gopal, G P; Gowdy, S J; Gradl, W; Graham, M T; Grancagnolo, S; Graugès-Pous, E; Graziani, G; Green, M G; Grenier, P; Gritsan, A V; Grosdidier, G; Groysman, Y; Guo, Q H; Hadavand, H K; Hadig, T; Haire, M; Halyo, V; Hamano, K; Hamel de Monchenault, G; Hamon, O; Harrison, P F; Harrison, T J; Hart, A J; Hartfiel, B L; Hast, C; Hauke, A; Hawkes, C M; Hearty, C; Held, T; Hertzbach, S S; Heusch, C A; Hill, E J; Hirschauer, J F; Hitlin, D G; Hodgkinson, M C; Hollar, J J; Hong, T M; Honscheid, K; Hopkins, D A; Hrynóva, T; Hufnagel, D; Hulsbergen, W D; Hutchcroft, D E; Höcker, A; Igonkina, O; Innes, W R; Izen, J M; Jackson, P D; Jackson, P S; Jacobsen, R G; Jawahery, A; Jessop, C P; John, M J J; Johnson, J R; Judd, D; Kadel, R W; Kadyk, J; Kagan, H; Karyotakis, Yu; Kass, R; Kelly, M P; Kelsey, M H; 
Kerth, L T; Khan, A; Kim, H; Kim, P; Kirkby, D; Kitayama, I; Klose, V; Knecht, N S; Koch, H; Kocian, M L; Koeneke, K; Kofler, R; Kolomensky, Yu G; Kovalskyi, D; Kowalewski, R V; Kozanecki, Witold; Kravchenko, E A; Kreisel, A; Krishnamurthy, M; Kroeger, R; Kroseberg, J; Kukartsev, G; Kutter, P E; Kyberd, P; La Vaissière, C de; Lacker, H M; Lae, C K; Lafferty, G D; Lanceri, L; Lange, D J; Langenegger, U; Lankford, A J; Latham, T E; Latour, E; Lau, Y P; Lazzaro, A; Le, F; Lees, J P; Legendre, M; Leith, D W G S; Lepeltier, V; Leruste, P; Lewandowski, B; Li Gioi, L; Li, H; Li, X; Libby, J; Lista, L; Liu, R; Lo Vetere, M; LoSecco, J M; Lockman, W S; Lombardo, V; London, G W; Long, O; Lou, X C; Lu, M; Luitz, S; Lund, P; Luppi, E; Lusiani, A; Lutz, A M; Lynch, G; Lynch, H L; Lü, C; Lüth, V; MacFarlane, D B; Macri, M M; Mader, W F; Majewski, S A; Malcles, J; Mallik, U; Mancinelli, G; Mandelkern, M A; Marchiori, G; Margoni, M; Marks, J; Marsiske, H; Martínez-Vidal, F; Mattison, T S; Mayer, B; Mazur, M A; Mazzoni, M A; McKenna, J A; McMahon, T R; Meadows, B T; Mellado, B; Menges, W; Messner, R; Meyer, W T; Mihályi, A; Minamora, J S; Mir, L M; Mohanty, G B; Mohapatra, A K; Mommsen, R K; Monge, M R; Monorchio, D; Moore, T B; Morandin, M; Morgan, S E; Morganti, M; Morganti, S; Morii, M; Muheim, F; Müller, D R; Naisbit, M T; Narsky, I; Nash, J A; Nauenberg, U; Neal, H; Negrini, M; Neri, N; Nesom, G; Nicholson, H; Nikolich, M B; Nogowski, R; Nugent, I M; O'Grady, C P; Ocariz, J; Oddone, P J; Ofte, I; Olaiya, E O; Olivas, A; Olsen, J; Onuchin, A P; Orimoto, T J; Otto, S; Oyanguren, A; Ozcan, V E; Paar, H P; Pacetti, S; Palano, A; Palombo, F; Pan, Y; Panduro-Vazquez, W; Panetta, J; Panvini, R S; Paoloni, E; Paolucci, P; Pappagallo, M; Parry, R J; Passaggio, S; Patel, P M; Patrignani, C; Patteri, P; Payne, D J; Pelizaeus, M; Perazzo, A; Perl, M; Peruzzi, I M; Peters, K; Petersen, B A; Petersen, T C; Petzold, A; Piatenko, T; Piccolo, D; Piccolo, M; Piemontese, L; Pierini, M; Pioppi, M; Piredda, G; Plaszczynski, S; Playfer, S; Poireau, V; Polci, F; Pompili, A; Porter, F C; Posocco, M; Potter, C T; Prell, S; Prepost, R; Pripstein, M; Pulliam, T; Purohit, M V; Qi, N D; Rahatlou, S; Rahimi, A M; Rahmat, R; Rama, M; Ratcliff, B N; Raven, G; Reidy, J; Ricciardi, S; Richman, J D; Ritchie, J L; Rizzo, G; Roat, C; Roberts, D A; Robertson, S H; Robutti, E; Rodier, S; Roe, N A; Ronan, M T; Roney, J M; Rong, G; Roodman, A; Roos, L; Rosenberg, E I; Rotondo, M; Roudeau, P; Rubin, A E; Ruddick, W O; Ryd, A; Röthel, W; Sacco, R; Saeed, M A; Safai-Tehrani, F; Saleem, M; Salnikov, A A; Salvatore, F; Samuel, A; Sanders, D A; Santroni, A; Saremi, S; Satpathy, A; Schalk, T; Schenk, S; Schindler, R H; Schofield, K C; Schott, G; Schrenk, S; Schröder, T; Schröder, H; Schubert, J; Schubert, K R; Schumm, B A; Schune, M H; Schwiening, J; Schwierz, R; Schwitters, R F; Sciacca, C; Sciolla, G; Seiden, A; Sekula, S J; Serednyakov, S I; Sharma, V; Shen, B C; Simi, G; Simonetto, F; Sinev, N B; Skovpen, Yu I; Smith, A J S; Smith, J G; Snoek, H L; Snyder, A; Sobie, R J; Soffer, A; Sokoloff, M D; Solodov, E P; Spaan, B; Spanier, S M; Spitznagel, M; Spradlin, P; Steinke, M; Stelzer, J; Stocchi, A; Stoker, D P; Stroili, R; Strom, D; Strube, J; Stugu, B; Stängle, H; Su, D; Sullivan, M K; Summers, D J; Sundermann, J E; Suzuki, K; Swain, S K; Tan, P; Taras, P; Taylor, F; Telnov, A V; Teodorescu, L; Ter-Antonian, R; Therin, G; Thiebaux, C; Thompson, J M; Tisserand, V; Toki, W H; Torrence, E; Tosi, S; Touramanis, C; Ulmer, K A; Uwer, U; Van Bakel, N; 
Vasileiadis, G; Vasseur, G; Vavra, J; Verderi, M; Verkerke, W; Viaud, F B; Vitale, L; Voci, C; Voena, C; Wagner, S R; Wagoner, D E; Waldi, R; Walsh, J; Wang, K; Wang, P; Wang, W F; Wappler, F R; Watson, A T; Weaver, M; Weidemann, A W; Weinstein, A J R; Wenzel, W A; Wilden, L; Williams, D C; Williams, J C; Willocq, S Y; Wilson, F F; Wilson, J R; Wilson, M G; Wilson, R J; Winklmeier, F; Wisniewski, W J; Wittgen, M; Wong, Q K; Wormser, G; Wright, D H; Wright, D M; Wu, J; Wu, S L; Xie, Y; Yamamoto, R K; Yarritu, A K; Ye, S; Yi, J I; Yi, K; Young, C C; Yu, Z; Yushkov, A N; Yéche, C; Zain, S B; Zallo, A; Zeng, Q; Zghiche, A; Zhang, J; Zhang, L; Zhao, H W; Zhu, Y S; Ziegler, V; Zito, M; Çuhadar-Dönszelmann, T

    2006-01-01

    We report on searches for B- --> Ds- Phi and B- --> Ds*- Phi. In the context of the Standard Model, these decays are expected to be highly suppressed since they proceed through annihilation of the b and u-bar quarks in the B- meson. Our results are based on 234 million Upsilon(4S) --> B Bbar decays collected with the BABAR detector at SLAC. We find no evidence for these decays, and we set Bayesian 90% confidence level upper limits on the branching fractions BF(B- --> Ds- Phi) and BF(B- --> Ds*- Phi), the latter at 1.2x10^(-5). These results are consistent with Standard Model expectations.

  12. Positron annihilation lifetime spectroscopy study of roller burnished magnesium alloy

    Directory of Open Access Journals (Sweden)

    Zaleski Radosław

    2015-12-01

    Full Text Available The effect of roller burnishing on Vickers’ hardness and positron lifetimes in the AZ91HP magnesium alloy was studied. The microhardness increases with an increase in the burnishing force and with a decrease in the feed. The comparison of various methods of analysis of positron annihilation lifetime (PAL spectra allowed identification of two components, which are related to solute-vacancy complexes and vacancy clusters, respectively. It was found that the increase in microhardness was related to the increase in the concentration of vacancy clusters.
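
    To illustrate the kind of multi-component analysis of positron annihilation lifetime (PAL) spectra referred to above, here is a generic, self-contained Python sketch (not the authors' analysis code, and it omits the instrumental resolution function): it builds a synthetic two-exponential lifetime spectrum and recovers the two lifetime components by least squares; all lifetimes, intensities and counts are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Synthetic two-component PAL spectrum:
# counts(t) = I1/tau1 * exp(-t/tau1) + I2/tau2 * exp(-t/tau2) + background.
t = np.arange(0.0, 3000.0, 25.0)          # time axis in ps (assumed channel width)
tau1, tau2 = 220.0, 450.0                 # hypothetical lifetimes (ps)
I1, I2, bg = 8.0e4, 2.0e4, 5.0            # hypothetical intensities and flat background

def model(t, I1, tau1, I2, tau2, bg):
    return I1 / tau1 * np.exp(-t / tau1) + I2 / tau2 * np.exp(-t / tau2) + bg

counts = rng.poisson(model(t, I1, tau1, I2, tau2, bg))

popt, _ = curve_fit(model, t, counts, p0=[5e4, 200.0, 5e4, 500.0, 1.0])
print(np.round(popt, 1))                  # recovered I1, tau1, I2, tau2, background
```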

  13. Positrons from dark matter annihilation in the galactic halo: uncertainties

    CERN Document Server

    Fornengo, N; Lineros, R; Donato, F; Salati, P

    2007-01-01

    Indirect detection signals from dark matter annihilation are studied in the positron channel. We discuss in detail the positron propagation inside the galactic medium: we present novel solutions of the diffusion and propagation equations and we focus on the determination of the astrophysical uncertainties which affect the positron dark matter signal. We show that, especially in the low energy tail of the positron spectra at Earth, the uncertainty is sizeable, and we quantify the effect. Comparisons of our predictions with currently available and foreseen experimental data are also presented.
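
    For orientation only, the sketch below shows the common loss-dominated, diffusion-free approximation to an equilibrium positron spectrum, f(E) = (1/b(E)) * integral of Q(E') over E' > E; it is not the paper's treatment, which solves the full diffusion and propagation equations, and the source spectrum and energy-loss rate used here are invented.

```python
import numpy as np

# Loss-dominated, diffusion-free toy estimate of an equilibrium positron spectrum.
E = np.logspace(-1, 2, 400)                  # positron energy grid in GeV (assumed)
Q = np.where(E <= 50.0, 1.0, 0.0)            # toy source: flat injection up to 50 GeV
b = 1.0e-16 * E**2                           # assumed loss rate ~ E^2 (GeV/s)

# Cumulative integral of Q above each energy (trapezoid rule, summed downwards).
dE = np.diff(E)
segments = 0.5 * (Q[1:] + Q[:-1]) * dE
cum = np.concatenate([np.cumsum(segments[::-1])[::-1], [0.0]])

f = cum / b                                  # equilibrium spectrum f(E)
print(f[::100])                              # values on a few grid points
```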

  14. Search for Dark Matter Annihilation in Draco with STACEE

    CERN Document Server

    Driscoll, D D; Carson, J E; Covault, C E; Fortin, P; Gingrich, D M; Hanna, D S; Jarvis, A; Kildea, J; Lindner, T; Müller, C; Mukherjee, R; Ong, R A; Ragan, K; Williams, D A; Zweerink, J

    2007-01-01

    For some time, the Draco dwarf spheroidal galaxy has garnered interest as a possible source for the indirect detection of dark matter. Its large mass-to-light ratio and relative proximity to the Earth provide favorable conditions for the production of detectable gamma rays from dark matter self-annihilation in its core. The Solar Tower Atmospheric Cherenkov Effect Experiment (STACEE) is an air-shower Cherenkov telescope located in Albuquerque, NM capable of detecting gamma rays at energies above 100 GeV. We present the results of the STACEE observations of Draco during the 2005-2006 observing season totaling 10 hours of livetime after cuts.

  15. Probability Density Evolution Analysis for Stochastic Dynamic Seismic Responses of Structures Based on Improved Point Estimation Method

    Institute of Scientific and Technical Information of China (English)

    宋鹏彦; 吕大刚; 于晓辉; 王光远

    2014-01-01

    In order to obtain the law of probability densities of structural responses varying with time, a new moment-based approach for the analysis of probability density evolution of nonlinear stochastic dynamic responses of structures was developed, by combining an improved point estimation method (IPEM) with the maximum entropy theory and the probability density evolution theory for stochastic dynamics of structures. The proposed method was then used to perform probability density evolution analysis and parameter sensitivity analysis of a reinforced concrete (RC) frame structure designed according to Chinese codes, selecting the top displacement and global seismic damage index of the structure under earthquake as response parameters, and taking into account the uncertainty of structural parameters. The results show that the steel yield strength, the structural damping ratio, and the concrete gravity density have dominant influences on the structural displacement, with sensitivities of more than 10%.
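
    To give a flavor of the moment-based route sketched above (point-estimate moments feeding a maximum-entropy density), here is a generic, heavily simplified Python sketch; it is not the authors' algorithm, and the response sample, the number of moments, the bounded support and the optimizer settings are all assumptions. It fits a maximum-entropy density of the form p(x) proportional to exp(-(lam1*x + lam2*x^2 + lam3*x^3 + lam4*x^4)) by matching the first four raw moments.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for "response moments from point estimates": here the first four
# raw moments are simply taken from a synthetic, skewed response sample.
rng = np.random.default_rng(3)
response = rng.gamma(shape=4.0, scale=0.5, size=50_000)      # hypothetical response data
moments = [np.mean(response**k) for k in range(1, 5)]

x = np.linspace(0.0, float(response.max()), 2000)            # assumed bounded support grid
dx = x[1] - x[0]

def density(lam):
    # Maximum-entropy form p(x) ~ exp(-(lam1*x + lam2*x^2 + lam3*x^3 + lam4*x^4)).
    expo = -sum(l * x**(k + 1) for k, l in enumerate(lam))
    p = np.exp(np.clip(expo, -700.0, 700.0))
    return p / (np.sum(p) * dx)

def moment_mismatch(lam):
    # Least-squares moment matching (a simplification of the usual dual formulation).
    p = density(lam)
    return sum((np.sum(x**(k + 1) * p) * dx - m)**2 for k, m in enumerate(moments))

res = minimize(moment_mismatch, x0=np.zeros(4), method="Nelder-Mead",
               options={"maxiter": 20_000, "fatol": 1e-12})
p_maxent = density(res.x)                                    # fitted density on the grid
print(np.round(res.x, 4))
```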

  16. Applying Popper's Probability

    CERN Document Server

    Whiting, Alan B

    2014-01-01

    Professor Sir Karl Popper (1902-1994) was one of the most influential philosophers of science of the twentieth century, best known for his doctrine of falsifiability. His axiomatic formulation of probability, however, is unknown to current scientists, though it is championed by several current philosophers of science as superior to the familiar version. When his system is applied to problems identified by himself and his supporters, it is shown that it does not have some of the features he intended and does not solve the problems they have identified.

  17. Probability for physicists

    CERN Document Server

    Sirca, Simon

    2016-01-01

    This book is designed as a practical and intuitive introduction to probability, statistics and random quantities for physicists. The book aims at getting to the main points by a clear, hands-on exposition supported by well-illustrated and worked-out examples. A strong focus on applications in physics and other natural sciences is maintained throughout. In addition to basic concepts of random variables, distributions, expected values and statistics, the book discusses the notions of entropy, Markov processes, and fundamentals of random number generation and Monte-Carlo methods.

  18. Gamma-induced positron annihilation spectroscopy and application to radiation-damaged alloys

    Science.gov (United States)

    Wells, D. P.; Hunt, A. W.; Tchelidze, L.; Kumar, J.; Smith, K.; Thompson, S.; Selim, F.; Williams, J.; Harmon, J. F.; Maloy, S.; Roy, A.

    2006-06-01

    Radiation damage and other defect studies of materials are limited to thin samples because of inherent limitations of well-established techniques such as diffraction methods and traditional positron annihilation spectroscopy (PAS) [P. Hautojarvi, et al., Positrons in Solids, Springer, Berlin, 1979, K.G. Lynn, et al., Appl. Phys. Lett. 47 (1985) 239]. This limitation has greatly hampered industrial and in-situ applications. ISU has developed new methods that use pair-production to produce positrons throughout the volume of thick samples [F.A. Selim, D.P. Wells, et al., Nucl. Instr. and Meth. B 192 (2002) 197, F.A. Selim, D.P. Wells, et al., Nucl. Instru. Meth. A 495 (2002) 154, F.A. Selim, et al., J. Rad. Phys. Chem. 68 (2004) 427, F.A. Selim, D.P. Wells, et al., Nucl. Instr. and Meth. B 241 (2005) 253, A.W. Hunt, D.P. Wells, et al., Nucl. Instr. and Meth. B. 241 (2005) 262]. Unlike prior work at other laboratories that use bremsstrahlung beams to create positron beams (via pair-production) that are then directed at a sample of interest, we produce electron-positron pairs directly in samples of interest, and eliminate the intermediate step of a positron beam and its attendant penetrability limitations. Our methods include accelerator-based bremsstrahlung-induced pair-production in the sample for positron annihilation energy spectroscopy measurements (PAES), coincident proton-capture gamma-rays (where one of the gammas is used for pair-production in the sample) for positron annihilation lifetime spectroscopy (PALS), or photo-nuclear activation of samples for either type of measurement. The positrons subsequently annihilate with sample electrons, emitting coincident 511 keV gamma-rays [F.A. Selim, D.P. Wells, et al., Nucl. Instr. and Meth. B 192 (2002) 197, F.A. Selim, D.P. Wells, et al., Nucl. Instru. Meth. A 495 (2002) 154, F.A. Selim, et al., J. Rad. Phys. Chem. 68 (2004) 427, F.A. Selim, D.P. Wells, et al., Nucl. Instr. and Meth. B 241 (2005) 253, A.W. Hunt, D

  19. CMB bounds on dark matter annihilation: Nucleon energy losses after recombination

    NARCIS (Netherlands)

    Weniger, C.; Serpico, P.D.; Iocco, F.; Bertone, G.

    2013-01-01

    We consider the propagation and energy losses of protons and antiprotons produced by dark matter annihilation at redshifts above 100. In the case of annihilations into quarks, gluons and weak gauge bosons, protons and antiprotons carry about 20% of the energy injected into e± and γ's, b

  20. A Positron Annihilation Study of Defect Recovery in Electron-Irradiated alpha-Zr

    DEFF Research Database (Denmark)

    Hood, G. M.; Eldrup, Morten Mostgaard; Mogensen, O. E.

    1977-01-01

    The presence of vacancy defects in α-Zr, irradiated at 320 > T > 290 K with 1.5 MeV electrons, has been indicated by positron annihilation measurements. It was found that positron lifetimes associated with annihilation in well-annealed α-Zr fell in the range 173 to 181 psec, with no obvious depe