WorldWideScience

Sample records for annihilation probability density

  1. Annihilation probability density and other applications of the Schwinger multichannel method to the positron and electron scattering

    International Nuclear Information System (INIS)

    We have calculated annihilation probability densities (APD) for positron collisions against the He atom and the H2 molecule. It was found that direct annihilation prevails at low energies, while annihilation following virtual positronium (Ps) formation is the dominant mechanism at higher energies. In room-temperature collisions (10⁻² eV) the APD spreads over a considerable extent, being quite similar to the electronic densities of the targets. The capture of the positron in an electronic Feshbach resonance strongly enhanced the annihilation rate in e⁺-H2 collisions. We also discuss strategies to improve the calculation of the annihilation parameter (Zeff), after debugging the computational codes of the Schwinger Multichannel Method (SMC). Finally, we consider the inclusion of the Ps formation channel in the SMC and show that effective configurations (pseudo eigenstates of the collision Hamiltonian) are able to significantly reduce the computational effort in positron scattering calculations. Cross sections for electron scattering by polyatomic molecules were obtained in three different approximations: static-exchange (SE); static-exchange-plus-polarization (SEP); and multichannel coupling. The calculations for polar targets were improved through the rotational resolution of scattering amplitudes, in which the SMC was combined with the first Born approximation (FBA). In general, elastic cross sections (SE and SEP approximations) showed good agreement with available experimental data for several targets. Multichannel calculations for e⁻-H2O scattering, on the other hand, presented spurious structures at the electronic excitation thresholds. (author)

  2. Annihilation probability density and other applications of the Schwinger multichannel method to the positron and electron scattering; Densidade de probabilidade de aniquilação e outras aplicações do método multicanal de Schwinger ao espalhamento de pósitrons e elétrons

    Energy Technology Data Exchange (ETDEWEB)

    Varella, Marcio Teixeira do Nascimento

    2001-12-15

    We have calculated annihilation probability densities (APD) for positron collisions against the He atom and the H2 molecule. It was found that direct annihilation prevails at low energies, while annihilation following virtual positronium (Ps) formation is the dominant mechanism at higher energies. In room-temperature collisions (10⁻² eV) the APD spreads over a considerable extent, being quite similar to the electronic densities of the targets. The capture of the positron in an electronic Feshbach resonance strongly enhanced the annihilation rate in e⁺-H2 collisions. We also discuss strategies to improve the calculation of the annihilation parameter (Zeff), after debugging the computational codes of the Schwinger Multichannel Method (SMC). Finally, we consider the inclusion of the Ps formation channel in the SMC and show that effective configurations (pseudo eigenstates of the collision Hamiltonian) are able to significantly reduce the computational effort in positron scattering calculations. Cross sections for electron scattering by polyatomic molecules were obtained in three different approximations: static-exchange (SE); static-exchange-plus-polarization (SEP); and multichannel coupling. The calculations for polar targets were improved through the rotational resolution of scattering amplitudes, in which the SMC was combined with the first Born approximation (FBA). In general, elastic cross sections (SE and SEP approximations) showed good agreement with available experimental data for several targets. Multichannel calculations for e⁻-H2O scattering, on the other hand, presented spurious structures at the electronic excitation thresholds. (author)

  3. Trajectory probability hypothesis density filter

    OpenAIRE

    García-Fernández, Ángel F.; Svensson, Lennart

    2016-01-01

    This paper presents the probability hypothesis density (PHD) filter for sets of trajectories. The resulting filter, referred to as the trajectory probability hypothesis density (TPHD) filter, is capable of estimating trajectories in a principled way without requiring the evaluation of all measurement-to-target association hypotheses. Like the PHD filter, the TPHD filter is based on recursively obtaining the best Poisson approximation to the multitrajectory filtering density in the sense of minimising the K...

  4. Probability densities and Lévy densities

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler

    For positive Lévy processes (i.e. subordinators) formulae are derived that express the probability density or the distribution function in terms of power series in time t. The applicability of the results to finance and to turbulence is briefly indicated.

  5. Probability densities in strong turbulence

    Science.gov (United States)

    Yakhot, Victor

    2006-03-01

    In this work, using Mellin's transform combined with the Gaussian large-scale boundary condition, we calculate the probability densities (PDFs) of velocity increments P(δu,r), of velocity derivatives, and of the fluctuating dissipation scales Q(η,Re), where Re is the large-scale Reynolds number. The resulting expressions strongly deviate from the log-normal PDF P(δu,r) often quoted in the literature. It is shown that the probability density of the small-scale velocity fluctuations includes information about the large (integral) scale dynamics, which is responsible for the deviation of P(δu,r) from log-normality. An expression for the function D(h) of the multifractal theory, free from the spurious logarithms recently discussed in [U. Frisch, M. Martins Afonso, A. Mazzino, V. Yakhot, J. Fluid Mech. 542 (2005) 97], is also obtained.

  6. Trajectory versus probability density entropy

    Science.gov (United States)

    Bologna, Mauro; Grigolini, Paolo; Karagiorgis, Markos; Rosa, Angelo

    2001-07-01

    We show that the widely accepted conviction that a connection can be established between the probability density entropy and the Kolmogorov-Sinai (KS) entropy is questionable. We adopt the definition of density entropy as a functional of a distribution density whose time evolution is determined by a transport equation, conceived as the only prescription to use for the calculation. Although the transport equation is built up for the purpose of affording a picture equivalent to that stemming from trajectory dynamics, no direct use of trajectory time evolution is allowed once the transport equation is defined. With this definition in mind we prove that the detection of a time regime of increase of the density entropy with a rate identical to the KS entropy is possible only in a limited number of cases. The proposals made by some authors to establish a general connection between the two entropies violate our definition of density entropy and imply the concept of trajectory, which is foreign to that of density entropy.

  7. Investigation of density inhomogeneities in liquids by positron annihilation

    International Nuclear Information System (INIS)

    The case of positronium diffusion and annihilation in micellar solutions as well as in liquid normal alkanes is discussed. The traps are assumed to be the structural sparse density regions in these liquids. The traps in micellar solutions are the micelles, in alkanes they are found around the terminal -CH3 groups. The surface tension inside the micellar core (one of the basic parameters of micellization) is determined around the site of o-Ps solubilization. The o-Ps diffusivity parameters are determined in both systems. (K.A.) 48 refs.; 4 figs

  8. Chaos for Liouville probability densities

    CERN Document Server

    Schack, R

    1995-01-01

    Using the method of symbolic dynamics, we show that a large class of classical chaotic maps exhibit exponential hypersensitivity to perturbation, i.e., a rapid increase with time of the information needed to describe the perturbed time evolution of the Liouville density, the information attaining values that are exponentially larger than the entropy increase that results from averaging over the perturbation. The exponential rate of growth of the ratio of information to entropy is given by the Kolmogorov-Sinai entropy of the map. These findings generalize and extend results obtained for the baker's map [R. Schack and C. M. Caves, Phys. Rev. Lett. 69, 3413 (1992)].

  9. Modulation Based on Probability Density Functions

    Science.gov (United States)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
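    The half-cycle sampling idea above can be sketched numerically. This is a toy illustration, not the proposed modulator itself; the bin count and sample count are arbitrary assumptions. For a pure sinusoid the value histogram approaches the arcsine density, concentrated near the waveform extremes.

```python
import numpy as np

# Toy illustration (not the proposed modulator itself; bin count and
# sample count are arbitrary assumptions): sample one full cycle of a
# sinusoid and build a histogram of the sample values.  For a pure sine
# the value-PDF approaches the arcsine density, which is peaked at the
# two waveform extremes, so the outer histogram bins dominate.

def value_histogram(samples, bins=16):
    """Histogram of waveform sample values, normalised to a PDF."""
    hist, edges = np.histogram(samples, bins=bins,
                               range=(-1.0, 1.0), density=True)
    return hist, edges

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
samples = np.sin(2.0 * np.pi * t)      # one full carrier cycle
pdf, edges = value_histogram(samples)

# outer bins tower over the central ones -- the arcsine signature
edge_to_centre = pdf[0] / pdf[len(pdf) // 2]
```

    A modulator built on this observation would compare such histograms across time intervals; here the histogram is only shown to have the characteristic twin-peaked shape.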

  10. On Explicit Probability Densities Associated with Fuss-Catalan Numbers

    OpenAIRE

    Liu, Dang-Zheng; Song, Chunwei; Wang, Zheng-Dong

    2010-01-01

    In this note we give explicitly a family of probability densities, the moments of which are Fuss-Catalan numbers. The densities appear naturally in random matrices, free probability and other contexts.
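    The moments in question can be evaluated directly from the standard closed form A_n(p, r) = r/(np + r) · C(np + r, n); the function and parameter names below are mine, not the note's.

```python
from math import comb

# Sketch using the standard closed form
#   A_n(p, r) = r / (n*p + r) * C(n*p + r, n)
# for the Fuss-Catalan numbers.  p = 2, r = 1 recovers the ordinary
# Catalan numbers, the moments of the Marchenko-Pastur (free Poisson)
# density.

def fuss_catalan(n, p, r=1):
    """n-th Fuss-Catalan number A_n(p, r)."""
    return r * comb(n * p + r, n) // (n * p + r)

catalan = [fuss_catalan(n, 2) for n in range(6)]   # 1, 1, 2, 5, 14, 42
fc3 = [fuss_catalan(n, 3) for n in range(5)]       # 1, 1, 3, 12, 55
```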

  11. Advantages of the probability amplitude over the probability density in quantum mechanics

    OpenAIRE

    Kurihara, Yoshimasa; Quach, Nhi My Uyen

    2013-01-01

    We discuss reasons why a probability amplitude, which becomes a probability density after squaring, is considered one of the most basic ingredients of quantum mechanics. First, the Heisenberg/Schrödinger equation, an equation of motion in quantum mechanics, describes a time evolution of the probability amplitude rather than of a probability density. There may be reasons why the dynamics of a physical system are described by amplitude. In order to investigate one role of the probability amplitu...

  12. Diffuse Gamma Ray Background from Annihilating Dark Matter in Density Spikes around Supermassive Black Holes

    OpenAIRE

    Belikov, Alexander; Silk, Joseph

    2013-01-01

    Dark matter annihilation is proportional to the square of the density and is especially efficient in places of highest concentration of dark matter, such as dark matter spikes. The spikes are formed as a result of contraction of the dark matter density profile caused by adiabatic growth of a supermassive black hole at the center of the dark matter halo or subhalo. We revisit the relation between the properties and mass functions of dark matter halos and spikes, and propose alternative models ...

  13. Asymptotic probability density functions in turbulence

    OpenAIRE

    Minotti, F. O.; Speranza, E.

    2007-01-01

    A formalism is presented to obtain closed evolution equations for asymptotic probability distribution functions of turbulence magnitudes. The formalism is derived for a generic evolution equation, so that the final result can be easily applied to rather general problems. Although the approximation involved cannot be ascertained a priori, we show that application of the formalism to well known problems gives the correct results.

  14. Probability density of quantum expectation values

    Energy Technology Data Exchange (ETDEWEB)

    Campos Venuti, L., E-mail: lcamposv@usc.edu; Zanardi, P.

    2013-10-30

    We consider the quantum expectation value A=〈ψ|A|ψ〉 of an observable A over the state |ψ〉. We derive the exact probability distribution of A seen as a random variable when |ψ〉 varies over the set of all pure states equipped with the Haar-induced measure. To illustrate our results we compare the exact predictions for a few concrete examples with the concentration bounds obtained using Lévy's lemma. We also comment on the relevance of the central limit theorem and finally draw some results on an alternative statistical mechanics based on the uniform measure on the energy shell. - Highlights: • We compute the probability distribution of quantum expectation values for states sampled uniformly. • As a special case we consider in some detail the degenerate case where A is a one-dimensional projector. • We compare the concentration results obtained using Lévy's lemma with the exact values obtained using our exact formulae. • We comment on the possibility of a central limit theorem and show the approach to a Gaussian for a few physical operators. • Some implications of our results for the so-called "Quantum Microcanonical Equilibration" (Refs. [5–9]) are derived.
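    The one-dimensional-projector case singled out in the highlights is easy to probe numerically. This is my own setup, not the paper's code: Haar-random pure states are obtained by normalising complex Gaussian vectors, and for A = |0⟩⟨0| in dimension N the exact density of a = ⟨ψ|A|ψ⟩ is known to be (N-1)(1-a)^(N-2) on [0, 1], with mean 1/N.

```python
import numpy as np

# Numerical sketch of the degenerate (one-dimensional projector) case
# (my own setup, not the paper's code): A = |0><0|, and pure states are
# sampled from the Haar measure by normalising complex Gaussian vectors.
# In dimension N the exact density of a = <psi|A|psi> is
# (N - 1) * (1 - a)**(N - 2) on [0, 1], so the sample mean must be
# close to 1/N.

rng = np.random.default_rng(0)
N = 8                                   # Hilbert-space dimension
n_samples = 20000

z = rng.normal(size=(n_samples, N)) + 1j * rng.normal(size=(n_samples, N))
psi = z / np.linalg.norm(z, axis=1, keepdims=True)   # Haar-random states
a = np.abs(psi[:, 0]) ** 2                           # expectation values

mean_a = a.mean()                       # close to 1/N = 0.125
```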

  15. Annihilation Radiation Gauge for Relative Density and Multiphase Fluid Monitoring

    Directory of Open Access Journals (Sweden)

    Vidal A.

    2014-03-01

    The knowledge of the multi-phase flow parameters is important for the petroleum industry, specifically during transport in pipelines and networks related to exploitation wells. Crude oil flow is studied by Monte Carlo simulation and experimentally to determine the transient liquid phase in a laboratory system. Relative density and fluid-phase time variation are monitored employing a fast nuclear data acquisition setup that includes two large-volume BaF2 scintillator detectors coupled to an electronic chain, with data displayed in a LabView® environment. Fluid parameters are determined from the difference in the count rate of coincidence pulses. The operational characteristics of the equipment indicate that a 2 % deviation in the coincidence count rate (CCR) corresponds to a variation, on average, of 20 % in the liquid fraction of the multiphase fluid.

  16. Asymptotic Theory for the Probability Density Functions in Burgers Turbulence

    CERN Document Server

    Weinan, E; Eijnden, Eric Vanden

    1999-01-01

    A rigorous study is carried out for the randomly forced Burgers equation in the inviscid limit. No closure approximations are made. Instead the probability density functions of velocity and velocity gradient are related to the statistics of quantities defined along the shocks. This method allows one to compute the anomalies, as well as asymptotics for the structure functions and the probability density functions. It is shown that the left tail for the probability density function of the velocity gradient has to decay faster than $|\\xi|^{-3}$. A further argument confirms the prediction of E et al., Phys. Rev. Lett. {\\bf 78}, 1904 (1997), that it should decay as $|\\xi|^{-7/2}$.

  17. Validating Forecasts of the Joint Probability Density of Bond Yields:...

    OpenAIRE

    Egorov, Alexei V.; Yongmiao Hong; Haitao Li

    2013-01-01

    Most existing empirical studies on affine term structure models (ATSMs) have mainly focused on in-sample goodness-of-fit of historical bond yields and ignored out-of-sample forecast of future bond yields. Using an omnibus nonparametric procedure for density forecast evaluation in a continuous-time framework, we provide probably the first comprehensive empirical analysis of the out-of-sample performance of ATSMs in forecasting the joint conditional probability density of bond yields. We find t...

  18. Hilbert Space of Probability Density Functions Based on Aitchison Geometry

    Institute of Scientific and Technical Information of China (English)

    J. J. EGOZCUE; J. L. DÍAZ-BARRERO; V. PAWLOWSKY-GLAHN

    2006-01-01

    The set of probability functions is a convex subset of L1 and it does not have a linear space structure when using ordinary sum and multiplication by real constants. Moreover, difficulties arise when dealing with distances between densities. The crucial point is that usual distances are not invariant under relevant transformations of densities. To overcome these limitations, Aitchison's ideas on compositional data analysis are used, generalizing perturbation and power transformation, as well as the Aitchison inner product, to operations on probability density functions with support on a finite interval. With these operations at hand, it is shown that the set of bounded probability density functions on finite intervals is a pre-Hilbert space. A Hilbert space of densities, whose logarithm is square-integrable, is obtained as the natural completion of the pre-Hilbert space.
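    A discrete sketch of the two operations described above, using my own grid discretisation on [0, 1] (the paper treats densities on a general finite interval): perturbation multiplies densities pointwise and renormalises, and powering raises a density to a real exponent and renormalises.

```python
import numpy as np

# Discrete sketch of the generalised Aitchison operations (my own grid
# discretisation on [0, 1]).  Perturbation multiplies densities
# pointwise and renormalises; powering raises a density to a real
# exponent and renormalises.  Under these operations the densities
# behave as a vector space, with the uniform density as the zero vector.

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def close(f):
    """Renormalise a positive function to a density on [0, 1]."""
    return f / (f.sum() * dx)

def perturb(f, g):
    """Aitchison perturbation f (+) g."""
    return close(f * g)

def power(alpha, f):
    """Aitchison powering alpha (.) f."""
    return close(f ** alpha)

uniform = close(np.ones_like(x))     # neutral element of perturbation
f = close(np.exp(-3.0 * x))

neutral_check = np.allclose(perturb(f, uniform), f)   # f (+) uniform = f
zero_check = np.allclose(power(0.0, f), uniform)      # 0 (.) f = uniform
```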

  19. On positron annihilation in zinc

    International Nuclear Information System (INIS)

    The purpose of this work is to understand Mogensen's and Petersen's positron annihilation curves for zinc. Mijnarends' approach is used as an auxiliary method of localizing inhomogeneities of the electronic density in momentum space, as defined in the paper. Evidence is found for a new effect consisting of a strong enhancement of the annihilation probability in the lenses obtained by the intersection of the Fermi surface with HMC surfaces. This effect, not the anisotropy of the Fermi surface, is the main reason for the anisotropy of the annihilation curves. (orig.)

  20. The Probability Distribution Function of Column Density in Molecular Clouds

    CERN Document Server

    Vázquez-Semadeni, Enrique; García, Nieves

    2001-01-01

    We discuss the probability distribution function (PDF) of column density resulting from density fields with lognormal PDFs, applicable to molecular clouds. For magnetic and non-magnetic numerical simulations of compressible, isothermal turbulence, we show that the density autocorrelation function (ACF) decays over short distances compared to the simulation size. The density "events" along a line of sight can be assumed to be independent over distances larger than this, and the Central Limit Theorem should be applicable. However, using random realizations of lognormal fields, we show that the convergence to a Gaussian shape is extremely slow in the high-density tail, and thus the column density PDF is not expected to exhibit a unique functional shape, but to transit instead from a lognormal to a Gaussian form as the column length increases, with decreasing variance. For intermediate path lengths, the column density PDF assumes a nearly exponential decay. For cases with density contrasts of $10^4$, comparable t...
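    The slow convergence described above can be illustrated with a toy stand-in (my own, not the paper's simulations): sum independent lognormal "cells" along a line of sight to form a column density, and watch the sample skewness shrink, but remain positive, as the path length grows.

```python
import numpy as np

# Toy numerical stand-in (not the paper's simulations): treat each cell
# along a line of sight as an independent lognormal density "event",
# sum L cells to get a column density, and track the sample skewness.
# It shrinks as L grows but stays positive -- the slow approach to a
# Gaussian in the high-density tail described above.

rng = np.random.default_rng(1)

def column_skewness(L, n_lines=20000):
    """Sample skewness of column densities built from L lognormal cells."""
    cells = rng.lognormal(mean=0.0, sigma=1.0, size=(n_lines, L))
    col = cells.sum(axis=1)
    z = (col - col.mean()) / col.std()
    return float(np.mean(z ** 3))

skew_short = column_skewness(2)      # strongly skewed, near-lognormal
skew_long = column_skewness(200)     # much closer to Gaussian
```

    Independence of the cells is of course the idealisation; in the paper the density ACF decay over short distances is what justifies it for long enough path lengths.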

  1. Does the probability density imply the equation of motion?

    International Nuclear Information System (INIS)

    Full text: The laws of physics dictate the evolution of matter and radiation. Quantum mechanics postulates that the matter or radiation is associated with a field whose magnitude is interpreted as the probability density, which is the only observable quantity. In general this field is either a single-component or multi-component complex scalar field, whose laws of evolution may be expressed in the form of partial differential equations. One may ask: does the probability density of the complex scalar field imply the evolution of the field? Here we answer this fundamental question by examining a means for measuring the equation of motion of a single-component complex scalar field associated with a non-dissipative and nonlinear system, given measurements of the probability density. Applications of this formalism to a number of systems in condensed matter physics will be discussed

  2. Probability density function modeling for sub-powered interconnects

    Science.gov (United States)

    Pater, Flavius; Amaricǎi, Alexandru

    2016-06-01

    This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit for the switching delay versus probability of correct switching for sub-powered interconnects. We compare the three mathematical models with Monte Carlo simulations of interconnects for 45 nm CMOS technology supplied at 0.25 V.
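    As a hedged sketch of the simplest of the three fits, the Gaussian model: the delay samples below are synthetic stand-ins, not the 45 nm Monte Carlo results, and the fit is by moment matching rather than whatever procedure the paper uses.

```python
import numpy as np

# Hedged sketch of the Gaussian variant of the three fits (the delay
# samples are synthetic stand-ins, not the 45 nm Monte Carlo results).
# The Gaussian model is fitted by moment matching and compared against
# a density histogram of the samples.

rng = np.random.default_rng(4)
delays = rng.normal(loc=1.0, scale=0.2, size=50000)   # toy switching delays

mu, sigma = delays.mean(), delays.std()               # moment-matched fit

def gaussian_pdf(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

hist, edges = np.histogram(delays, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
max_err = float(np.max(np.abs(hist - gaussian_pdf(centers, mu, sigma))))
```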

  3. LOFT fuel-rod-transient DNB probability density function studies

    International Nuclear Information System (INIS)

    Significantly improved DNB safety margins were calculated for LOFT reactor fuel rods by use of probability density functions (PDF) for transient MDNBR. Applicability and sensitivity studies determined that the PDF and resulting nominal MDNBR limits are stable, applicable over a wide range of potential input parameters, and applicable to most transients

  4. Effect of positron wave function on positron annihilation rates and electron-positron momentum densities in solids

    Energy Technology Data Exchange (ETDEWEB)

    Rubaszek, A. [Polska Akademia Nauk, Wroclaw (Poland). Inst. Niskich Temperatur i Badan Strukturalnych; Szotek, Z.; Temmerman, W.M. [Daresbury Lab. (United Kingdom)

    2001-07-01

    To interpret positron annihilation data in solids in terms of the electron momentum density and electron charge distribution, both the electron-positron interaction and the positron wave function have to be considered explicitly. In the present work we discuss the effect of the shape of the positron wave function on the calculated positron annihilation rates in a variety of solids, for different types of electrons (core, s, p, d, f). We show that the form of the positron distribution in the Wigner-Seitz cell has a crucial effect on the resulting core-electron contribution to the positron annihilation characteristics. The same is observed for the localised d and f electrons in transition metals. Finally we study the influence of the positron wave function on the electron-positron momentum density in elemental Si. (orig.)

  5. Probability Density Estimation by Decomposition of Correlation Integral

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    - : ISRST, 2008 - (Prasad, B.; Sinha, P.; Ram, A.; Kerre, E.), s. 113-119 ISBN 978-1-60651-000-1. [AIPR 2008. International Conference on Artificial Intelligence and Pattern Recognition. Orlando (US), 07.07.2008-10.07.2008] Institutional research plan: CEZ:AV0Z10300504 Keywords : correlation integral * decomposition of correlation integral * probability density estimation * polynomial approximation * classifier Subject RIV: BA - General Mathematics

  7. LOFT fuel rod transient DNB probability density function studies

    International Nuclear Information System (INIS)

    Significantly improved calculated DNB safety margins were defined by the development and use of probability density functions (PDF) for transient MDNBR nuclear fuel rods in the Loss of Fluid Test (LOFT) reactor. Calculations for limiting transients and response surface methods were used thereby including transient interactions and trip uncertainties in the MDNBR PDF. Applicability and sensitivity studies determined that the PDF and resulting nominal MDNBR limits are stable, applicable over a wide range of potential input parameters, and applicable to most transients

  8. Vehicle Detection Based on Probability Hypothesis Density Filter

    Science.gov (United States)

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    Over the past decade, vehicle detection has improved significantly. By utilizing cameras, vehicles can be detected in Regions of Interest (ROI) in complex environments. However, vision techniques often suffer from false positives and a limited field of view. In this paper, a LiDAR-based vehicle detection approach is proposed using the Probability Hypothesis Density (PHD) filter. The proposed approach consists of two phases: a hypothesis generation phase to detect potential objects and a hypothesis verification phase to classify objects. The performance of the proposed approach is evaluated in complex scenarios and compared with the state-of-the-art. PMID:27070621

  9. On singular probability densities generated by extremal dynamics

    OpenAIRE

    Garcia, Guilherme J. M.; Dickman, Ronald

    2003-01-01

    Extremal dynamics is the mechanism that drives the Bak-Sneppen model into a (self-organized) critical state, marked by a singular stationary probability density $p(x)$. With the aim of understanding this phenomenon, we study the BS model and several variants via mean-field theory and simulation. In all cases, we find that $p(x)$ is singular at one or more points, as a consequence of extremal dynamics. Furthermore we show that the extremal barrier $x_i$ always belongs to the `prohibited' inter...

  11. Probability density function transformation using seeded localized averaging

    International Nuclear Information System (INIS)

    Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) in the process of applying the SLA procedure. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: (1) for symmetric distributions, SLA preserves both the mean and the symmetry; (2) for unimodal symmetric distributions, SLA reduces the variance, sharpening the distribution's peak. Our results are the first to prove these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with a Gaussian PDF, the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)
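    The two-pulse-height sharpening property can be checked numerically. This is a minimal model with a fixed window of two, not the dynamic-window SLA procedure itself: for a Gaussian peak, pairwise averaging preserves the mean and halves the variance, so the peak narrows.

```python
import numpy as np

# Minimal numerical check of the sharpening property, for the simplest
# case of averaging two pulse-heights (a fixed window of two, not the
# dynamic windows of the full SLA procedure).  For a Gaussian peak the
# mean is preserved and the variance halves, so the FWHM narrows.

rng = np.random.default_rng(2)
pulse_heights = rng.normal(loc=100.0, scale=5.0, size=100000)

pairs = pulse_heights.reshape(-1, 2)
averaged = pairs.mean(axis=1)            # SLA-style average of two pulses

var_ratio = averaged.var() / pulse_heights.var()          # close to 0.5
mean_shift = abs(averaged.mean() - pulse_heights.mean())  # essentially 0
```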

  12. INTERACTIVE VISUALIZATION OF PROBABILITY AND CUMULATIVE DENSITY FUNCTIONS

    KAUST Repository

    Potter, Kristin

    2012-01-01

    The probability density function (PDF), and its corresponding cumulative density function (CDF), provide direct statistical insight into the characterization of a random process or field. Typically displayed as a histogram, one can infer probabilities of the occurrence of particular events. When examining a field over some two-dimensional domain in which at each point a PDF of the function values is available, it is challenging to assess the global (stochastic) features present within the field. In this paper, we present a visualization system that allows the user to examine two-dimensional data sets in which PDF (or CDF) information is available at any position within the domain. The tool provides a contour display showing the normed difference between the PDFs and an ansatz PDF selected by the user and, furthermore, allows the user to interactively examine the PDF at any particular position. Canonical examples of the tool are provided to help guide the reader into the mapping of stochastic information to visual cues along with a description of the use of the tool for examining data generated from an uncertainty quantification exercise accomplished within the field of electrophysiology.

  13. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector by applying Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic (ROC) curve [10, 11], which is actually a family of curves depicting P_D vs. P_FA, parameterized by varying levels of signal-to-noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
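    As a hedged illustration of the CFAR thresholding idea (the classical cell-averaging CFAR for an exponentially distributed background, not the LASI algorithms): for background power with mean m, the threshold achieving a target false-alarm probability P_FA is t = -m · ln(P_FA), with m estimated from background samples.

```python
import numpy as np

# Hedged sketch of the classical cell-averaging CFAR idea (not the LASI
# algorithms): for exponentially distributed background power with mean
# m, the threshold achieving false-alarm probability P_FA is
#   t = -m * ln(P_FA),
# where m is estimated from background samples.

rng = np.random.default_rng(3)
background = rng.exponential(scale=2.0, size=200000)   # synthetic clutter

p_fa_target = 1e-2
threshold = -background.mean() * np.log(p_fa_target)

# empirical false-alarm rate achieved by the estimated threshold
p_fa_empirical = float(np.mean(background > threshold))
```

    The "constant" in CFAR is exactly this property: because the threshold scales with the estimated background mean, the achieved false-alarm rate stays near the target even when the clutter level changes.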

  14. Downward Price Rigidity of the Japanese CPI -- Analysis by Probability Density Functions and Spatial Density Functions

    OpenAIRE

    Munehisa Kasuya

    1999-01-01

    We define downward price rigidity as the state in which the speed at which prices fall is slower than that in which they rise. Based on this definition, we examine the downward price rigidity of each item that constitutes the core CPI of Japan. That is, according to the results of fractional integration tests on price changes of individual items, we estimate probability density functions in the stationary case and estimate spatial density functions in the nonstationary case. We also test thei...

  15. Can the relic density of self-interacting dark matter be due to annihilations into Standard Model particles?

    CERN Document Server

    Chu, Xiaoyong; Hambye, Thomas

    2016-01-01

    Motivated by the hypothesis that dark matter self-interactions provide a solution to the small-scale structure formation problems, we investigate the possibility that the relic density of a self-interacting dark matter candidate can proceed from the thermal freeze-out of annihilations into Standard Model particles. We find that scalar and Majorana dark matter in the mass range of $10-500$ MeV, coupled to a slightly heavier massive gauge boson, are the only possible candidates in agreement with multiple current experimental constraints. Here dark matter annihilations take place at a much slower rate than the self-interactions simply because the interaction connecting the Standard Model and the dark matter sectors is small. We also discuss prospects of establishing or excluding these two scenarios in future experiments.

  16. Impact of SUSY-QCD corrections on neutralino-stop co-annihilation and the neutralino relic density

    Energy Technology Data Exchange (ETDEWEB)

    Harz, Julia [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Herrmann, Bjoern [Savoie Univ./CNRS, Annecy-le-Vieux (France). LAPTh; Klasen, Michael [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1; Kovarik, Karol [Karlsruher Institut fuer Technologie, Karlsruhe (Germany). Inst. fuer Theoretische Physik; Le Boulc'h, Quentin [Grenoble Univ./CNRS-IN2P3/INPG, Grenoble (France). Lab. de Physique Subatomique et de Cosmologie

    2013-02-15

    We have calculated the full O({alpha}{sub s}) supersymmetric QCD corrections to neutralino-stop co-annihilation into electroweak vector and Higgs bosons within the Minimal Supersymmetric Standard Model (MSSM). We performed a parameter study within the phenomenological MSSM and demonstrated that the studied co-annihilation processes are phenomenologically relevant, especially in the context of a 126 GeV Higgs-like particle. By means of an example scenario we discuss the effect of the full next-to-leading order corrections on the co-annihilation cross section and show their impact on the predicted neutralino relic density. We demonstrate that the impact of these corrections on the cosmologically preferred region of parameter space is larger than the current experimental uncertainty of WMAP data.

  17. Interactive design of probability density functions for shape grammars

    KAUST Repository

    Dang, Minh

    2015-11-02

    A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density function (pdf) over such a shape space and to sample models according to the designed pdf. First, we propose a user interface that enables a user to quickly provide preference scores for selected shapes and suggest sampling strategies to decide which models to present to the user to evaluate. Second, we propose a novel kernel function to encode the similarity between two procedural models. Third, we propose a framework to interpolate user preference scores by combining multiple techniques: function factorization, Gaussian process regression, automatic relevance determination, and l1 regularization. Fourth, we modify the original grammars to generate models with a pdf proportional to the user preference scores. Finally, we provide evaluations of our user interface and framework parameters and a comparison to other exploratory modeling techniques using modeling tasks in five example shape spaces: furniture, low-rise buildings, skyscrapers, airplanes, and vegetation.

  18. Parameterizing deep convection using the assumed probability density function method

    Energy Technology Data Exchange (ETDEWEB)

    Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  19. Numerical methods for high-dimensional probability density function equations

    Science.gov (United States)

    Cho, H.; Venturi, D.; Karniadakis, G. E.

    2016-01-01

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interactions at low order, which resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  20. Parameterizing deep convection using the assumed probability density function method

    Directory of Open Access Journals (Sweden)

    R. L. Storer

    2014-06-01

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  1. The stationary probability density of a class of bounded Markov processes

    OpenAIRE

    Ramli, Muhamad Azfar; Leng, Gerard

    2010-01-01

    In this paper we generalize a bounded Markov process, described by Stoyanov and Pacheco-González for a class of transition probability functions. A recursive integral equation for the probability density of these bounded Markov processes is derived and the stationary probability density is obtained by solving an equivalent differential equation. Examples of stationary densities for different transition probability functions are given and an application for designing a roboti...

  2. Fitting age-specific fertility rates by a skew-symmetric probability density function

    OpenAIRE

    Mazzuco, Stefano; Scarpa, Bruno

    2011-01-01

    Mixture probability density functions have recently been proposed to describe some fertility patterns characterized by a bi-modal shape. These mixture probability density functions appear to be adequate when the fertility pattern is actually bi-modal but less useful when the shape of age-specific fertility rates is unimodal. A further model is proposed based on skew-symmetric probability density functions. This model is both more parsimonious than mixture distributions and more flexible, sh...

  3. Power-law tails in probability density functions of molecular cloud column density

    CERN Document Server

    Brunt, Chris

    2015-01-01

    Power-law tails are often seen in probability density functions (PDFs) of molecular cloud column densities, and have been attributed to the effect of gravity. We show that extinction PDFs of a sample of five molecular clouds obtained at a few tenths of a parsec resolution, probing extinctions up to $A_V \sim 10$ magnitudes, are very well described by lognormal functions provided that the field selection is tightly constrained to the cold, molecular zone and that noise and foreground contamination are appropriately accounted for. In general, field selections that incorporate warm, diffuse material in addition to the cold, molecular material will display apparent core+tail PDFs. The apparent tail, however, is best understood as the high extinction part of a lognormal PDF arising from the cold, molecular part of the cloud. We also describe the effects of noise and foreground/background contamination on the PDF structure, and show that these can, if not appropriately accounted for, induce spurious ...
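The lognormality test described in this abstract can be illustrated with synthetic data: if the column-density PDF is lognormal, the logarithm of the extinction is Gaussian (near-zero skewness) while the raw values are strongly right-skewed. This is a minimal sketch with invented parameters, not the cloud sample from the paper:

```python
import math
import random

random.seed(1)

def skewness(xs):
    """Sample skewness: third central moment over variance^(3/2)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

# Mock extinction samples from a single "cold molecular" lognormal component.
av = [random.lognormvariate(0.0, 0.5) for _ in range(20000)]

# In log-extinction the PDF is Gaussian (skewness near 0),
# while the raw A_V distribution is strongly right-skewed.
sk_log = skewness([math.log(a) for a in av])
sk_raw = skewness(av)
```

Mixing in a second lognormal component for warm, diffuse material would reproduce the apparent core+tail structure the authors warn about.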

  4. Estimation of Extreme Response and Failure Probability of Wind Turbines under Normal Operation using Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.;

    2013-01-01

    Estimation of extreme response and failure probability of structures subjected to ultimate design loads is essential for structural design of wind turbines according to the new standard IEC61400-1. This task is focused on in the present paper by virtue of the probability density evolution method (PDEM), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-mega-watt wind turbines, for illustrative purposes, in case of given mean wind speeds and turbulence levels is investigated through the scheme of extreme value distribution instead of any other approximate schemes of fitted distribution currently used in statistical extrapolation techniques. Besides, the comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...

  5. On the discretization of probability density functions and the continuous Rényi entropy

    Indian Academy of Sciences (India)

    Diógenes Campos

    2015-12-01

    On the basis of the second mean-value theorem (SMVT) for integrals, a discretization method is proposed with the aim of representing the expectation value of a function with respect to a probability density function in terms of discrete probability theory. This approach is applied to the continuous Rényi entropy, and it is established that a discrete probability distribution can be associated with it in a very natural way. The probability density function for the linear superposition of two coherent states is used to develop a representative example.
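A naive uniform-grid discretization of the continuous Rényi entropy (not the SMVT-based construction proposed in the paper) can be checked against the closed form for a Gaussian, $H_\alpha = \ln(\sigma\sqrt{2\pi}) + \ln\alpha/(2(\alpha-1))$. The distribution, grid, and $\alpha$ below are chosen only for illustration:

```python
import math

def renyi_entropy(pdf, alpha, lo, hi, n=20000):
    """Continuous Renyi entropy H_alpha = ln(integral of p^alpha)/(1-alpha),
    discretized with a midpoint rule on a uniform grid."""
    dx = (hi - lo) / n
    integral = sum(pdf(lo + (i + 0.5) * dx) ** alpha for i in range(n)) * dx
    return math.log(integral) / (1.0 - alpha)

sigma, alpha = 1.0, 2.0
gauss = lambda x: math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

h_num = renyi_entropy(gauss, alpha, -10.0, 10.0)
h_exact = math.log(sigma * math.sqrt(2 * math.pi)) + math.log(alpha) / (2 * (alpha - 1))
```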

  6. Microdefects and electron densities in NiTi shape memory alloys studied by positron annihilation

    Institute of Scientific and Technical Information of China (English)

    HU Yi-feng; DENG Wen; HAO Wen-bo; YUE Li; HUANG Le; HUANG Yu-yang; XIONG Liang-yue

    2006-01-01

    The microdefects and free electron densities in the B2, R and B19' phases of Ni50.78Ti49.22 alloy were studied by positron lifetime measurements. Comparing the lifetime parameters of the Ni50.78Ti49.22 alloy measured at 295 K and 225 K, it is found that the free electron density of the R phase is lower than that of the B2 phase; the open volume of the defects of the R phase is larger, while the concentration of these defects is lower than that of the B2 phase. The Ni50.78Ti49.22 alloy exhibits the B19' phase at 115 K. In comparison with the R phase, the free electron density of the B19' phase increases, the open volume of the defects of the B19' phase reduces, and the concentration of these defects increases. The microdefects and the free electron density play an important role during the multi-step transformations (B2→R→B19' phase transformations) in Ni50.78Ti49.22 alloy with the decrease of temperature.

  7. Superposition rule and entanglement in diagonal and probability representations of density states

    OpenAIRE

    Man'ko, Vladimir I.; Marmo, Giuseppe; Sudarshan, E C George

    2009-01-01

    The quasidistributions corresponding to the diagonal representation of quantum states are discussed within the framework of operator-symbol construction. The tomographic-probability distribution describing the quantum state in the probability representation of quantum mechanics is reviewed. The connection of the diagonal and probability representations is discussed. The superposition rule is considered in terms of the density-operator symbols. The separability and entanglement properties of m...

  8. Design of companding quantizer for Laplacian source using the approximation of probability density function

    OpenAIRE

    Velimirovic, Lazar; Peric, Zoran; Stankovic, Miomir; Simic, Nikola

    2012-01-01

    In this paper, both piecewise linear and piecewise uniform approximations of the probability density function are performed. For the probability density function approximated in these ways, a compressor function is formed. On the basis of the compressor function formed in this way, piecewise linear and piecewise uniform companding quantizers are designed. The design of these companding quantizer models is performed for a Laplacian source at the entrance of the quantizer. The performance estimate of the pr...
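For a Laplacian source the high-rate optimal compressor has a simple closed form, with $c(x)$ proportional to the integral of $p(x)^{1/3}$. The sketch below builds a companding quantizer from it (compress, uniform quantizer, expand); it illustrates the general companding idea only, not the piecewise linear/uniform approximations designed in the paper, and the scale, support, and level count are invented:

```python
import math

B = 1.0      # Laplacian scale parameter
XMAX = 8.0   # truncated support [-XMAX, XMAX]
N = 64       # number of quantization levels

def compressor(x):
    """High-rate optimal compressor for a Laplacian source:
    c(x) proportional to the integral of p(t)^(1/3), odd-symmetric."""
    s = math.copysign(1.0, x)
    num = 1.0 - math.exp(-abs(x) / (3.0 * B))
    den = 1.0 - math.exp(-XMAX / (3.0 * B))
    return s * XMAX * num / den

def expander(y):
    """Exact inverse of the compressor."""
    s = math.copysign(1.0, y)
    den = 1.0 - math.exp(-XMAX / (3.0 * B))
    return -3.0 * B * s * math.log(1.0 - abs(y) * den / XMAX)

def quantize(x):
    """Companding quantizer: compress, uniform midpoint quantizer, expand."""
    y = compressor(max(-XMAX, min(XMAX, x)))
    step = 2.0 * XMAX / N
    yq = (math.floor(y / step) + 0.5) * step
    return expander(yq)
```

The compressor spaces cells densely near zero, where the Laplacian density is largest, so the quantization error stays small for typical inputs.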

  9. Moment-independent importance measure of basic random variable and its probability density evolution solution

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    To analyze the effect of a basic variable on the failure probability in reliability analysis, a moment-independent importance measure of the basic random variable is proposed, and its properties are analyzed and verified. Based on this work, the importance measure of the basic variable on the failure probability is compared with that on the distribution density of the response. By use of the probability density evolution method, a solution is established to solve the two importance measures, which can efficiently avoid the difficulty in solving the importance measures. Some numerical examples and engineering examples are used to demonstrate the proposed importance measure on the failure probability and that on the distribution density of the response. The results show that the proposed importance measure can effectively describe the effect of the basic variable on the failure probability from the distribution density of the basic variable. Additionally, the results show that the established solution based on probability density evolution is efficient for the importance measures.

  10. A note on the existence of transition probability densities for Lévy processes

    OpenAIRE

    Knopova, V.; Schilling, R.L.

    2010-01-01

    We prove several necessary and sufficient conditions for the existence of (smooth) transition probability densities for Lévy processes and isotropic Lévy processes. Under some mild conditions on the characteristic exponent we calculate the asymptotic behaviour of the transition density as $t\to 0$ and $t\to\infty$ and show a ratio-limit theorem.

  11. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    Science.gov (United States)

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  12. Linearized Controller Design for the Output Probability Density Functions of Non-Gaussian Stochastic Systems

    Institute of Scientific and Technical Information of China (English)

    Pousga Kabore; Husam Baki; Hong Yue; Hong Wang

    2005-01-01

    This paper presents a linearized approach for the controller design of the shape of output probability density functions for general stochastic systems. A square root approximation to an output probability density function is realized by a set of B-spline functions. This generally produces a nonlinear state space model for the weights of the B-spline approximation. A linearized model is therefore obtained and embedded into a performance function that measures the tracking error of the output probability density function with respect to a given distribution. By using this performance function as a Lyapunov function for the closed loop system, a feedback control input has been obtained which guarantees closed loop stability and realizes perfect tracking. The algorithm described in this paper has been tested on a simulated example and desired results have been achieved.

  13. Joint Delay Doppler Probability Density Functions for Air-to-Air Channels

    Directory of Open Access Journals (Sweden)

    Michael Walter

    2014-01-01

    Recent channel measurements indicate that the wide sense stationary uncorrelated scattering assumption is not valid for air-to-air channels. Therefore, purely stochastic channel models cannot be used. In order to cope with the nonstationarity a geometric component is included. In this paper we extend a previously presented two-dimensional geometric stochastic model originally developed for vehicle-to-vehicle communication to a three-dimensional air-to-air channel model. Novel joint time-variant delay Doppler probability density functions are presented. The probability density functions are derived by using vector calculus and parametric equations of the delay ellipses. This allows us to obtain closed form mathematical expressions for the probability density functions, which can then be calculated for any delay and Doppler frequency at arbitrary times numerically.

  14. Density probability distribution functions of diffuse gas in the Milky Way

    CERN Document Server

    Berkhuijsen, E M

    2008-01-01

    In a search for the signature of turbulence in the diffuse interstellar medium in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight below and above |b| = 5 degrees are considered separately. The PDF of the average density at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.

  15. Modelling the Probability Density Function of IPTV Traffic Packet Delay Variation

    Directory of Open Access Journals (Sweden)

    Michal Halas

    2012-01-01

    This article deals with modelling the probability density function of IPTV traffic packet delay variation. The use of this modelling is in efficient de-jitter buffer estimation. When an IP packet travels across a network, it experiences delay and delay variation. This variation is caused by routing, queueing systems and other influences such as the processing delay of the network nodes. When we try to separate these at least three types of delay variation, we need a way to measure each type separately. This work is aimed at the delay variation caused by queueing systems, which has the main influence on the form of the probability density function.
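Once the delay-variation distribution is available (modelled or measured), de-jitter buffer estimation amounts to picking a quantile of that distribution: the buffer must absorb all but an acceptable fraction of late packets. The sketch below uses synthetic exponential queueing jitter and an invented loss target; it is not the model from the article:

```python
import random

random.seed(7)

def dejitter_buffer_ms(delays, loss_target=0.001):
    """Smallest playout buffer depth (ms) such that the fraction of packets
    arriving later than the buffer depth is at most loss_target."""
    ordered = sorted(delays)
    k = min(len(ordered) - 1, int((1.0 - loss_target) * len(ordered)))
    return ordered[k]

# Mock one-way delays: fixed propagation (20 ms) plus exponential
# queueing jitter with a 5 ms mean.
delays = [20.0 + random.expovariate(1.0 / 5.0) for _ in range(50000)]
buf = dejitter_buffer_ms(delays, loss_target=0.001)
```

A heavier-tailed jitter PDF would push the required quantile, and hence the buffer depth, up sharply, which is why the shape of the PDF matters for dimensioning.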

  16. Density matrix equation analysis of optical–optical double-resonance multiphoton ionization probability

    International Nuclear Information System (INIS)

    An analytical formula for the optical–optical double-resonance multiphoton ionization (OODR-MPI) probability is derived from the time-dependent density-matrix equations that describe the interaction between photons and matter. Based on this formula, the variation of the multiphoton ionization (MPI) probability with laser resonance detuning, Rabi frequency, laser pulse duration and ionization rate is investigated theoretically. It is shown that the MPI probability decreases with increasing laser resonance detuning, eventually approaching zero. The resonance detuning of the pump laser influences the ionization probability more strongly than that of the probe laser: it affects not only the Rabi frequency required for saturation but also the saturation value of the MPI probability. The MPI probability increases with Rabi frequency, laser pulse duration and ionization rate. It is also found that, although the populations of the ground, first and second resonance states evolve differently at the beginning of the laser irradiation, they all eventually decay to zero, at which point the ionization probability reaches its maximum. Long laser pulse durations and high laser intensities are therefore favorable for improving the MPI probability. These theoretical results can provide a useful guide for the practical application of OODR-MPI spectroscopy. - Highlights: • An analytical expression for the OODR-MPI probability has been derived. • The MPI probability decreases with increasing laser resonance detuning. • The influence of the pump laser on the MPI probability is larger than that of the probe laser. • Larger laser pulse durations and intensities favor a higher MPI probability

  17. Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities

    CERN Document Server

    Kinney, Justin B

    2014-01-01

    Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.

  18. Reconstruction of Neutral Hydrogen Density Profiles in HANBIT Magnetic Mirror Device Using Bayesian Probability Theory

    International Nuclear Information System (INIS)

    Hydrogen is the main constituent of plasmas in the HANBIT magnetic mirror device; therefore, measurement of the emission from excited levels of hydrogen atoms is an important diagnostic tool. From the emissivity of Hα radiation one can derive quantities such as the neutral hydrogen density and the source rate. An unbiased and consistent probability-theory-based approach within the framework of Bayesian inference is applied to the reconstruction of Hα emissivity profiles and neutral hydrogen density profiles in the HANBIT magnetic mirror device

  19. Compound kernel estimates for the transition probability density of a Lévy process in $\mathbb{R}^n$

    OpenAIRE

    Knopova, V.

    2013-01-01

    We construct in the small-time setting the upper and lower estimates for the transition probability density of a Lévy process in $\mathbb{R}^n$. Our approach relies on the complex analysis technique and the asymptotic analysis of the inverse Fourier transform of the characteristic function of the respective process.

  20. Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers

    Science.gov (United States)

    MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara

    2013-01-01

    Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…

  1. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    Science.gov (United States)

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  2. Kernel density estimation and marginalized-particle based probability hypothesis density filter for multi-target tracking

    Institute of Scientific and Technical Information of China (English)

    张路平; 王鲁平; 李飚; 赵明

    2015-01-01

    In order to improve the performance of the particle filter (PF) implementation of the probability hypothesis density (PHD) algorithm in terms of number estimation and state extraction of multiple targets, a new probability hypothesis density filter algorithm based on marginalized particles and kernel density estimation is proposed, which utilizes the idea of the marginalized particle filter to enhance the estimation performance of the PHD. The state variables are decomposed into linear and nonlinear parts. The particle filter is adopted to predict and estimate the nonlinear states of the multiple targets after dimensionality reduction, while the Kalman filter is applied to estimate the linear parts under the linear Gaussian condition. Embedding the information of the linear states into the estimated nonlinear states helps to reduce the estimation variance and improve the accuracy of target number estimation. Mean-shift kernel density estimation, which inherently searches for peak values via an adaptive gradient-ascent iteration, is introduced to cluster particles and extract target states; it is independent of the target number and can converge to the local peak position of the PHD distribution while avoiding errors due to inaccuracy in modeling and parameter estimation. Experiments show that the proposed algorithm can obtain higher tracking accuracy when using fewer sampling particles and has lower computational complexity compared with the PF-PHD.
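The mean-shift step used for particle clustering, an adaptive gradient ascent on the kernel density estimate, can be sketched in one dimension. The particle positions, weights, and bandwidth below are invented for illustration; this is a sketch of the mean-shift idea only, not the full PF-PHD algorithm:

```python
import math
import random

random.seed(3)

def mean_shift(x0, particles, weights, h, iters=50):
    """Gradient-ascent iteration on the weighted Gaussian-kernel density
    estimate: repeatedly move to the kernel-weighted mean of the particles."""
    x = x0
    for _ in range(iters):
        num = den = 0.0
        for p, w in zip(particles, weights):
            k = w * math.exp(-((x - p) / h) ** 2 / 2.0)
            num += k * p
            den += k
        x = num / den
    return x

# Particles drawn around two targets at 0 and 5 (a bimodal PHD surface).
parts = [random.gauss(0.0, 0.3) for _ in range(300)] + \
        [random.gauss(5.0, 0.3) for _ in range(300)]
wts = [1.0] * len(parts)

peak_a = mean_shift(-0.5, parts, wts, h=0.5)   # converges to the mode near 0
peak_b = mean_shift(5.4, parts, wts, h=0.5)    # converges to the mode near 5
```

Starting points in the same basin of attraction converge to the same mode, which is what makes the extraction step independent of the (unknown) target number.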

  3. The Density Probability Distribution in Compressible Isothermal Turbulence: Solenoidal vs Compressive Forcing

    CERN Document Server

    Federrath, Christoph; Schmidt, Wolfram

    2008-01-01

    The probability density function (PDF) of the gas density in turbulent supersonic flows is investigated with high-resolution numerical simulations. In a systematic study, we compare the density statistics of compressible turbulence driven by the usually adopted solenoidal forcing (divergence-free) and by compressive forcing (curl-free). Our results are in agreement with studies using solenoidal forcing. However, compressive forcing yields a significantly broader density distribution with standard deviation ~3 times larger at the same rms Mach number. The standard deviation-Mach number relation used in analytical models of star formation is reviewed and a modification of the existing expression is proposed, which takes into account the ratio of solenoidal and compressive modes of the turbulence forcing.
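The standard deviation-Mach number relation reviewed in this abstract is commonly written as $\sigma_s^2 = \ln(1 + b^2 \mathcal{M}^2)$ for the logarithmic density $s = \ln(\rho/\rho_0)$, with $b \approx 1/3$ for solenoidal and $b \approx 1$ for compressive forcing; the factor of 3 between the $b$ values matches the roughly 3 times broader linear-density distribution reported, since $\sigma_{\rho/\rho_0} = b\mathcal{M}$. A minimal numeric sketch of this commonly used form (the paper's proposed modification is not reproduced here):

```python
import math

def sigma_s(mach, b):
    """Standard deviation of s = ln(rho/rho0) from sigma_s^2 = ln(1 + b^2 M^2)."""
    return math.sqrt(math.log(1.0 + b * b * mach * mach))

m = 5.0  # rms Mach number, chosen for illustration
solenoidal = sigma_s(m, b=1.0 / 3.0)   # divergence-free forcing
compressive = sigma_s(m, b=1.0)        # curl-free forcing
```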

  4. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    Science.gov (United States)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  5. Analytical Formulation of the Single-visit Completeness Joint Probability Density Function

    Science.gov (United States)

    Garrett, Daniel; Savransky, Dmitry

    2016-09-01

    We derive an exact formulation of the multivariate integral representing the single-visit obscurational and photometric completeness joint probability density function for arbitrary distributions for planetary parameters. We present a derivation of the region of nonzero values of this function, which extends previous work, and discuss the time and computational complexity costs and benefits of the method. We present a working implementation and demonstrate excellent agreement between this approach and Monte Carlo simulation results.

  6. A new formulation of the probability density function in random walk models for atmospheric dispersion

    DEFF Research Database (Denmark)

    Falk, Anne Katrine Vinther; Gryning, Sven-Erik

    In this model for atmospheric dispersion, particles are simulated by the Langevin equation, which is a stochastic differential equation. It uses the probability density function (PDF) of the vertical velocity fluctuations as input. The PDF is constructed as an expansion in Hermite polynomials. In several previous works where the PDF was expressed this way, further use was hampered by the fact that the PDF takes negative values for a range of velocities. This problem is overcome in the present formulation...
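The negativity problem mentioned in this abstract is easy to reproduce. As an illustrative sketch (not the authors' formulation; the skewness value is an assumption), the following builds a Gram-Charlier-type Hermite expansion of a standard-normal PDF and shows that the truncated series dips below zero:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gram_charlier_pdf(x, skew):
    """Standard-normal density times a Hermite-polynomial correction
    (Gram-Charlier A series, truncated at the skewness term)."""
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    # Coefficients of He_0..He_3 (probabilists' convention): 1, 0, 0, skew/6
    return phi * hermeval(x, [1.0, 0.0, 0.0, skew / 6.0])

x = np.linspace(-5, 5, 2001)
pdf = gram_charlier_pdf(x, skew=1.5)
print(pdf.min() < 0)   # True: the truncated expansion takes negative values
```

The expansion still integrates to one, but for a range of (negative) velocities the "density" is negative, which is exactly the obstacle the present formulation is designed to remove.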

  7. Probability Density Components Analysis: A New Approach to Treatment and Classification of SAR Images

    OpenAIRE

    Osmar Abílio de Carvalho Júnior; Luz Marilda de Moraes Maciel; Ana Paula Ferreira de Carvalho; Renato Fontes Guimarães; Cristiano Rosa Silva; Roberto Arnaldo Trancoso Gomes; Nilton Correia Silva

    2014-01-01

    Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR), which causes a usual noise-like granular aspect and complicates the image classification. In SAR image analysis, the spatial information might be a particular benefit for denoising and mapping classes characterized by a statistical distribution of the pixel intensities from a complex and heterogeneous spectral response. This paper proposes the Probability Density Components Analysis (PDCA), a new alternative that c...

  8. Analytical formulation of the single-visit completeness joint probability density function

    CERN Document Server

    Garrett, Daniel

    2016-01-01

    We derive an exact formulation of the multivariate integral representing the single-visit obscurational and photometric completeness joint probability density function for arbitrary distributions for planetary parameters. We present a derivation of the region of nonzero values of this function, which extends previous work, and discuss the time and computational complexity costs and benefits of the method. We present a working implementation and demonstrate excellent agreement between this approach and Monte Carlo simulation results.

  9. Energy Quantization and Probability Density of Electron in Intense-Field-Atom Interactions

    Institute of Scientific and Technical Information of China (English)

    敖淑艳; 程太旺; 李晓峰; 吴令安; 付盘铭

    2003-01-01

    We find that, due to the quantum correlation between the electron and the field, the electronic energy also becomes quantized, manifesting the particle aspect of light in the electron-light interaction. The probability amplitude of finding the electron with a given energy is given by a generalized Bessel function, which can be represented as a coherent superposition of contributions from a few electronic quantum trajectories. This concept is illustrated by comparing the spectral density of the electron with the laser-assisted recombination spectrum.

  10. Effect of Bias Estimation on Coverage Accuracy of Bootstrap Confidence Intervals for a Probability Density

    OpenAIRE

    Hall, Peter

    1992-01-01

    The bootstrap is a poor estimator of bias in problems of curve estimation, and so bias must be corrected by other means when the bootstrap is used to construct confidence intervals for a probability density. Bias may either be estimated explicitly, or allowed for by undersmoothing the curve estimator. Which of these two approaches is to be preferred? In the present paper we address this question from the viewpoint of coverage accuracy, assuming a given number of derivatives of the unknown den...
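The undersmoothing alternative discussed in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper; the sample, bandwidth constants, and evaluation point are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=400)   # assumed sample; true density at 0 is ~0.399
n = data.size

def kde(x0, sample, h):
    """Gaussian kernel density estimate at x0 with bandwidth h."""
    u = (x0 - sample) / h
    return float(np.mean(np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)) / h)

# Undersmoothed bandwidth: shrinks faster than the MSE-optimal n^(-1/5) rate,
# so the estimator's bias becomes negligible relative to the interval width.
h_us = 1.06 * data.std() * n ** (-1 / 4)

est = kde(0.0, data, h_us)
boot = [kde(0.0, rng.choice(data, n, replace=True), h_us) for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(est, 2), round(float(lo), 2), round(float(hi), 2))
```

With undersmoothing, no explicit bias estimate is added: the percentile interval [lo, hi] is used directly as the confidence interval for the density at the point, which is the trade-off against explicit bias correction that the paper examines from the coverage-accuracy viewpoint.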

  11. On the reliability of observational measurements of column density probability distribution functions

    CERN Document Server

    Ossenkopf, Volker; Schneider, Nicola; Federrath, Christoph; Klessen, Ralf S

    2016-01-01

    Probability distribution functions (PDFs) of column densities are an established tool to characterize the evolutionary state of interstellar clouds. Using simulations, we show to what degree their determination is affected by noise, line-of-sight contamination, field selection, and the incomplete sampling in interferometric measurements. We solve the integrals that describe the convolution of a cloud PDF with contaminating sources and study the impact of missing information on the measured column density PDF. The effect of observational noise can be easily estimated and corrected for if the root mean square (rms) of the noise is known. For $\sigma_{noise}$ values below 40% of the typical cloud column density, $N_{peak}$, this involves almost no degradation of the accuracy of the PDF parameters. For higher noise levels and narrow cloud PDFs the width of the PDF becomes increasingly uncertain. A contamination by turbulent foreground or background clouds can be removed as a constant shield if the PDF of the c...
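The noise effect described here can be illustrated with a toy model (the lognormal cloud and noise levels below are assumptions for illustration, not the authors' simulation set-up): a lognormal column-density PDF contaminated by Gaussian noise of increasing rms relative to the peak column density.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy cloud: lognormal column densities with N_peak = 1 in arbitrary units
# and a true log-PDF width of 0.4.
N_true = np.exp(rng.normal(0.0, 0.4, 100_000))

widths = {}
for noise_frac in (0.1, 0.4, 1.0):   # rms noise as a fraction of N_peak
    N_obs = N_true + rng.normal(0.0, noise_frac, N_true.size)
    # Width of the observed log-column-density PDF (positive pixels only)
    widths[noise_frac] = float(np.log(N_obs[N_obs > 0]).std())
    print(noise_frac, round(widths[noise_frac], 2))
```

The measured width inflates as the noise grows; the abstract's point is that, when the noise rms is known, this broadening can be estimated and corrected for, with narrow cloud PDFs and high noise levels being the problematic regime.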

  12. The effects of urbanization on population density, occupancy, and detection probability of wild felids.

    Science.gov (United States)

    Lewis, Jesse S; Logan, Kenneth A; Alldredge, Mat W; Bailey, Larissa L; VandeWoude, Sue; Crooks, Kevin R

    2015-10-01

    Urbanization is a primary driver of landscape conversion, with far-reaching effects on landscape pattern and process, particularly related to the population characteristics of animals. Urbanization can alter animal movement and habitat quality, both of which can influence population abundance and persistence. We evaluated three important population characteristics (population density, site occupancy, and species detection probability) of a medium-sized and a large carnivore across varying levels of urbanization. Specifically, we studied bobcat and puma populations across wildland, exurban development, and wildland-urban interface (WUI) sampling grids to test hypotheses evaluating how urbanization affects wild felid populations and their prey. Exurban development appeared to have a greater impact on felid populations than did habitat adjacent to a major urban area (i.e., WUI); estimates of population density for both bobcats and pumas were lower in areas of exurban development compared to wildland areas, whereas population density was similar between WUI and wildland habitat. Bobcats and pumas were less likely to be detected in habitat as the amount of human disturbance associated with residential development increased at a site, which was potentially related to reduced habitat quality resulting from urbanization. However, occupancy of both felids was similar between grids in both study areas, indicating that this population metric was less sensitive than density. At the scale of the sampling grid, detection probability for bobcats in urbanized habitat was greater than in wildland areas, potentially due to restrictive movement corridors and funneling of animal movements in landscapes influenced by urbanization. Occupancy of important felid prey (cottontail rabbits and mule deer) was similar across levels of urbanization, although elk occupancy was lower in urbanized areas. Our study indicates that the conservation of medium- and large-sized felids associated with

  13. Spectral discrete probability density function of measured wind turbine noise in the far field.

    Science.gov (United States)

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  14. Spectral discrete probability density function of measured wind turbine noise in the far field

    Directory of Open Access Journals (Sweden)

    Payam Ashtiani

    2015-04-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources.

  15. A unified optical damage criterion based on the probability density distribution of detector signals

    Science.gov (United States)

    Somoskoi, T.; Vass, Cs.; Mero, M.; Mingesz, R.; Bozoki, Z.; Osvay, K.

    2013-11-01

    Various methods and procedures have been developed to test laser-induced optical damage. The question naturally arises: what are the respective sensitivities of these diverse methods? To make a suitable comparison, the processing of the measured primary signal has to be at least similar across the various methods, and one needs to establish a proper damage criterion that is universally applicable to every method. We defined damage criteria based on the probability density distribution of the obtained detector signals, determined by the kernel density estimation procedure. We tested the entire evaluation procedure with four well-known detection techniques: direct observation of the sample by optical microscopy; monitoring of the change in the light scattering power of the target surface; and detection of the generated photoacoustic waves, both in the bulk of the sample and in the surrounding air.
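A rough sketch of a KDE-based damage criterion of this kind (the signal values and the threshold rule are invented for illustration, not the authors' data or exact criterion): estimate the detector-signal PDF with kernel density estimation and place the damage threshold at the density minimum between the "undamaged" and "damaged" modes.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical detector signals: a narrow "no damage" mode near 1.0 and a
# broader "damage" mode near 1.6 (invented numbers, arbitrary units).
signals = np.concatenate([rng.normal(1.0, 0.05, 300), rng.normal(1.6, 0.20, 60)])

kde = gaussian_kde(signals)          # kernel density estimate of the signal PDF
grid = np.linspace(0.8, 2.4, 800)
density = kde(grid)

# Illustrative criterion: damage threshold at the density minimum between modes.
between = (grid > 1.1) & (grid < 1.5)
threshold = grid[between][np.argmin(density[between])]
print(round(float(threshold), 2))
```

Because the criterion is defined on the estimated PDF rather than on the raw signal of any one detector, the same rule can be applied to microscopy, scattering, and photoacoustic channels alike, which is what makes the sensitivities comparable.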

  16. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    Science.gov (United States)

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications. PMID:27304274

  17. Joint probability density function of the stochastic responses of nonlinear structures

    Institute of Scientific and Technical Information of China (English)

    Chen Jianbing; Li Jie

    2007-01-01

    The joint probability density function (PDF) of different structural responses is a very important topic in the stochastic response analysis of nonlinear structures. In this paper, the probability density evolution method, which is successfully developed to capture the instantaneous PDF of an arbitrary single response of interest, is extended to evaluate the joint PDF of any two responses. A two-dimensional partial differential equation in terms of the joint PDF is established. The strategy of selecting representative points via the number theoretical method and sieved by a hyper-ellipsoid is outlined. A two-dimensional difference scheme is developed. The free vibration of an SDOF system is examined to verify the proposed method, and a frame structure exhibiting hysteresis subjected to stochastic ground motion is investigated. It is pointed out that the correlation of different responses results from the fact that randomness of different responses comes from the same set of basic random parameters involved. In other words, the essence of the probabilistic correlation is a physical correlation.

  18. Analysis of Observation Data of Earth-Rockfill Dam Based on Cloud Probability Distribution Density Algorithm

    Directory of Open Access Journals (Sweden)

    Han Liwei

    2014-07-01

    Monitoring data on an earth-rockfill dam constitute a form of spatial data. Such data include much uncertainty owing to the limitations of measurement information, material parameters, load, geometry size, initial conditions, boundary conditions and the calculation model, so the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertain transition between the qualitative concept and the quantitative description. An improved algorithm of cloud probability distribution density based on a backward cloud generator was then proposed and used to effectively convert parcels of accurate data into concepts that can be described by proper qualitative linguistic values. Such qualitative descriptions were expressed as the cloud numerical characteristics {Ex, En, He}, which can represent the characteristics of all cloud drops. The algorithm was then applied to analyze the observation data of a piezometric tube in an earth-rockfill dam. Experimental results proved that the proposed algorithm is feasible; with it, we could reveal the changing regularity of the piezometric tube's water level and detect damage from seepage in the dam body.

  19. Annihilating Asymmetric Dark Matter

    CERN Document Server

    Bell, Nicole F; Shoemaker, Ian M

    2014-01-01

    The relic abundance of particle and antiparticle dark matter (DM) need not be vastly different in thermal asymmetric dark matter (ADM) models. By considering the effect of a primordial asymmetry on the thermal Boltzmann evolution of coupled DM and anti-DM, we derive the requisite annihilation cross section. This is used in conjunction with CMB and Fermi-LAT gamma-ray data to impose a limit on the number density of anti-DM particles surviving thermal freeze-out. When the extended gamma-ray emission from the Galactic Center is reanalyzed in a thermal ADM framework, we find that annihilation into $\tau$ leptons prefers anti-DM number densities of 1-4% that of DM, while the $b$-quark channel prefers 50-100%.

  20. Positron annihilation

    International Nuclear Information System (INIS)

    The main work on positron annihilation at Harwell (UK) has been the application of the technique to technological problems concerning the effects of radiation damage and mechanical phenomena, such as fatigue and creep, on the properties of materials. Three experimental techniques for studying positron annihilation in solids are documented in this article: nuclear pulse counting methods, angular correlation, and the Doppler method. The irradiation of metals and alloys with fast neutrons at high temperatures in a reactor can cause voids to develop in the material. Defects are also produced by the plastic deformation of metals and alloys. This opens up the possibility of using positron annihilation as a practical non-destructive tool to assess mechanical damage in materials. Harwell has also been able to make measurements on the inside surface of a hole in a metal sample and on variously shaped notched and cracked test pieces, which means that it is possible to apply the technique to engineering components.

  1. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    Science.gov (United States)

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  2. Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert

    Science.gov (United States)

    Crowe, D.E.; Longshore, K.M.

    2010-01-01

    We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-04). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

  3. Multiple-streaming and the Probability Distribution of Density in Redshift Space

    CERN Document Server

    Hui, L; Shandarin, S F; Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    1999-01-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple-streaming using the Zel'dovich approximation (ZA), and compute the average number of streams in real and redshift-space. It is found that multiple-streaming can be significant in redshift-space but negligible in real-space, even at moderate values of the linear fluctuation amplitude ($\sigma < 1$). Moreover, unlike their real-space counterparts, redshift-space multiple-streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which operate even when the real-space density field is quite linear, could suppress the classic compression of redshift-structures predicted by linear theory (Kaiser 1987). We also compute using the ZA the probability distribution function (PDF) of density, as well as $S_3$, in real and redshift-space, and compare it with the PD...

  4. PDE-Foam - A probability density estimation method using self-adapting phase-space binning

    International Nuclear Information System (INIS)

    Probability density estimation (PDE) is a multi-variate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte-Carlo (MC) simulations in a multi-dimensional phase space. In this paper, we present a modification of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space in a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multi-dimensional phase space, minimising the variance of the signal and background densities inside the cells. The implementation of the binning algorithm (PDE-Foam) is based on the MC event-generation package Foam. We present performance results for representative examples (toy models) and discuss the dependence of the obtained results on the choice of parameters. The new PDE-Foam shows improved classification capability for small training samples and reduced classification time compared to the original PDE method based on range searching.

  5. Population statistics of beamed sources - III: Intrinsic probability density functions in the time domain

    CERN Document Server

    Liodakis, I

    2015-01-01

    In a companion paper we have constructed a new statistical model for blazar populations, which reproduces the apparent velocity and redshift distributions from the MOJAVE survey while assuming single power law distributions for the Lorentz factors and the unbeamed monochromatic radio luminosity. Treating two separate cases, one for the BL Lac objects (BL Lacs) and one for the Flat Spectrum Radio Quasars (FSRQs), we calculated the distribution of the timescale modulation factor $\Delta t'/\Delta t$ which quantifies the change in observed timescales compared to the rest-frame ones due to redshift and relativistic compression. We found that $\Delta t'/\Delta t$ follows an exponential distribution with a mean depending on the flux limit of the sample, for both classes. In this work we produce the mathematical formalism that allows us to use this information in order to uncover the underlying rest-frame probability density function (PDF) of observable/measurable timescales of blazar jets, by fitting their observe...

  6. Particle filters for probability hypothesis density filter with the presence of unknown measurement noise covariance

    Institute of Scientific and Technical Information of China (English)

    Wu Xinhui; Huang Gaoming; Gao Jun

    2013-01-01

    In Bayesian multi-target filtering, knowledge of measurement noise variance is very important. Significant mismatches in noise parameters will result in biased estimates. In this paper, a new particle filter for a probability hypothesis density (PHD) filter handling unknown measurement noise variances is proposed. The approach is based on marginalizing the unknown parameters out of the posterior distribution by using variational Bayesian (VB) methods. Moreover, the sequential Monte Carlo method is used to approximate the posterior intensity considering non-linear and non-Gaussian conditions. Unlike other particle filters for this challenging class of PHD filters, the proposed method can adaptively learn the unknown and time-varying noise variances while filtering. Simulation results show that the proposed method improves estimation accuracy in terms of both the number of targets and their states.

  7. ANNz2 - Photometric redshift and probability density function estimation using machine learning methods

    CERN Document Server

    Sadeh, Iftach; Lahav, Ofer

    2015-01-01

    We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister and Lahav (2004). Large photometric galaxy surveys are important for cosmological studies, and in particular for characterizing the nature of dark energy. The success of such surveys greatly depends on the ability to measure photo-zs, based on limited spectral data. ANNz2 utilizes multiple machine learning methods, such as artificial neural networks, boosted decision/regression trees and k-nearest neighbours. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions (PDFs) in two different ways. Furthermore, estimators are incorporated to mitigate possible problems of spectroscopic training samples which are not representative or are incomplete. ANNz2 is also adapted to provide optimized solution...

  8. Occupation probabilities and current densities of bulk and edge states of a Floquet topological insulator

    Science.gov (United States)

    Dehghani, Hossein; Mitra, Aditi

    2016-05-01

    Results are presented for the occupation probabilities and current densities of bulk and edge states of half-filled graphene in a cylindrical geometry and irradiated by a circularly polarized laser. It is assumed that the system is closed and that the laser has been switched on as a quench. Laser parameters corresponding to some representative topological phases are studied: one where the Chern number of the Floquet bands equals the number of chiral edge modes, a second where anomalous edge states appear in the Floquet Brillouin zone boundaries, and a third where the Chern number is zero, yet topological edge states appear at the center and boundaries of the Floquet Brillouin zone. Qualitative differences are found between the high-frequency off-resonant and low-frequency on-resonant laser: edge states arising from resonant processes are occupied with a high effective temperature, while edge states arising from off-resonant processes are occupied with a low effective temperature. For an ideal half-filled system where only one of the bands in the Floquet Brillouin zone is occupied and the other empty, particle-hole and inversion symmetry of the Floquet Hamiltonian implies zero current density. However the laser switch-on protocol breaks the inversion symmetry, resulting in a net cylindrical sheet of current density at steady state. Due to the underlying chirality of the system, this current density profile is associated with a net charge imbalance between the top and bottom of the cylinder.

  9. Calculation of probability density functions for temperature and precipitation change under global warming

    International Nuclear Information System (INIS)

    Full text: The IPCC Fourth Assessment Report (Meehl et al. 2007) presents multi-model means of the CMIP3 simulations as projections of the global climate change over the 21st century under several SRES emission scenarios. To assess the possible range of change for Australia based on the CMIP3 ensemble, we can follow Whetton et al. (2005) and use the 'pattern scaling' approach, which separates the uncertainty in the global mean warming from that in the local change per degree of warming. This study presents several ways of representing these two factors as probability density functions (PDFs). The beta distribution, a smooth, bounded function allowing skewness, is found to provide a useful representation of the range of CMIP3 results. A weighting of models based on their skill in simulating seasonal means in the present climate over Australia is included. Dessai et al. (2005) and others have used Monte Carlo sampling to recombine such global warming and scaled change factors into values of net change. Here, we use a direct integration of the product across the joint probability space defined by the two PDFs. The result is a cumulative distribution function (CDF) for change, for each variable, location, and season. The median of this distribution provides a best estimate of change, while the 10th and 90th percentiles represent a likely range. The probability of exceeding a specified threshold can also be extracted from the CDF. The presentation focuses on changes in Australian temperature and precipitation at 2070 under the A1B scenario. However, the assumption of linearity behind pattern scaling allows results for different scenarios and times to be simply obtained. In the case of precipitation, which must remain non-negative, a simple modification of the calculations (based on decreases being exponential with warming) is used to avoid unrealistic results. These approaches are currently being used for the new CSIRO/Bureau of Meteorology climate projections.
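The direct-integration step can be sketched as follows. All ranges and beta-distribution parameters below are invented placeholders, not the values used in the study; the point is only the mechanics of combining a beta-distributed global warming with a beta-distributed local scaling factor into a CDF of net change:

```python
import numpy as np
from scipy.stats import beta

# Invented placeholder ranges: global mean warming of 1-4 K, and a local
# change of 0.5-1.5 K per degree of global warming, each given a beta PDF.
w = np.linspace(1.0, 4.0, 400)                 # global mean warming (K)
s = np.linspace(0.5, 1.5, 400)                 # local change per degree (K/K)
pw = beta.pdf((w - 1.0) / 3.0, 2, 3) / 3.0     # warming PDF (scaled beta)
ps = beta.pdf((s - 0.5) / 1.0, 2, 2) / 1.0     # scaling-factor PDF

W, S = np.meshgrid(w, s)
# Joint probability mass on the grid (the two factors taken as independent).
joint = np.outer(ps, pw) * (w[1] - w[0]) * (s[1] - s[0])

def cdf_local_change(t):
    """P(net local change <= t) by direct integration over the joint density."""
    return float(joint[W * S <= t].sum())

print(round(cdf_local_change(2.0), 2))
```

Reading the median and the 10th/90th percentiles off this CDF gives the best estimate and likely range described in the abstract, without the sampling noise of a Monte Carlo recombination.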

  10. Probability Density Components Analysis: A New Approach to Treatment and Classification of SAR Images

    Directory of Open Access Journals (Sweden)

    Osmar Abílio de Carvalho Júnior

    2014-04-01

    Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR), which causes a characteristic noise-like granular aspect and complicates image classification. In SAR image analysis, the spatial information might be a particular benefit for denoising and mapping classes characterized by a statistical distribution of the pixel intensities from a complex and heterogeneous spectral response. This paper proposes the Probability Density Components Analysis (PDCA), a new alternative that combines filtering and frequency histograms to improve the classification procedure for single-channel SAR images. This method was tested on L-band SAR data from the Advanced Land Observation System (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) sensor. The study area is localized in the Brazilian Amazon rainforest, northern Rondônia State (municipality of Candeias do Jamari), containing forest and land-use patterns. The proposed algorithm uses a moving window over the image, estimating the probability density curve in different image components. Therefore, a single input image generates an output with multiple components. Initially the multi-components should be treated by noise-reduction methods, such as maximum noise fraction (MNF) or noise-adjusted principal components (NAPCs). Both methods reduce noise and order the multi-component data in terms of image quality. In this paper, the NAPC applied to the multi-components provided large reductions in the noise levels, and color composites considering the first NAPC enhance the classification of different surface features. In the spectral classification, the Spectral Correlation Mapper and Minimum Distance were used. The results obtained were similar to the visual interpretation of optical images from TM-Landsat and Google Maps.

  11. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
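One simple way such an encounter probability can be computed from areal fault density is sketched below. The numbers and the Poisson assumption are purely illustrative, not from the study:

```python
import math

# Invented numbers for illustration (not from the study): faults treated as
# a 2-D Poisson process with a given areal density.
fault_density = 0.002   # seal-offsetting faults per km^2
plume_area = 15.0       # projected plume footprint in km^2

# Probability that at least one such fault falls within the plume footprint.
p_encounter = 1.0 - math.exp(-fault_density * plume_area)
print(round(p_encounter, 3))   # ≈ 0.03, i.e. about a 3% encounter chance
```

Under this kind of model, the fault population statistics determine the density parameter, and the plume footprint from reservoir simulation supplies the area, which is why areal fault density statistics are the key input the abstract highlights.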

  12. Probability Density Function Characterization for Aggregated Large-Scale Wind Power Based on Weibull Mixtures

    Directory of Open Access Journals (Sweden)

    Emilio Gómez-Lázaro

    2016-02-01

    Full Text Available The Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations for aggregated wind power generation. The present paper therefore discusses Weibull mixtures for characterizing the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
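
    The model-selection step can be sketched roughly (this is not the paper's estimation procedure; a proper mixture fit would use maximum likelihood via EM). The snippet fits a single Weibull by the method of moments and a crude two-component Weibull mixture (hard assignment at an illustrative threshold of 8) to clearly bimodal synthetic data, then compares them with AIC and BIC.

```python
import math
import random

def weibull_logpdf(x, k, lam):
    # log density of Weibull(shape=k, scale=lam) at x > 0
    return math.log(k / lam) + (k - 1.0) * math.log(x / lam) - (x / lam) ** k

def fit_weibull(xs):
    """Method-of-moments fit: bisection on the shape k via the
    coefficient-of-variation identity, then the scale from the mean."""
    n = len(xs)
    mean = sum(xs) / n
    cv2 = sum((x - mean) ** 2 for x in xs) / n / mean ** 2
    f = lambda k: math.gamma(1 + 2 / k) / math.gamma(1 + 1 / k) ** 2 - 1 - cv2
    lo, hi = 0.15, 50.0
    for _ in range(80):                  # f is decreasing in k
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    k = 0.5 * (lo + hi)
    return k, mean / math.gamma(1 + 1 / k)

def loglik(xs, comps):
    # comps: list of (weight, shape, scale)
    return sum(math.log(sum(w * math.exp(weibull_logpdf(x, k, l))
                            for w, k, l in comps)) for x in xs)

def aic_bic(ll, n_par, n):
    return 2 * n_par - 2 * ll, n_par * math.log(n) - 2 * ll

rng = random.Random(7)
draw = lambda k, lam: lam * (-math.log(1 - rng.random())) ** (1 / k)
data = [draw(2.0, 3.0) for _ in range(1000)] + \
       [draw(4.0, 15.0) for _ in range(1000)]

# One-component model (2 parameters)
k1, l1 = fit_weibull(data)
aic1, bic1 = aic_bic(loglik(data, [(1.0, k1, l1)]), 2, len(data))

# Crude two-component mixture: hard split at the histogram valley (~8),
# then fit each part separately (5 parameters: two shape/scale pairs + weight)
lo_part = [x for x in data if x < 8.0]
hi_part = [x for x in data if x >= 8.0]
w = len(lo_part) / len(data)
comps = [(w, *fit_weibull(lo_part)), (1 - w, *fit_weibull(hi_part))]
aic2, bic2 = aic_bic(loglik(data, comps), 5, len(data))
print(bic2 < bic1)   # the mixture wins on clearly bimodal data
```

Even with the BIC's stronger penalty on the three extra parameters, the mixture's log-likelihood gain on bimodal data dominates, matching the paper's conclusion that multi-Weibull models suit aggregated wind power.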

  13. Development and evaluation of probability density functions for a set of human exposure factors

    International Nuclear Information System (INIS)

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999 will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  14. Development and evaluation of probability density functions for a set of human exposure factors

    Energy Technology Data Exchange (ETDEWEB)

    Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.

    1999-06-01

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999 will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  15. The probability density function of the multiplication factor due to small, random displacements of fissile spheres

    International Nuclear Information System (INIS)

    An analytical expression is obtained for the probability density function of the multiplication factor of an array of spheres when each sphere is displaced in a random fashion from its initial position. Two cases are considered: (1) spheres in an infinite background medium in which the total cross section in spheres and medium is the same, and (2) spheres in a void. In all cases we use integral transport theory and cast the problem into one involving average fluxes in the spheres which interact via collision probabilities. The statistical aspects of the problem are treated by first-order perturbation theory and the general conclusion is that, when the number of spheres exceeds about 5, the reduced multiplication factor ξ = (k − k0)/k0, where k0 is the unperturbed value, is given accurately by the Gaussian distribution P(ξ) = [1/(√(2π) σ DT)] exp(−ξ²/(2σ²DT²)). The partial standard deviation is σ = 2δ/√3, δ being the maximum movement of a sphere from its equilibrium position, and DT is a function of the system properties and geometry. Some numerical results are given to illustrate the magnitude of the effects, and the accuracy of diffusion theory for this type of problem is also assessed. The overall accuracy of the perturbation method is assessed against an essentially exact result obtained using simulation, thereby enabling the range of validity of perturbation theory to be investigated.

  16. Models for the probability densities of the turbulent plasma flux in magnetized plasmas

    Science.gov (United States)

    Bergsaker, A. S.; Fredriksen, Å; Pécseli, H. L.; Trulsen, J. K.

    2015-10-01

    Observations of turbulent transport in magnetized plasmas indicate that plasma losses can be due to coherent structures or bursts of plasma rather than a classical random walk or diffusion process. A model for synthetic data based on coherent plasma flux events is proposed, where all basic properties can be obtained analytically in terms of a few control parameters. One basic parameter in the present case is the density of burst events in a long time-record, together with parameters in a model of the individual pulse shapes and the statistical distribution of these parameters. The model and its extensions give the probability density of the plasma flux. An interesting property of the model is a prediction of a near-parabolic relation between skewness and kurtosis of the statistical flux distribution for a wide range of parameters. The model is generalized by allowing for an additive random noise component. When this noise dominates the signal we can find a transition to standard results for Gaussian random noise. Applications of the model are illustrated by data from the toroidal Blaamann plasma.
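
    The near-parabolic skewness-kurtosis relation can be checked for one standard special case (a sketch, not the authors' full pulse model): under the common assumption that shot noise built from exponential pulses with exponentially distributed amplitudes has a Gamma stationary distribution with shape γ set by the burst density, skewness S = 2/√γ and flatness F = 3 + 6/γ, so F = 3 + (3/2)S² exactly. The snippet verifies the parabola numerically from the density itself.

```python
import math

def gamma_pdf(x, g):
    """Gamma(shape=g, scale=1) probability density."""
    return x ** (g - 1.0) * math.exp(-x) / math.gamma(g)

def skew_flatness(g, xmax=100.0, n=40000):
    """Skewness and flatness of Gamma(g), computed from the density
    by simple quadrature."""
    dx = xmax / n
    xs = [i * dx for i in range(1, n + 1)]   # skip x = 0 (pdf -> 0 for g > 1)
    p = [gamma_pdf(x, g) for x in xs]
    norm = sum(p) * dx
    mom = lambda f: sum(f(x) * q for x, q in zip(xs, p)) * dx / norm
    m = mom(lambda x: x)
    var = mom(lambda x: (x - m) ** 2)
    skew = mom(lambda x: (x - m) ** 3) / var ** 1.5
    flat = mom(lambda x: (x - m) ** 4) / var ** 2
    return skew, flat

# The burst density controls the shape parameter g; across g the points
# (S, F) fall on the parabola F = 3 + 1.5 * S^2.
for g in (1.5, 2.0, 4.0, 8.0):
    S, F = skew_flatness(g)
    print(g, round(F - (3.0 + 1.5 * S * S), 3))
```

The residual printed for each g should be close to zero, illustrating how a one-parameter family of flux PDFs pins the skewness-kurtosis pairs to a single parabola.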

  17. Entrainment Rate in Shallow Cumuli: Dependence on Entrained Dry Air Sources and Probability Density Functions

    Science.gov (United States)

    Lu, C.; Liu, Y.; Niu, S.; Vogelmann, A. M.

    2012-12-01

    In situ aircraft cumulus observations from the RACORO field campaign are used to estimate entrainment rate for individual clouds using a recently developed mixing fraction approach. The entrainment rate is computed based on the observed state of the cloud core and the state of the air that is laterally mixed into the cloud at its edge. The computed entrainment rate decreases when the air is entrained from increasing distance from the cloud core edge; this is because the air farther away from cloud edge is drier than the neighboring air that is within the humid shells around cumulus clouds. Probability density functions of entrainment rate are well fitted by lognormal distributions at different heights above cloud base for different dry air sources (i.e., different source distances from the cloud core edge). Such lognormal distribution functions are appropriate for inclusion into future entrainment rate parameterization in large scale models. To the authors' knowledge, this is the first time that probability density functions of entrainment rate have been obtained in shallow cumulus clouds based on in situ observations. The reason for the wide spread of entrainment rate is that the observed clouds are affected by entrainment mixing processes to different extents, which is verified by the relationships between the entrainment rate and cloud microphysics/dynamics. The entrainment rate is negatively correlated with liquid water content and cloud droplet number concentration due to the dilution and evaporation in entrainment mixing processes. The entrainment rate is positively correlated with relative dispersion (i.e., ratio of standard deviation to mean value) of liquid water content and droplet size distributions, consistent with the theoretical expectation that entrainment mixing processes are responsible for microphysics fluctuations and spectral broadening. 
The entrainment rate is negatively correlated with vertical velocity and dissipation rate because entrainment
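
    A minimal version of the lognormal-fitting step (with illustrative synthetic data, not the RACORO measurements) is to take logs of the positive rate values and estimate the Gaussian parameters of the logs:

```python
import math
import random

def fit_lognormal(values):
    """Estimate (mu, sigma) of a lognormal from the moments of log(values)."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((y - mu) ** 2 for y in logs) / (n - 1))
    return mu, sigma

rng = random.Random(42)
# Synthetic "entrainment rates" (km^-1): lognormal with illustrative
# parameters mu = log(0.5), sigma = 0.8
true_mu, true_sigma = math.log(0.5), 0.8
rates = [math.exp(rng.gauss(true_mu, true_sigma)) for _ in range(5000)]

mu_hat, sigma_hat = fit_lognormal(rates)
print(round(mu_hat, 2), round(sigma_hat, 2))
```

Because the log of a lognormal variable is exactly Gaussian, this moment fit is also the maximum-likelihood fit, which is why lognormal PDFs are convenient candidates for entrainment-rate parameterizations.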

  18. Representation of layer-counted proxy records as probability densities on error-free time axes

    Science.gov (United States)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2016-04-01

    Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. 
On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the

  19. Defects and Electron Densities in TiAl-based Alloys Containing Mn and Cu Studied by Positron Annihilation

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The defects and electron densities in Ti50Al50, Ti50Al48Mn2 and Ti50Al48Cu2 alloys have been studied by positron lifetime measurements. The results show that the free electron density in the bulk of binary TiAl alloy is lower than that of pure Ti or Al metal. The open volume of defects on the grain boundaries of binary TiAl alloy is larger than that of a monovacancy of Al metal. The additions of Mn and Cu into Ti-rich TiAl alloy increase the free electron densities in the bulk and at the grain boundaries simultaneously, since a Mn or Cu atom occupying an Al site provides more free electrons participating in metallic bonds than an Al atom does. It is also found that the free electron density at the grain boundaries of Ti50Al48Cu2 is higher than that of Ti50Al48Mn2 alloy, while the free electron density in the bulk of Ti50Al48Cu2 is lower than that of Ti50Al48Mn2 alloy. The behaviors of Mn and Cu atoms in TiAl alloy are discussed.

  20. Probability density function and estimation for error of digitized map coordinates in GIS

    Institute of Scientific and Technical Information of China (English)

    童小华; 刘大杰

    2004-01-01

    Traditionally, it is widely accepted that measurement error obeys the normal distribution. In this paper, however, a new idea is proposed: the error in digitized data, a major derived data source in GIS, does not obey the normal distribution but rather the p-norm distribution with a determinate parameter. Assuming that the error is random and has the same statistical properties, the probability density functions of the normal distribution, the Laplace distribution and the p-norm distribution are derived based on the arithmetic-mean axiom, the median axiom and the p-median axiom, which shows that the normal distribution is only one of these distributions, not the only possible one. Based on this idea, distribution fitness tests such as the skewness and kurtosis coefficient tests, the Pearson chi-square (χ²) test and the Kolmogorov test are conducted for digitized data. The results show that the error in map digitization obeys the p-norm distribution with a parameter close to 1.60. Least p-norm estimation and least-squares estimation of digitized data are further analyzed, showing that the least p-norm adjustment is better than the least-squares adjustment for digitized data processing in GIS.
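
    A hedged sketch of the least p-norm idea (illustrative only; the paper's adjustment model for digitized coordinates is more involved): draw noise from a p-norm (generalized Gaussian) distribution with p = 1.6 and recover the location parameter by minimizing the sum of p-th powers of the residuals, which for p-norm noise is the maximum-likelihood estimate. The cost is convex for p > 1, so a golden-section search suffices.

```python
import math
import random

P = 1.6   # p-norm parameter reported for digitized-map error

def gg_sample(rng, p, loc=0.0, scale=1.0):
    """Generalized-Gaussian (p-norm) variate: |X|^p ~ Gamma(1/p, 1)."""
    mag = rng.gammavariate(1.0 / p, 1.0) ** (1.0 / p)
    return loc + scale * mag * rng.choice((-1.0, 1.0))

def least_p_norm_location(xs, p, iters=100):
    """Golden-section search for argmin_mu sum |x - mu|^p (convex, p > 1)."""
    cost = lambda mu: sum(abs(x - mu) ** p for x in xs)
    a, b = min(xs), max(xs)
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if cost(c) < cost(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

rng = random.Random(3)
true_loc = 3.0
xs = [gg_sample(rng, P, loc=true_loc) for _ in range(5000)]
print(round(least_p_norm_location(xs, P), 2))   # close to 3.0
```

With p = 2 the same minimization reduces to the sample mean (least squares); with p = 1 it approaches the median, matching the axioms contrasted in the abstract.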

  1. Triggering word learning in children with Language Impairment: the effect of phonotactic probability and neighbourhood density.

    Science.gov (United States)

    McKean, Cristina; Letts, Carolyn; Howard, David

    2014-11-01

    The effect of phonotactic probability (PP) and neighbourhood density (ND) on triggering word learning was examined in children with Language Impairment (3;04-6;09) and compared to Typically Developing children. Nonwords, varying PP and ND orthogonally, were presented in a story context and their learning tested using a referent identification task. Group comparisons with receptive vocabulary as a covariate found no group differences in overall scores or in the influence of PP or ND. Therefore, there was no evidence of atypical lexical or phonological processing. 'Convergent' PP/ND (High PP/High ND; Low PP/Low ND) was optimal for word learning in both groups. This bias interacted with vocabulary knowledge. 'Divergent' PP/ND word scores (High PP/Low ND; Low PP/High ND) were positively correlated with vocabulary so the 'divergence disadvantage' reduced as vocabulary knowledge grew; an interaction hypothesized to represent developmental changes in lexical-phonological processing linked to the emergence of phonological representations. PMID:24191951

  2. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    Science.gov (United States)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher-order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining the higher-order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second-order statistics and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
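
    As a toy illustration of how a particle filter represents a state PDF by samples (a generic bootstrap filter on a one-dimensional problem, not the orbit-determination setup of the paper), the sketch below tracks a slowly drifting scalar state from noisy observations; the particle cloud is the sampled PDF, and its mean after resampling is the state estimate. All noise levels are illustrative.

```python
import math
import random

def bootstrap_pf(obs, n_particles=1000, proc_std=0.05, obs_std=0.5, seed=0):
    """Minimal bootstrap particle filter for x_k = x_{k-1} + w, y_k = x_k + v."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 2.0) for _ in range(n_particles)]  # diffuse prior
    for y in obs:
        # Predict: propagate each particle through the process model
        parts = [x + rng.gauss(0.0, proc_std) for x in parts]
        # Update: weight particles by the observation likelihood
        ws = [math.exp(-0.5 * ((y - x) / obs_std) ** 2) for x in parts]
        total = sum(ws)
        ws = [w / total for w in ws]
        # Resample (multinomial) to avoid weight degeneracy
        parts = rng.choices(parts, weights=ws, k=n_particles)
    return sum(parts) / n_particles   # posterior mean of the sampled PDF

rng = random.Random(1)
true_x = 2.0
observations = [true_x + rng.gauss(0.0, 0.5) for _ in range(50)]
estimate = bootstrap_pf(observations)
print(round(estimate, 1))
```

The full particle set, not just its mean, is the PDF representation; compressing that set with little loss of higher-order structure is exactly the problem the ICA step in the abstract addresses.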

  3. Probability density function modeling of dispersed two-phase turbulent flows

    Science.gov (United States)

    Pozorski, Jacek; Minier, Jean-Pierre

    1999-01-01

    This paper discusses stochastic approaches to dispersed two-phase flow modeling. A general probability density function (PDF) formalism is used since it provides a common and convenient framework to analyze the relations between different formulations. For two-phase flow PDF modeling, a key issue is the choice of the state variables. In a first formulation, they include only the position and velocity of the dispersed particles. The kinetic equation satisfied by the corresponding PDF is derived in a different way using tools from the theory of stochastic differential equations. The final expression is identical to an earlier proposal by Reeks [Phys. Fluids A 4, 1290 (1992)] obtained with a different method. As the kinetic equation involves the instantaneous fluid velocity sampled along the particle trajectories, it is unclosed. Another, more general, formulation is then presented, where the fluid velocity ``seen'' by the solid particles along their paths is added to the state variables. A diffusion model, where trajectories of the process follow a Langevin type of equation, is proposed for the time evolution equation of the fluid velocity ``seen'' and is discussed. A general PDF formulation that includes both fluid and particle variables, and from which both fluid and particle mean equations can be obtained, is then put forward.
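
    A minimal numerical sketch of the kind of Langevin model discussed for the fluid velocity "seen" (an Ornstein-Uhlenbeck process with illustrative timescale and noise level, not the paper's full closure): the drift relaxes the velocity over a timescale T_L while white noise forces it, giving a stationary variance of C0·T_L/2.

```python
import math
import random

def simulate_ou(T_L=1.0, C0=2.0, dt=0.01, steps=200_000, seed=5):
    """Euler-Maruyama for dU = -(U / T_L) dt + sqrt(C0) dW.
    The stationary variance is C0 * T_L / 2."""
    rng = random.Random(seed)
    u, samples = 0.0, []
    for _ in range(steps):
        u += -(u / T_L) * dt + math.sqrt(C0 * dt) * rng.gauss(0.0, 1.0)
        samples.append(u)
    tail = samples[steps // 10:]            # discard the initial transient
    mean = sum(tail) / len(tail)
    return sum((x - mean) ** 2 for x in tail) / len(tail)

var_hat = simulate_ou()
print(round(var_hat, 2))   # theory: C0 * T_L / 2 = 1.0
```

In PDF methods this Langevin trajectory model is equivalent to a Fokker-Planck equation for the velocity PDF, which is why diffusion processes of this type are the standard building block for the "fluid velocity seen" in the state vector.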

  4. Calculations of subsonic and supersonic turbulent reacting mixing layers using probability density function methods

    Science.gov (United States)

    Delarue, B. J.; Pope, S. B.

    1998-02-01

    A particle method applying the probability density function (PDF) approach to turbulent compressible reacting flows is presented. The method is applied to low and high Mach number reacting plane mixing layers. Good agreement is obtained between the model calculations and the available experimental data. The PDF equation is solved using a Lagrangian Monte Carlo method. To represent the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. Full closure of the joint PDF transport equation is made possible in this way without coupling to a finite-difference-type solver. The stochastic differential equations (SDE) that model the evolution of Lagrangian particle properties are based on existing models for the effects of compressibility on turbulence. The chemistry studied is the fast hydrogen-fluorine reaction. For the low Mach number runs, low heat release calculations are performed with equivalence ratios different from one. Heat release is then increased to study the effect of chemical reaction on the mixing layer growth rate. The subsonic results are compared with experimental data, and good overall agreement is obtained. The calculations are then performed at a higher Mach number, and the results are compared with the subsonic results. Our purpose in this paper is not to assess the performances of existing models for compressible or reacting flows. It is rather to present a new approach extending the domain of applicability of PDF methods to high-speed combustion.

  5. Power probability density function control and performance assessment of a nuclear research reactor

    International Nuclear Information System (INIS)

    Highlights: • In this paper, the performance assessment of a static PDF control system is discussed. • The reactor PDF model is set up based on B-spline functions. • The neutronics and thermal-hydraulics equations are solved concurrently by a reformed Hansen's method. • A principle of performance assessment is put forward for the PDF control of the nuclear research reactor. - Abstract: One of the main issues in controlling a system is to keep track of its operating condition. The performance of the system should be inspected continuously to keep it in reliable working condition. In this study, the nuclear reactor is considered as a complicated system, and a principle of performance assessment is used to analyze the performance of the power probability density function (PDF) control of a nuclear research reactor. First, the model of the power PDF is set up; then the controller is designed to make the power PDF track a given shape, which makes the reactor a closed-loop system. The operating data of the closed-loop reactor are used to assess the control performance against the performance assessment criteria. The modeling, controller design and performance assessment of the power PDF are all applied to the control of Tehran Research Reactor (TRR) power in a nuclear process. In this paper, the performance assessment of the static PDF control system is discussed, the efficacy and efficiency of the proposed method are investigated, and finally its reliability is proven.
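
    The B-spline representation of a PDF mentioned in the highlights can be sketched in its simplest form, a linear (hat-function) B-spline basis interpolating a known density on a grid; the target density, knot spacing and evaluation range below are all illustrative, not taken from the TRR model.

```python
import math

def hat(u):
    # Linear B-spline (hat) basis function, supported on [-1, 1]
    return max(0.0, 1.0 - abs(u))

def bspline_pdf(x, knots, weights):
    """PDF model: sum_i w_i * B_i(x), hat functions centered at the knots."""
    h = knots[1] - knots[0]
    return sum(w * hat((x - t) / h) for w, t in zip(weights, knots))

target = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# With hat functions, interpolation is trivial: weight_i = target(knot_i)
knots = [-6.0 + 0.1 * i for i in range(121)]
weights = [target(t) for t in knots]

# The model should integrate to ~1 and match the target closely
h = 0.1
integral = sum(w * h for w in weights)       # each hat basis has area h
max_err = max(abs(bspline_pdf(x, knots, weights) - target(x))
              for x in [-3.0 + 0.01 * k for k in range(601)])
print(round(integral, 3), max_err < 1e-3)
```

Higher-order B-splines give smoother models with the same linear-in-weights structure, which is what makes the basis convenient for PDF shape control: the controller acts on a finite weight vector rather than on a function.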

  6. Probability density functions of the average and difference intensities of Friedel opposites.

    Science.gov (United States)

    Shmueli, U; Flack, H D

    2010-11-01

    Trigonometric series for the average (A) and difference (D) intensities of Friedel opposites were carefully rederived and were normalized to minimize their dependence on sin(theta)/lambda. Probability density functions (hereafter p.d.f.s) of these series were then derived by the Fourier method [Shmueli, Weiss, Kiefer & Wilson (1984). Acta Cryst. A40, 651-660] and their expressions, which admit any chemical composition of the unit-cell contents, were obtained for the space group P1. Histograms of A and D were then calculated for an assumed random-structure model and for 3135 Friedel pairs of a published solved crystal structure, and were compared with the p.d.f.s after the latter were scaled up to the histograms. Good agreement was obtained for the random-structure model and qualitative agreement for the published solved structure. The results indicate that the residual discrepancy is mainly due to the presumed statistical independence of the contributions of the interatomic vectors in the p.d.f.'s characteristic function. PMID:20962376

  7. Micro-object motion tracking based on the probability hypothesis density particle tracker.

    Science.gov (United States)

    Shi, Chunmei; Zhao, Lingling; Wang, Junjie; Zhang, Chiping; Su, Xiaohong; Ma, Peijun

    2016-04-01

    Tracking micro-objects in the noisy microscopy image sequences is important for the analysis of dynamic processes in biological objects. In this paper, an automated tracking framework is proposed to extract the trajectories of micro-objects. This framework uses a probability hypothesis density particle filtering (PF-PHD) tracker to implement a recursive state estimation and trajectories association. In order to increase the efficiency of this approach, an elliptical target model is presented to describe the micro-objects using shape parameters instead of point-like targets which may cause inaccurate tracking. A novel likelihood function, not only covering the spatiotemporal distance but also dealing with geometric shape function based on the Mahalanobis norm, is proposed to improve the accuracy of particle weight in the update process of the PF-PHD tracker. Using this framework, a larger number of tracks are obtained. The experiments are performed on simulated data of microtubule movements and real mouse stem cells. We compare the PF-PHD tracker with the nearest neighbor method and the multiple hypothesis tracking method. Our PF-PHD tracker can simultaneously track hundreds of micro-objects in the microscopy image sequence. PMID:26084407

  8. Homogeneous clusters over India using probability density function of daily rainfall

    Science.gov (United States)

    Kulkarni, Ashwini

    2016-04-01

    The Indian landmass has been divided into homogeneous clusters by applying cluster analysis to the probability density function of a century-long time series of daily summer monsoon (June through September) rainfall at 357 grids over India, each of approximately 100 km × 100 km. The analysis gives five clusters over the Indian landmass; only cluster 5 is a contiguous region, and all other clusters are dispersed, which confirms the erratic behavior of daily rainfall over India. The area-averaged seasonal rainfall over cluster 5 has a very strong relationship with Indian summer monsoon rainfall; also, the rainfall variability over this region is modulated by the most important mode of the climate system, i.e., the El Niño Southern Oscillation (ENSO). This cluster could be considered as representative of the entire Indian landmass for examining monsoon variability. The two-sample Kolmogorov-Smirnov test supports that the cumulative distribution functions of daily rainfall over cluster 5 and India as a whole do not differ significantly. The clustering algorithm is also applied to two time epochs, 1901-1975 and 1976-2010, to examine possible changes in the clusters in the recent warming period. The clusters are drastically different in the two periods: they are more dispersed in the recent period, implying a more erratic distribution of daily rainfall in recent decades.
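
    The two-sample Kolmogorov-Smirnov comparison used above can be sketched directly (a generic implementation with illustrative Gaussian data, not the rainfall records): the statistic is the maximum gap between the two empirical CDFs, compared against the large-sample critical value c(α)·sqrt((n+m)/(n·m)).

```python
import math
import random

def ks_2samp_stat(a, b):
    """Two-sample KS statistic: maximum gap between the empirical CDFs."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

def ks_critical(na, nb, c_alpha=1.358):
    """Large-sample critical value at alpha = 0.05 (c(0.05) ~= 1.358)."""
    return c_alpha * math.sqrt((na + nb) / (na * nb))

rng = random.Random(9)
same1 = [rng.gauss(0.0, 1.0) for _ in range(500)]
same2 = [rng.gauss(0.0, 1.0) for _ in range(500)]     # same distribution
shifted = [rng.gauss(0.5, 1.0) for _ in range(500)]   # shifted distribution

d_same = ks_2samp_stat(same1, same2)    # usually below the critical value
d_diff = ks_2samp_stat(same1, shifted)  # well above it for this shift
crit = ks_critical(500, 500)
print(round(d_same, 3), round(d_diff, 3), round(crit, 3))
```

A statistic below the critical value, as for cluster 5 versus all-India rainfall in the abstract, means the two empirical distributions cannot be distinguished at the chosen significance level.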

  9. Elliptical Anisotropy Statistics of Two-Dimensional Differentiable Gaussian Random Fields: Joint Probability Density Function and Confidence Regions

    CERN Document Server

    Petrakis, Manolis P

    2012-01-01

    Two-dimensional data often have autocovariance functions with elliptical equipotential contours, a property known as statistical anisotropy. The anisotropy parameters include the tilt of the ellipse (orientation angle) $\theta$ with respect to the coordinate system and the ratio $R$ of the principal correlation lengths. Sample estimates of anisotropy parameters are needed for defining suitable spatial models and for interpolation of incomplete data. The sampling joint probability density function characterizes the distribution of anisotropy statistics $(\hat{R}, \hat{\theta})$. By means of analytical calculations, we derive an explicit expression for the joint probability density function, which is valid for Gaussian, stationary and differentiable random fields. Based on it, we derive an approximation of the joint probability density function that is independent of the autocovariance function and provides conservative confidence regions for the sample-based estimates $(\hat{R}, \hat{\theta})$. We also formulat...

  10. First-passage-time densities and avoiding probabilities for birth-and-death processes with symmetric sample paths

    OpenAIRE

    Di Crescenzo, Antonio

    1998-01-01

    For truncated birth-and-death processes with two absorbing or two reflecting boundaries, necessary and sufficient conditions on the transition rates are given such that the transition probabilities satisfy a suitable spatial symmetry relation. This allows one to obtain simple expressions for first-passage-time densities and for certain avoiding transition probabilities. An application to an M/M/1 queueing system with two finite sequential queueing rooms of equal sizes is finall...

  11. The role of presumed probability density functions in the simulation of nonpremixed turbulent combustion

    Science.gov (United States)

    Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.

    2016-07-01

    Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not take into account the interaction between turbulence and chemical kinetics, nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely, the Sandia flames and H2-air supersonic combustion.
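
    The role of the presumed β-PDF can be made concrete with a small sketch (an illustrative temperature-like function and illustrative moments, not the FPV tables of the paper): given the mean and variance of Z, the β shape parameters follow in closed form, and the turbulent mean of any φ(Z) is its integral against the β density; a Dirac presumed PDF simply evaluates φ at the mean.

```python
import math

def beta_params(z_mean, z_var):
    """Shape parameters of the Beta PDF matching a given mean and variance."""
    g = z_mean * (1.0 - z_mean) / z_var - 1.0   # requires realizable variance
    return z_mean * g, (1.0 - z_mean) * g

def beta_logpdf(z, a, b):
    # log-space normalization avoids overflow for large shape parameters
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1.0) * math.log(z) + (b - 1.0) * math.log(1.0 - z) - log_norm

def presumed_beta_mean(phi, z_mean, z_var, n=4000):
    """Mean of phi(Z) under the presumed Beta PDF (Riemann sum on (0, 1))."""
    a, b = beta_params(z_mean, z_var)
    dz = 1.0 / n
    zs = [i * dz for i in range(1, n)]          # open grid avoids endpoints
    pdf = [math.exp(beta_logpdf(z, a, b)) for z in zs]
    norm = sum(pdf) * dz
    return sum(phi(z) * p for z, p in zip(zs, pdf)) * dz / norm

# Illustrative flamelet-like quantity, quadratic in Z and peaking at Z = 0.5
phi = lambda z: 300.0 + 6000.0 * z * (1.0 - z)

z_mean = 0.3
dirac_value = phi(z_mean)                            # Dirac presumed PDF
beta_small = presumed_beta_mean(phi, z_mean, 1e-4)   # small Z variance
beta_large = presumed_beta_mean(phi, z_mean, 1e-2)   # larger Z variance
print(round(dirac_value, 1), round(beta_small, 1), round(beta_large, 1))
```

As the variance shrinks, the β average converges to the Dirac value; at larger variance the two differ, which is exactly the turbulence-chemistry interaction the Dirac closure for C neglects.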

  12. Accuracy of the non-relativistic approximation to relativistic probability densities for a low-speed weak-gravity system

    Science.gov (United States)

    Liang, Shiuan-Ni; Lan, Boon Leong

    2015-11-01

    The Newtonian and general-relativistic position and velocity probability densities, which are calculated from the same initial Gaussian ensemble of trajectories using the same system parameters, are compared for a low-speed weak-gravity bouncing ball system. The Newtonian approximation to the general-relativistic probability densities does not always break down rapidly if the trajectories in the ensembles are chaotic; the rapid breakdown occurs only if the initial position and velocity standard deviations are sufficiently small. This result is in contrast to the previously studied single-trajectory case, where the Newtonian approximation to a general-relativistic trajectory will always break down rapidly if the two trajectories are chaotic. Similar rapid breakdown of the Newtonian approximation to the general-relativistic probability densities should also occur for other low-speed weak-gravity chaotic systems, since it is due to sensitivity to the small difference between the two dynamical theories at low speed and weak gravity. For the bouncing ball system, the breakdown of the Newtonian approximation is transient, because the Newtonian and general-relativistic probability densities eventually converge to invariant densities which are in close agreement.

  13. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows.

    Science.gov (United States)

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

    This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up by particle position and velocity Z(p)=(x(p),U(p)) and is represented by its PDF p(t; y(p),V(p)) which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables into the particle state vector, for example, the fluid velocity seen by particles Z(p)=(x(p),U(p),U(s)), and, consequently, handles an extended PDF p(t; y(p),V(p),V(s)) which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian characteristic of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to describe physical systems.

  14. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows

    Science.gov (United States)

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

    This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up by particle position and velocity Zp=(xp,Up) and is represented by its PDF p(t; yp,Vp) which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables into the particle state vector, for example, the fluid velocity seen by particles Zp=(xp,Up,Us), and, consequently, handles an extended PDF p(t; yp,Vp,Vs) which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian characteristic of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to describe physical systems.
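The Langevin model for the "fluid velocity seen" mentioned in this record is an Ornstein-Uhlenbeck process, i.e. a Gaussian colored noise driving the particle velocity. A minimal sketch follows; all parameter values and the explicit Euler discretization are assumptions for illustration, not the paper's formulation.

```python
import math, random

random.seed(42)

# Illustrative parameters (assumed): fluid-seen integral time scale T_L,
# fluid-seen standard deviation sigma_s, particle relaxation time tau_p,
# and an explicit Euler time step dt.
T_L, sigma_s, tau_p, dt = 1.0, 0.5, 0.3, 0.001
n_steps = 200_000

Us = Up = 0.0
samples = []
for i in range(n_steps):
    # Langevin (Ornstein-Uhlenbeck) model for the fluid velocity seen:
    # Gaussian colored noise with exponential autocorrelation of scale T_L.
    Us += -Us / T_L * dt + sigma_s * math.sqrt(2.0 * dt / T_L) * random.gauss(0.0, 1.0)
    # The particle velocity relaxes toward the fluid seen (drag-like term).
    Up += (Us - Up) / tau_p * dt
    if i >= n_steps // 2:          # discard the transient
        samples.append(Us)

# The analytic stationary variance of Us for this model is sigma_s**2 = 0.25.
var_Us = sum(u * u for u in samples) / len(samples)
print(round(var_Us, 2))
```

The pair (Up, Us) is Markovian while Up alone is not, which is the structural point behind preferring the extended (dynamic) PDF description over the kinetic one.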

  15. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    Science.gov (United States)

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  16. Annihilator-semigroup rings

    OpenAIRE

    Anderson, D D; Victor Camillo

    2003-01-01

    Let $ R $ be a commutative ring with 1. We define $ R $ to be an annihilator-semigroup ring if $ R $ has an annihilator-semigroup $ S $, that is, $ (S, \\cdot) $ is a multiplicative subsemigroup of $ (R, \\cdot) $ with the property that for each $ r \\in R $ there exists a unique $ s \\in S $ with $ 0 : r = 0 : s $. In this paper we investigate annihilator-semigroups and annihilator-semigroup rings.

  17. A Galerkin-based formulation of the probability density evolution method for general stochastic finite element systems

    Science.gov (United States)

    Papadopoulos, Vissarion; Kalogeris, Ioannis

    2016-05-01

    The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (PDEs) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, by rewriting the FP equation conditioned on the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional PDE. Two Galerkin finite element schemes are proposed for the numerical solution of the resulting PDEs, namely a time-marching discontinuous Galerkin scheme and the Streamline-Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular of the SUPG scheme, over finite difference schemes such as the modified Lax-Wendroff, which is the most frequently used method for the solution of the GDEE, are illustrated with numerical examples and explored further.

  18. Criticality of the net-baryon number probability distribution at finite density

    OpenAIRE

    Kenji Morita; Bengt Friman; Krzysztof Redlich

    2014-01-01

    We compute the probability distribution $P(N)$ of the net-baryon number at finite temperature and quark-chemical potential, $\\mu$, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For $\\mu/T<1$, the model exhibits the chiral crossover transition which belongs to the universality class of the $O(4)$ spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, $P(N)$. By considering ratios of $P(N)$ to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to $O(4)$ criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine $O(4)$ criticality in the context of binomial and negative-binomial distributions for the net proton number.

  19. Criticality of the net-baryon number probability distribution at finite density

    OpenAIRE

    Morita, Kenji; Friman, Bengt; Redlich, Krzysztof

    2015-01-01

    We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T<1, the model exhibits the chiral crossover transition which belongs to the universality class of the O(4) spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.

  20. Implied probability density functions: Estimation using hypergeometric, spline and lognormal functions

    OpenAIRE

    Santos, André Duarte dos

    2011-01-01

    This thesis examines the stability and accuracy of three different methods to estimate Risk-Neutral Density functions (RNDs) using European options. These methods are the Double-Lognormal Function (DLN), the Smoothed Implied Volatility Smile (SML) and the Density Functional Based on Confluent Hypergeometric function (DFCH). These methodologies were used to obtain the RNDs from the option prices with the underlying USDBRL (price of US dollars in terms of Brazilian reals) for different maturiti...

  1. Protein distance constraints predicted by neural networks and probability density functions

    DEFF Research Database (Denmark)

    Lund, Ole; Frimand, Kenneth; Gorodkin, Jan; Bohr, Henrik; Bohr, Jakob; Hansen, Jan; Brunak, Søren

    1997-01-01

    We predict interatomic C-α distances by two independent data driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the latter method consists of a neural network prediction approach equipped with windows taki...... profiles. A threading method based on the predicted distances is presented. A homepage with software, predictions and data related to this paper is available at http://www.cbs.dtu.dk/services/CPHmodels/...

  2. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    International Nuclear Information System (INIS)

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented by lowering and raising operators. The lowering and raising operators can be obtained using the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl Teller potential. The wave function and probability density graphs are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanical operator can be calculated analytically in integral form or from the probability density graph produced by the program.

  3. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    Science.gov (United States)

    Angraini, Lily Maysari; Suparmi, Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented by lowering and raising operators. The lowering and raising operators can be obtained using the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl Teller potential. The wave function and probability density graphs are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanical operator can be calculated analytically in integral form or from the probability density graph produced by the program.

  4. Comparison of Anger camera and BGO mosaic position-sensitive detectors for 'Super ACAR'. Precision electron momentum densities via angular correlation of annihilation radiation

    International Nuclear Information System (INIS)

    We discuss the relative merits of Anger cameras and Bismuth Germanate mosaic counters for measuring the angular correlation of positron annihilation radiation at a facility such as the proposed Positron Factory at Takasaki. The two possibilities appear equally cost effective at this time. (author)

  5. Comparison of Anger camera and BGO mosaic position-sensitive detectors for 'Super ACAR'. Precision electron momentum densities via angular correlation of annihilation radiation

    Energy Technology Data Exchange (ETDEWEB)

    Mills, A.P. Jr. [Bell Labs., Murray Hill, NJ (United States)]; West, R.N.; Hyodo, Toshio

    1997-03-01

    We discuss the relative merits of Anger cameras and Bismuth Germanate mosaic counters for measuring the angular correlation of positron annihilation radiation at a facility such as the proposed Positron Factory at Takasaki. The two possibilities appear equally cost effective at this time. (author)

  6. Model Assembly for Estimating Cell Surviving Fraction for Both Targeted and Nontargeted Effects Based on Microdosimetric Probability Densities

    OpenAIRE

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric...

  7. Criticality of the net-baryon number probability distribution at finite density

    CERN Document Server

    Morita, Kenji; Redlich, Krzysztof

    2014-01-01

    We compute the probability distribution $P(N)$ of the net-baryon number at finite temperature and quark-chemical potential, $\\mu$, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For $\\mu/T<1$, the model exhibits the chiral crossover transition which belongs to the universality class of the $O(4)$ spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, $P(N)$. By considering ratios of $P(N)$ to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to $O(4)$ criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine $O(4)$ criticality in the context of binomial and negative-binomial distributions for the net proton number.
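The Skellam function used as the reference distribution in this record (the difference of two independent Poisson variates) is easy to evaluate directly from the modified Bessel function I_n. The multiplicity values below are illustrative assumptions, not the paper's fitted parameters.

```python
import math

def bessel_i(n, x, terms=60):
    """Modified Bessel function I_n(x) via its power series (adequate
    for the small-to-moderate arguments used here)."""
    n = abs(n)
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.gamma(k + n + 1.0))
               for k in range(terms))

def skellam_pmf(k, mu1, mu2):
    """Skellam distribution: difference of two independent Poisson counts,
    the reference distribution for the net-baryon number N."""
    return math.exp(-(mu1 + mu2)) * (mu1 / mu2) ** (k / 2.0) * bessel_i(k, 2.0 * math.sqrt(mu1 * mu2))

# Hypothetical mean baryon/antibaryon multiplicities (illustrative values).
mu1, mu2 = 5.0, 4.0
probs = {k: skellam_pmf(k, mu1, mu2) for k in range(-20, 26)}
total = sum(probs.values())
mean = sum(k * p for k, p in probs.items())
print(round(total, 4), round(mean, 4))  # total ~ 1, mean = mu1 - mu2 = 1
```

Ratios P(N)/Skellam, as studied in the paper, would then be formed by dividing measured or model probabilities by `skellam_pmf` evaluated with matched mean and variance.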

  8. Criticality of the net-baryon number probability distribution at finite density

    Directory of Open Access Journals (Sweden)

    Kenji Morita

    2015-02-01

    Full Text Available We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T<1, the model exhibits the chiral crossover transition which belongs to the universality class of the O(4) spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.

  9. A summary of transition probabilities for atomic absorption lines formed in low-density clouds

    Science.gov (United States)

    Morton, D. C.; Smith, W. H.

    1973-01-01

    A table of wavelengths, statistical weights, and excitation energies is given for 944 atomic spectral lines in 221 multiplets whose lower energy levels lie below 0.275 eV. Oscillator strengths were adopted for 635 lines in 155 multiplets from the available experimental and theoretical determinations. Radiation damping constants also were derived for most of these lines. This table contains the lines most likely to be observed in absorption in interstellar clouds, circumstellar shells, and the clouds in the direction of quasars where neither the particle density nor the radiation density is high enough to populate the higher levels. All ions of all elements from hydrogen to zinc are included which have resonance lines longward of 912 Å, although a number of weaker lines of neutrals and first ions have been omitted.

  10. Performance evaluation of probability density estimators for unsupervised information theoretical region merging

    OpenAIRE

    Calderero Patino, Felipe; Marqués Acosta, Fernando; Ortega, Antonio

    2009-01-01

    Information theoretical region merging techniques have been shown to provide a state-of-the-art unified solution for natural and texture image segmentation. Here, we study how the segmentation results can be further improved by a more accurate estimation of the statistical model characterizing the regions. Concretely, we explore four density estimators that can be used for pdf or joint pdf estimation. The first three are based on different quantization strategies: a general ...

  11. Probability density functions for the variable solar wind near the solar cycle minimum

    CERN Document Server

    Vörös; Leitner, M; Narita, Y; Consolini, G; Kovács, P; Tóth, A; Lichtenberger, J

    2015-01-01

    Unconditional and conditional statistics are used for studying the histograms of magnetic field multi-scale fluctuations in the solar wind near the solar cycle minimum in 2008. The unconditional statistics involve the magnetic data during the whole year 2008. The conditional statistics involve the magnetic field time series split into concatenated subsets of data according to a threshold in dynamic pressure. The threshold separates fast-stream leading-edge compressional and trailing-edge uncompressional fluctuations. The histograms obtained from these data sets are associated with both large-scale (B) and small-scale ({\delta}B) magnetic fluctuations, the latter corresponding to time-delayed differences. It is shown here that, by keeping flexibility but avoiding unnecessary redundancy in modeling, the histograms can be effectively described by a limited set of theoretical probability distribution functions (PDFs), such as the normal, log-normal, kappa and log-kappa functions. In a statistical sense the...

  12. Photon correlations in positron annihilation

    OpenAIRE

    Gauthier, Isabelle; Hawton, Margaret

    2010-01-01

    The two-photon positron annihilation density matrix is found to separate into a diagonal center of energy factor implying maximally entangled momenta, and a relative factor describing decay. For unknown positron injection time, the distribution of the difference in photon arrival times is a double exponential at the para-Ps decay rate, consistent with experiment (V. D. Irby, Meas. Sci. Technol. 15, 1799 (2004)).
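The double-exponential (Laplace) shape of the arrival-time difference mentioned in this record can be illustrated with a purely mathematical toy model, not the paper's derivation: the difference of two independent exponential delays at the para-Ps decay rate is Laplace distributed at that same rate. The lifetime value and the sampling model are assumptions for illustration only.

```python
import random

random.seed(3)

# Toy model (assumed): each photon arrival time carries an independent
# exponential delay at rate gamma; the injection time cancels in the
# difference, leaving a Laplace (double-exponential) distribution.
gamma = 1.0 / 0.125   # para-Ps lifetime ~125 ps, expressed in ns (illustrative)
diffs = [random.expovariate(gamma) - random.expovariate(gamma) for _ in range(100_000)]

mean = sum(diffs) / len(diffs)
var = sum(d * d for d in diffs) / len(diffs)
# Laplace(gamma) has mean 0 and variance 2/gamma**2 = 0.03125 here.
print(round(mean, 3), round(var, 3))
```

The physical setup in the paper involves entangled photon pairs and detector timing, so this sketch only demonstrates why the difference of exponentially distributed times yields a symmetric double exponential.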

  13. Fusing probability density function into Dempster-Shafer theory of evidence for the evaluation of water treatment plant.

    Science.gov (United States)

    Chowdhury, Shakhawat

    2013-05-01

    The evaluation of the status of a municipal drinking water treatment plant (WTP) is important. The evaluation depends on several factors, including human health risks from disinfection by-products (R), disinfection performance (D), and cost (C) of water production and distribution. The Dempster-Shafer theory (DST) of evidence can combine the individual status with respect to R, D, and C to generate a new indicator, from which the overall status of a WTP can be evaluated. In the DST, the ranges of different factors affecting the overall status are divided into several segments. The basic probability assignments (BPA) for each segment of these factors are provided by multiple experts, which are then combined to obtain the overall status. In assigning the BPA, the experts use their individual judgments, which can impart subjective biases in the overall evaluation. In this research, an approach has been introduced to avoid the assignment of subjective BPA. The factors contributing to the overall status were characterized using probability density functions (PDF). The cumulative probabilities for different segments of these factors were determined from the cumulative density function, which were then assigned as the BPA for these factors. A case study is presented to demonstrate the application of PDF in DST to evaluate a WTP, leading to the selection of the required level of upgrading for the WTP. PMID:22941202
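The combination step used by the DST approach in this record is Dempster's rule: pairwise products of masses are assigned to set intersections and the conflicting (empty-intersection) mass is normalized away. The two example BPAs below are hypothetical, not from the case study.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments
    (BPAs) given as {frozenset_of_hypotheses: mass} dictionaries."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb      # mass assigned to the empty set
    # Normalize out the conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical two-expert BPAs over a WTP status frame {good, fair, poor}.
G, F, P = "good", "fair", "poor"
m1 = {frozenset({G}): 0.6, frozenset({G, F}): 0.3, frozenset({G, F, P}): 0.1}
m2 = {frozenset({G}): 0.5, frozenset({F}): 0.2, frozenset({G, F, P}): 0.3}
m12 = dempster_combine(m1, m2)
print(round(sum(m12.values()), 6))   # combined masses sum to 1
```

The paper's contribution is to derive the input masses from cumulative probabilities of fitted PDFs rather than from expert judgment; the combination rule itself is unchanged.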

  14. On the thresholds, probability densities, and critical exponents of Bak-Sneppen-like models

    Science.gov (United States)

    Garcia, Guilherme J. M.; Dickman, Ronald

    2004-10-01

    We report a simple method to accurately determine the threshold and the exponent ν of the Bak-Sneppen (BS) model and also investigate the BS universality class. For the random-neighbor version of the BS model, we find the threshold x* = 0.33332(3), in agreement with the exact result x* = 1/3 given by mean-field theory. For the one-dimensional original model, we find x* = 0.6672(2), in good agreement with the results reported in the literature; for the anisotropic BS model we obtain x* = 0.7240(1). We study the finite-size effect x*(L) − x*(L→∞) ∝ L^(−ν), observed in a system with L sites, and find ν = 1.00(1) for the random-neighbor version, ν = 1.40(1) for the original model, and ν = 1.58(1) for the anisotropic case. Finally, we discuss the effect of defining the extremal site as the one which minimizes a general function f(x), instead of simply f(x) = x as in the original updating rule. We emphasize that models with extremal dynamics have singular stationary probability distributions p(x). Our simulations indicate the existence of two symmetry-based universality classes.

  15. Non-stationary random vibration analysis of a 3D train-bridge system using the probability density evolution method

    Science.gov (United States)

    Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei

    2016-03-01

    Rail irregularity is one of the main sources of train-bridge random vibration. A new random vibration theory for coupled train-bridge systems is proposed in this paper. First, the number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of the rail irregularity power spectral density was adopted to determine the representative points of spatial frequencies and phases for generating the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with a slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Finally, the Newmark-β integration method and the double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of the responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge under random rail irregularity excitation. The results showed that, compared to Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.

  16. Charged-particle thermonuclear reaction rates: II. Tables and graphs of reaction rates and probability density functions

    International Nuclear Information System (INIS)

    Numerical values of charged-particle thermonuclear reaction rates for nuclei in the A=14 to 40 region are tabulated. The results are obtained using a method, based on Monte Carlo techniques, that has been described in the preceding paper of this issue (Paper I). We present a low rate, median rate and high rate which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. The meaning of these quantities is in general different from the commonly reported, but statistically meaningless expressions, 'lower limit', 'nominal value' and 'upper limit' of the total reaction rate. In addition, we approximate the Monte Carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the lognormal parameters μ and σ at each temperature. We also provide a quantitative measure (Anderson-Darling test statistic) for the reliability of the lognormal approximation. The user can implement the approximate lognormal reaction rate probability density functions directly in a stellar model code for studies of stellar energy generation and nucleosynthesis. For each reaction, the Monte Carlo reaction rate probability density functions, together with their lognormal approximations, are displayed graphically for selected temperatures in order to provide a visual impression. Our new reaction rates are appropriate for bare nuclei in the laboratory. The nuclear physics input used to derive our reaction rates is presented in the subsequent paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
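The quantile and lognormal-fit machinery described in this record reduces to a few lines once Monte Carlo rate samples are in hand. The sketch below uses synthetic lognormal samples as a stand-in for real reaction-rate output; the values of mu and sigma are assumptions.

```python
import math, random

random.seed(7)

# Synthetic Monte Carlo rate samples at one temperature (a lognormal stand-in
# for a real reaction-rate distribution; -46.0 and 0.35 are assumed values).
samples = sorted(random.lognormvariate(-46.0, 0.35) for _ in range(20_000))

# Low/median/high rates = 0.16, 0.50 and 0.84 quantiles of the distribution.
low, med, high = (samples[int(q * len(samples))] for q in (0.16, 0.50, 0.84))

# Lognormal parameters recovered from the samples: mu = E[ln r], sigma = std(ln r).
logs = [math.log(s) for s in samples]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))

# For a lognormal rate, med ~ exp(mu) and high/med ~ med/low ~ exp(sigma).
print(round(mu, 2), round(sigma, 2))
```

In the tabulation described by the paper, a stellar-model code would evaluate the rate PDF directly from the stored (μ, σ) pair at each temperature, with the Anderson-Darling statistic flagging temperatures where the lognormal approximation is poor.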

  17. WASP-17b: AN ULTRA-LOW DENSITY PLANET IN A PROBABLE RETROGRADE ORBIT

    International Nuclear Information System (INIS)

    We report the discovery of the transiting giant planet WASP-17b, the least-dense planet currently known. It is 1.6 Saturn masses, but 1.5-2 Jupiter radii, giving a density of 6%-14% that of Jupiter. WASP-17b is in a 3.7 day orbit around a sub-solar metallicity, V = 11.6, F6 star. Preliminary detection of the Rossiter-McLaughlin effect suggests that WASP-17b is in a retrograde orbit (λ ∼ -150°), indicative of a violent history involving planet-planet or star-planet scattering. WASP-17b's bloated radius could be due to tidal heating resulting from recent or ongoing tidal circularization of an eccentric orbit, such as the highly eccentric orbits that typically result from scattering interactions. It will thus be important to determine more precisely the current orbital eccentricity by further high-precision radial velocity measurements or by timing the secondary eclipse, both to reduce the uncertainty on the planet's radius and to test tidal-heating models. Owing to its low surface gravity, WASP-17b's atmosphere has the largest scale height of any known planet, making it a good target for transmission spectroscopy.
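The quoted density range follows directly from the quoted mass and radius, since density relative to Jupiter scales as (M/M_Jup)/(R/R_Jup)^3. A quick check (the Saturn/Jupiter mass ratio is a standard value, not from the abstract):

```python
# 1.6 Saturn masses and 1.5-2 Jupiter radii imply 6%-14% of Jupiter's density.
M_SAT_IN_MJUP = 0.2994            # Saturn's mass in Jupiter masses (standard value)
m = 1.6 * M_SAT_IN_MJUP           # planet mass in Jupiter masses

rho_hi = m / 1.5 ** 3             # smaller radius -> upper density bound
rho_lo = m / 2.0 ** 3             # larger radius  -> lower density bound
print(f"{rho_lo:.0%} - {rho_hi:.0%} of Jupiter's density")  # 6% - 14%
```

The two bounds reproduce the 6%-14% range stated in the abstract.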

  18. WASP-17b: an ultra-low density planet in a probable retrograde orbit

    CERN Document Server

    Anderson, D R; Gillon, M; Triaud, A H M J; Smalley, B; Hebb, L; Cameron, A Collier; Maxted, P F L; Queloz, D; West, R G; Bentley, S J; Enoch, B; Horne, K; Lister, T A; Mayor, M; Parley, N R; Pepe, F; Pollacco, D; Ségransan, D; Udry, S; Wilson, D M

    2009-01-01

    We report the discovery of the transiting giant planet WASP-17b, the least-dense planet currently known. It is 1.6 Saturn masses but 1.5-2 Jupiter radii, giving a density of 6-14 per cent that of Jupiter. WASP-17b is in a 3.7-day orbit around a sub-solar metallicity, V = 11.6, F6 star. Preliminary detection of the Rossiter-McLaughlin effect suggests that WASP-17b is in a retrograde orbit (lambda ~ -150 deg), indicative of a violent history involving planet-planet or planet-star scattering. WASP-17b's bloated radius could be due to tidal heating resulting from recent or ongoing tidal circularisation of an eccentric orbit, such as the highly eccentric orbits that typically result from scattering interactions. It will thus be important to determine more precisely the current orbital eccentricity by further high-precision radial velocity measurements or by timing the secondary eclipse, both to reduce the uncertainty on the planet's radius and to test tidal-heating models. Owing to its low surface gravity, WASP-17...

  19. Finite-size scaling of the magnetization probability density for the critical Ising model in slab geometry

    Science.gov (United States)

    Lopes Cardozo, David; Holdsworth, Peter C. W.

    2016-04-01

    The magnetization probability density in d = 2 and 3 dimensional Ising models in slab geometry of volume L∥^(d−1) × L⊥ is computed through Monte Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect ratio ρ = L⊥/L∥ and boundary conditions are discussed. In the limiting case ρ → 0 of a macroscopically large slab (L∥ ≫ L⊥) the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.

  20. Unit-Sphere Anisotropic Multiaxial Stochastic-Strength Model Probability Density Distribution for the Orientation of Critical Flaws

    Science.gov (United States)

    Nemeth, Noel

    2013-01-01

    Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.

  1. Annihilation of Quantum Magnetic Fluxes

    Science.gov (United States)

    Gonzalez, W. D.

    After introducing the concepts associated with the Aharonov-Bohm effect and with the existence of a quantum of magnetic flux (QMF), we briefly discuss the Ginzburg-Landau theory that explains its origin and fundamental consequences. Relevant observations of QMFs obtained in the laboratory using superconducting systems (vortices) are also mentioned. Next, we describe processes related to the interaction of QMFs with opposite directions in terms of the gauge field geometry related to the vector potential. Then, we discuss the use of a Lagrangian density for a scalar field theory involving radiation in order to describe the annihilation of QMFs, claimed to be responsible for the emission of photons with energies corresponding to that of the annihilated magnetic fields. Finally, a possible application of these concepts to the observed variable dynamics of neutron stars is briefly mentioned.

  2. GEO objects spatial density and collision probability in the Earth-centered Earth-fixed (ECEF) coordinate system

    Science.gov (United States)

    Dongfang, Wang; Baojun, Pang; Weike, Xiao; Keke, Peng

    2016-01-01

    The geostationary (GEO) ring is a valuable orbital region contaminated with an alarming number of space debris objects. Due to its particular orbital characteristics, the GEO objects' spatial distribution depends strongly on local longitude. Therefore the local longitude distribution of these objects in the Earth-centered Earth-fixed (ECEF) coordinate system is much more stable and useful in practical applications than it is in the J2000 inertial coordinate system. In previous studies of space debris environment models, the spatial density was calculated in the J2000 coordinate system, which makes it impossible to identify the spatial distribution in different local longitude regions. For GEO objects this may introduce significant inaccuracy. In order to describe the GEO objects' spatial distribution in different local longitude regions, this paper introduces a new method which provides the spatial density distribution in the ECEF coordinate system. Based on 2014/12/10 two-line element (TLE) data provided by the US Space Surveillance Network, the spatial density of cataloged GEO objects is given in the ECEF coordinate system. Combined with previous studies of the "Cube" collision probability evaluation, the GEO region collision probability in the ECEF coordinate system is also given. The examination reveals that the GEO space debris distribution is not uniform in longitude; it is relatively concentrated about the geopotential wells. The method given in this paper is also suitable for smaller debris in the GEO region. Currently this longitude-dependent analysis is not represented in GEO debris models such as ORDEM or MASTER. Based on our method, a future version of the space debris environment engineering model (SDEEM) developed by China will present a longitude-dependent GEO space debris environment description in the ECEF coordinate system.
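The core coordinate step behind a longitude-binned density, as described in this record, is rotating inertial positions into the ECEF frame by the Greenwich sidereal angle and binning the resulting longitudes. The object positions, GMST value, and bin width below are all hypothetical illustrations.

```python
import math

def eci_to_ecef_longitude(x, y, gmst_rad):
    """Rotate an ECI position about the z-axis by GMST and return the
    Earth-fixed longitude (rad), the binning variable for GEO density."""
    xe = math.cos(gmst_rad) * x + math.sin(gmst_rad) * y
    ye = -math.sin(gmst_rad) * x + math.cos(gmst_rad) * y
    return math.atan2(ye, xe)

# Hypothetical GEO objects given by ECI right ascension (rad); binning their
# ECEF longitudes exposes clustering, e.g. near the geopotential wells.
gmst = 1.2                      # assumed Greenwich sidereal angle (rad)
ras = [0.0, 0.5, 1.0, 1.3, 1.35, 1.4, 4.3, 4.35]
r_geo = 42164.0                 # GEO radius, km
lons = [eci_to_ecef_longitude(r_geo * math.cos(a), r_geo * math.sin(a), gmst)
        for a in ras]

bins = [0] * 12                 # 30-degree longitude bins
for lon in lons:
    bins[int((lon + math.pi) / (2 * math.pi) * 12) % 12] += 1
print(bins)
```

A full spatial-density model would additionally divide each bin count by the corresponding shell volume element; this sketch only shows the frame rotation and longitude binning.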

  3. Efficient simulation of density and probability of large deviations of sum of random vectors using saddle point representations

    CERN Document Server

    Dey, Santanu

    2012-01-01

    We consider the problem of efficient simulation estimation of the density function at the tails, and of the probability of large deviations, for a sum of independent, identically distributed, light-tailed and non-lattice random vectors. The latter problem, besides being of independent interest, also forms a building block for more complex rare-event problems that arise, for instance, in queuing and financial credit risk modeling. It has been extensively studied in the literature, where state-independent exponential twisting based importance sampling has been shown to be asymptotically efficient and a more nuanced state-dependent exponential twisting has been shown to have a stronger bounded-relative-error property. We exploit the saddle-point based representations that exist for these rare quantities, which rely on inverting the characteristic functions of the underlying random vectors. We note that these representations reduce the rare-event estimation problem to evaluating certain integrals, which may via importance ...
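The state-independent exponential twisting mentioned above can be illustrated in the simplest setting. This is a sketch under the assumption of standard normal summands, for which the tilted measure is again Gaussian with a shifted mean; `tail_prob_twisted` is an illustrative name, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def tail_prob_twisted(n, a, n_sims=20000):
    """Estimate P(X1 + ... + Xn >= n*a) for iid N(0,1) summands via
    state-independent exponential twisting: sample under the tilted
    measure N(theta, 1) with theta = a (the saddle point for the
    Gaussian case) and reweight each draw by the likelihood ratio
    exp(-theta*S + n*theta**2/2)."""
    theta = a  # exponential tilt shifts the N(0,1) mean to theta
    x = rng.normal(theta, 1.0, size=(n_sims, n))
    s = x.sum(axis=1)
    lr = np.exp(-theta * s + n * theta**2 / 2.0)
    return np.mean(lr * (s >= n * a))

# P(S_10 >= 10); the exact value is Phi_bar(sqrt(10)) ~ 7.9e-4
est = tail_prob_twisted(10, 1.0)
```

Naive Monte Carlo with the same budget would see only a handful of events this rare; under the twisted measure the event is typical and the reweighted estimator has small relative error.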

  4. On the Probability Density Function of the Test Statistic for One Nonlinear GLR Detector Arising from fMRI

    Directory of Open Access Journals (Sweden)

    Fangyuan Nan

    2007-01-01

    Full Text Available Recently an important and interesting nonlinear generalized likelihood ratio (GLR) detector emerged in functional magnetic resonance imaging (fMRI) data processing. However, the study of that detector is incomplete: the probability density function (pdf) of the test statistic was drawn from numerical simulations without much theoretical support and is therefore not firmly grounded. This correspondence presents a more accurate (asymptotic) closed form of the pdf by resorting to a non-central Wishart matrix and to asymptotic expansion of some integrals. It is then confirmed theoretically that the detector does possess the constant false alarm rate (CFAR) property under some practical regimes of signal-to-noise ratio (SNR) for finite samples, and the correct threshold selection method is given, which is very important for real fMRI data processing.

  5. A Delta-Sigma Modulator Using a Non-uniform Quantizer Adjusted for the Probability Density of Input Signals

    Science.gov (United States)

    Kitayabu, Toru; Hagiwara, Mao; Ishikawa, Hiroyasu; Shirai, Hiroshi

    A novel delta-sigma modulator is proposed that employs a non-uniform quantizer whose spacing is adjusted by reference to the statistical properties of the input signal. The proposed delta-sigma modulator has less quantization noise than one using a uniform quantizer with the same number of output values. With respect to the quantizer on its own, Lloyd proposed a non-uniform quantizer that is optimal for minimizing the average quantization noise power; the method applies when the statistical properties of the input signal, i.e., its probability density, are given. However, that procedure cannot be applied directly to the quantizer in a delta-sigma modulator because it jeopardizes the modulator's stability. In this paper, a procedure is proposed that determines the spacing of the quantizer while avoiding instability. Simulation results show that the proposed method reduces quantization noise by up to 3.8 dB and 2.8 dB for input signals having a PAPR of 16 dB and 12 dB, respectively, compared to a modulator employing a uniform quantizer. Two alternative types of probability density function (PDF) are used in the proposed method for the calculation of the output values: the PDF of the input signal to the delta-sigma modulator, and an approximated PDF of the input signal to the quantizer inside the delta-sigma modulator. Both approaches are evaluated; the latter is found to give lower quantization noise.
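Lloyd's design procedure referenced above (for the quantizer alone, without the delta-sigma stability constraint) can be sketched on empirical samples:

```python
import numpy as np

def lloyd_quantizer(samples, n_levels=4, n_iter=100):
    """Lloyd's algorithm: alternately set decision boundaries to the
    midpoints of adjacent output levels, and output levels to the
    conditional mean of the samples falling in each cell. This
    minimizes mean-squared quantization error for the sample
    distribution."""
    levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(n_iter):
        bounds = 0.5 * (levels[:-1] + levels[1:])
        idx = np.digitize(samples, bounds)
        for k in range(n_levels):
            cell = samples[idx == k]
            if cell.size:
                levels[k] = cell.mean()
    return levels

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 50000)
levels = lloyd_quantizer(x, n_levels=4)
# for a zero-mean Gaussian input the optimal 4-level quantizer is
# symmetric, with levels near +/-0.45 and +/-1.51
```

The paper's contribution is precisely that this unconstrained spacing cannot be dropped into the delta-sigma loop as-is; their procedure modifies the spacing to preserve stability.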

  6. Precipitation Study in Inconel 625 Alloy by Positron Annihilation Spectroscopy

    Institute of Scientific and Technical Information of China (English)

    M.Ahmad; W. Ahmad; M.A.Shaikh; Mahmud Ahmad; M.U. Rajput

    2003-01-01

    Precipitation in Inconel 625 alloy has been studied by positron annihilation spectroscopy and electron microscopy. The observed dependence of annihilation characteristics on aging time is attributed to the change of the positron state due to the increase and decrease of the density and size of the γ″ precipitates. Hardness measurements and lifetime measurements are in good agreement.

  7. Production bias and cluster annihilation: Why necessary?

    DEFF Research Database (Denmark)

    Singh, B.N.; Trinkaus, H.; Woo, C.H.

    1994-01-01

    the primary cluster density is high. Therefore, a sustained high swelling rate driven by production bias must involve the annihilation of primary clusters at sinks. A number of experimental observations which are unexplainable in terms of the conventional dislocation bias for monointerstitials is...

  8. Contribution from S and P waves in pp annihilation at rest

    CERN Document Server

    Bendiscioli, G; Fontana, A; Montagna, P; Rotondi, A; Salvini, P; Bertin, A; Bruschi, M; Capponi, M; De Castro, S; Donà, R; Galli, D; Giacobbe, B; Marconi, U; Massa, I; Piccinini, M; Cesari, N S; Spighi, R; Vecchi, S; Vagnoni, V M; Villa, M; Vitale, A; Zoccoli, A; Bianconi, A; Bonomi, G; Lodi-Rizzini, E; Venturelli, L; Zenoni, A; Cicalò, C; De Falco, A; Masoni, A; Puddu, G; Serci, S; Usai, G L; Gorchakov, O E; Prakhov, S N; Rozhdestvensky, A M; Tretyak, V I; Poli, M; Gianotti, P; Guaraldo, C; Lanaro, A; Lucherini, V; Petrascu, C; Kudryavtsev, A E; Balestra, F; Bussa, M P; Busso, L; Cerello, P G; Denisov, O Yu; Ferrero, L; Grasso, A; Maggiora, A; Panzarasa, A; Panzieri, D; Tosello, F; Botta, E; Bressani, Tullio; Calvo, D; Costa, S; D'Isep, D; Feliciello, A; Filippi, A; Marcello, S; Mirfakhraee, N; Agnello, M; Iazzi, F; Minetti, B; Tessaro, S

    2001-01-01

    The annihilation frequencies of 19 pp annihilation reactions at rest obtained at different target densities are analysed in order to determine the values of the P-wave annihilation percentage at each target density and the average hadronic branching ratios from P- and S-states. Both the assumption of a linear dependence of the annihilation frequencies on the P-wave annihilation percentage of the protonium state and the approach with the enhancement factors of Batty (1989) are considered. Furthermore, the cases of incompatible measurements are discussed. (55 refs).

  9. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    Science.gov (United States)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2016-07-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They did, however, overestimate the seasonal mean concentration and did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
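The first stage of the scheme above, a Weibull-shaped seasonal potential scaled into daily maximum values, can be sketched as follows. The shape and scale parameters here are illustrative placeholders, not the values fitted for Korean oak or Japanese hop pollen.

```python
import numpy as np

def weibull_pdf(t, shape, scale):
    """Weibull density f(t) = (k/c) * (t/c)**(k-1) * exp(-(t/c)**k)."""
    t = np.asarray(t, dtype=float)
    return (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

def daily_pollen_potential(day_of_season, total_pollen, shape=2.0, scale=30.0):
    """Maximum potential airborne pollen on a given day: the season
    total multiplied by the Weibull density at that day. shape and
    scale are hypothetical, not the paper's fitted parameters."""
    return total_pollen * weibull_pdf(day_of_season, shape, scale)

days = np.arange(1, 91)
potential = daily_pollen_potential(days, total_pollen=10000.0)
```

Because the Weibull density integrates to one over the season, the daily potentials sum back (approximately) to the season total, and the curve peaks at the Weibull mode, here near day 21.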

  10. Regression approaches to derive generic and fish group-specific probability density functions of bioconcentration factors for metals.

    Science.gov (United States)

    Tanaka, Taku; Ciffroy, Philippe; Stenberg, Kristofer; Capri, Ettore

    2010-11-01

    In the framework of environmental multimedia modeling studies dedicated to environmental and health risk assessments of chemicals, the bioconcentration factor (BCF) is a commonly used parameter, especially for fish. For neutral lipophilic substances, BCF is assumed to be independent of the exposure level of the substance. However, for metals, some studies have found an inverse relationship between BCF values and aquatic exposure concentrations for various aquatic species and metals, as well as high variability in BCF data. To deal with the factors determining BCF for metals, we conducted regression analyses to evaluate the inverse relationships and introduce the concept of a probability density function (PDF) of BCF for Cd, Cu, Zn, Pb, and As. In the present study, two statistical approaches are applied to build the regression model and derive the PDF of fish BCF: ordinary regression analysis, which estimates a regression model that does not consider the variation in data across different fish family groups; and hierarchical Bayesian regression analysis, which estimates fish group-specific regression models. The results show that the BCF ranges and PDFs estimated for metals by both statistical approaches have less uncertainty than the variation of the collected BCF data (the uncertainty is reduced by 9%-61%), and thus such PDFs proved to be useful for obtaining accurate model predictions in environmental and health risk assessment concerning metals. PMID:20886641
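The ordinary (pooled, non-hierarchical) regression approach described above can be illustrated on synthetic data exhibiting the reported inverse log-log relationship. All coefficients below are invented for the example, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic data mimicking the reported inverse relationship:
# log10(BCF) = b0 + b1 * log10(Cw) + noise, with slope b1 < 0
log_cw = rng.uniform(-1.0, 2.0, 60)               # water concentration
log_bcf = 3.0 - 0.6 * log_cw + rng.normal(0.0, 0.2, 60)

# ordinary least squares fit of the pooled model
b1, b0 = np.polyfit(log_cw, log_bcf, 1)
residuals = log_bcf - (b0 + b1 * log_cw)
sigma = residuals.std(ddof=2)   # spread of the BCF PDF around the fit
```

Given an exposure concentration, the predicted mean and residual spread define a (log-normal) PDF for BCF; the hierarchical Bayesian variant would instead fit group-specific slopes and intercepts with partial pooling across fish families.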

  11. A method for evaluating the expectation value of a power spectrum using the probability density function of phases

    Energy Technology Data Exchange (ETDEWEB)

    Caliandro, G.A.; Torres, D.F.; Rea, N., E-mail: andrea.caliandro@ieec.uab.es, E-mail: dtorres@aliga.ieec.uab.es, E-mail: rea@ieec.uab.es [Institute of Space Sciences (IEEC-CSIC), Campus UAB, Fac. de Ciències, Torre C5, parell, 2a planta 08193 Barcelona (Spain)

    2013-07-01

    Here, we present a new method to evaluate the expectation value of the power spectrum of a time series. A statistical approach is adopted to define the method. After its demonstration, it is validated by showing that it leads to the known properties of the power spectrum when the time series contains a periodic signal. The approach is also validated in general with numerical simulations. The method highlights the important role played by the probability density function of the phases associated with each time stamp for a given frequency, and how this distribution can be perturbed by the uncertainties of the parameters in the pulsar ephemeris. We applied this method to solve the power spectrum in the case in which the first derivative of the pulsar frequency is unknown and not negligible. We also undertook the study of the most general case of a blind search, in which both the frequency and its first derivative are uncertain. We found the analytical solutions of the above cases in terms of sums of squared Fresnel integrals.
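The role of the per-event phases can be made concrete with a toy pulsar-style example. Assuming the common Rayleigh-power definition P(ν) = |Σ_j exp(iφ_j)|²/N, with phases φ_j = 2π(ν t_j + ν̇ t_j²/2) including a first frequency derivative, a sketch is:

```python
import numpy as np

def power_spectrum(times, nu, nu_dot=0.0):
    """Rayleigh power at trial frequency nu for a set of event time
    stamps; the phase model includes a first frequency derivative:
    phi_j = nu*t_j + 0.5*nu_dot*t_j**2 (in cycles)."""
    phases = 2 * np.pi * (nu * times + 0.5 * nu_dot * times**2)
    return np.abs(np.exp(1j * phases).sum()) ** 2 / len(times)

rng = np.random.default_rng(4)
# toy event list: arrival times over 100 s, strongly modulated at 5 Hz
t = np.sort(rng.uniform(0.0, 100.0, 4000))
keep = rng.random(t.size) < 0.5 * (1 + np.cos(2 * np.pi * 5.0 * t))
t = t[keep]

p_on = power_spectrum(t, 5.0)    # at the true frequency: large power
p_off = power_spectrum(t, 4.7)   # off the signal: noise-level power
```

When the phases cluster (on-frequency), the unit vectors add coherently and the power is large; when the phase distribution is uniform (off-frequency, or a smeared ephemeris), they cancel, which is exactly the phase-PDF perturbation the abstract discusses.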

  12. A method for evaluating the expectation value of a power spectrum using the probability density function of phases

    International Nuclear Information System (INIS)

    Here, we present a new method to evaluate the expectation value of the power spectrum of a time series. A statistical approach is adopted to define the method. After its demonstration, it is validated by showing that it leads to the known properties of the power spectrum when the time series contains a periodic signal. The approach is also validated in general with numerical simulations. The method highlights the important role played by the probability density function of the phases associated with each time stamp for a given frequency, and how this distribution can be perturbed by the uncertainties of the parameters in the pulsar ephemeris. We applied this method to solve the power spectrum in the case in which the first derivative of the pulsar frequency is unknown and not negligible. We also undertook the study of the most general case of a blind search, in which both the frequency and its first derivative are uncertain. We found the analytical solutions of the above cases in terms of sums of squared Fresnel integrals

  13. Probability densities for the sums of iterates of the sine-circle map in the vicinity of the quasiperiodic edge of chaos

    Science.gov (United States)

    Afsar, Ozgur; Tirnakli, Ugur

    2010-10-01

    We investigate the probability density of rescaled sums of iterates of the sine-circle map within the quasiperiodic route to chaos. When the dynamical system is strongly mixing (i.e., ergodic), the standard central limit theorem (CLT) is expected to be valid, but at the edge of chaos, where iterates have strong correlations, the standard CLT is not necessarily valid anymore. We discuss here the main characteristics of the probability densities for the sums of iterates of deterministic dynamical systems which exhibit a quasiperiodic route to chaos. At the golden-mean onset of chaos for the sine-circle map, we numerically verify that the probability density appears to converge to a q-Gaussian with q<1 as the golden mean value is approached.
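A minimal numerical experiment in the spirit of the paper: iterate the sine-circle map at K = 1 with Ω near the golden-mean onset of chaos (the value below is an approximate literature value, quoted here as an assumption), and form centred, rescaled sums over many initial conditions. The histogram of `y` is the kind of pdf whose shape the authors study.

```python
import numpy as np

def circle_map(theta, omega, K=1.0):
    """Sine-circle map: theta -> theta + Omega - (K/2pi) sin(2pi theta), mod 1."""
    return (theta + omega - K / (2.0 * np.pi) * np.sin(2.0 * np.pi * theta)) % 1.0

def rescaled_sums(omega, n_terms=512, n_init=1000, K=1.0, seed=3):
    """Sum n_terms iterates of the map for many random initial
    conditions, then centre and rescale the sums."""
    rng = np.random.default_rng(seed)
    theta = rng.random(n_init)
    s = np.zeros(n_init)
    for _ in range(n_terms):
        theta = circle_map(theta, omega, K)
        s += theta
    return (s - s.mean()) / s.std()

# approximate Omega at the golden-mean onset of chaos for K = 1
OMEGA_GM = 0.606661063470
y = rescaled_sums(OMEGA_GM)
```

In the strongly mixing regime the histogram of `y` approaches a Gaussian, as the CLT dictates; at the onset value the paper reports convergence towards a q-Gaussian instead.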

  14. Scaling of maximum probability density function of velocity increments in turbulent Rayleigh-Bénard convection

    Institute of Scientific and Technical Information of China (English)

    邱翔; 黄永祥; 周全; 孙超

    2014-01-01

    In this paper, we apply a scaling analysis of the maximum of the probability density function (pdf) of velocity increments, i.e., p_max(τ) = max_{∂u} p(∂u_τ) ∼ τ^(-α), for a velocity field of turbulent Rayleigh-Bénard convection obtained at the Taylor-microscale Reynolds number Re_λ ≈ 60. The scaling exponent α is comparable with that of the first-order velocity structure function, ζ(1), in which the large-scale effect might be constrained, showing the background fluctuations of the velocity field. It is found that the integral time T(x/D) scales as T(x/D) ∼ (x/D)^(-β), with a scaling exponent β = 0.25 ± 0.01, suggesting the large-scale inhomogeneity of the flow. Moreover, the pdf scaling exponent α(x, z) is strongly inhomogeneous in the x (horizontal) direction. The vertical-direction-averaged pdf scaling exponent ᾱ(x) obeys a logarithmic law with respect to x, the distance from the cell sidewall, with a scaling exponent ξ ≈ 0.22 within the velocity boundary layer and ξ ≈ 0.28 near the cell sidewall. In the cell's central region, α(x, z) fluctuates around 0.37, which agrees well with ζ(1) obtained in high-Reynolds-number turbulent flows, implying the same intermittent correction. Moreover, the length of the inertial range (in decades), T_I(x), is found to increase with the wall distance x with an exponent of 0.65 ± 0.05.

  15. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction

    International Nuclear Information System (INIS)

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners—the HRRT and Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeter diameter and the background was observed in LOR-PDF reconstruction than in other methods. (paper)

  16. Antineutron-nucleus annihilation

    CERN Document Server

    Botta, E

    2001-01-01

    The n̄-nucleus annihilation process has been studied by the OBELIX experiment at the CERN Low Energy Antiproton Ring (LEAR) in the (50-400) MeV/c projectile momentum range on C, Al, Cu, Ag, Sn, and Pb nuclear targets. A systematic survey of the annihilation cross-section, σ_a(A, p_n̄), has been performed, obtaining information on its dependence on the target mass number and on the incoming n̄ momentum. For the first time the mass number dependence of the (inclusive) final state composition of the process has been analyzed. Production of the ρ vector meson has also been examined. (13 refs).

  17. SUSY dark matter annihilation in the Galactic halo

    CERN Document Server

    Berezinsky, Veniamin; Erohenko, Yury

    2015-01-01

    Neutralino annihilation in the Galactic halo is the most definite observational signature proposed for indirect registration of SUSY Dark Matter (DM) candidate particles. The corresponding annihilation signal (in the form of gamma-rays, positrons and antiprotons) may be boosted by one to three orders of magnitude due to the clustering of cold DM particles into small-scale and very dense self-gravitating clumps. We discuss the formation of these clumps from the initial density perturbations and their subsequent fate in the Galactic halo. Only a small fraction of these clumps, $\sim0.1$%, in each logarithmic mass interval $\Delta\log M\sim1$, survives the stage of hierarchical clustering. We calculate the probability of survival of the remnants of dark matter clumps in the Galaxy by modelling the tidal destruction of the small-scale clumps by the Galactic disk and stars. It is demonstrated that a substantial fraction of clump remnants may survive through the tidal destruction during the lifetime of the Ga...

  18. Positron annihilation microprobe

    Energy Technology Data Exchange (ETDEWEB)

    Canter, K.F. [Brandeis Univ., Waltham, MA (United States)

    1997-03-01

    Advances in positron annihilation microprobe development are reviewed. The present resolution achievable is 3 μm. The ultimate resolution is expected to be 0.1 μm, which will enable the positron microprobe to be a valuable tool in the development of 0.1 μm scale electronic devices in the future. (author)

  19. Extracting risk neutral probability densities by fitting implied volatility smiles: Some methodological points and an application to the 3M Euribor futures option prices

    OpenAIRE

    Andersen, Allan Bødskov; Wagener, Tom

    2002-01-01

    Following Shimko (1993), a large body of research has evolved around the problem of extracting risk neutral densities from options prices by interpolating the Black-Scholes implied volatility smile. Some of the methods recently proposed use variants of the cubic spline. These methods have the property of producing non-differentiable probability densities. We argue that this is an undesirable feature and suggest circumventing the problem by fitting a smoothing spline of higher order polynom...
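The construction the abstract refers to — interpolate the smile in strike, reprice calls, then differentiate twice (the Breeden-Litzenberger relation q(K) = e^{rT} ∂²C/∂K²) — can be sketched with a smoothing spline of order 4, which yields a differentiable density. The toy smile and all parameter values below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S, T, r = 100.0, 0.25, 0.01
strikes = np.linspace(70.0, 130.0, 25)
iv_obs = 0.20 + 0.00004 * (strikes - 100.0) ** 2   # toy smile

# quartic smoothing spline in strike: twice differentiable, so the
# implied density below is continuous (unlike a cubic interpolant)
smile = UnivariateSpline(strikes, iv_obs, k=4, s=1e-6)

K_grid = np.linspace(75.0, 125.0, 401)
calls = bs_call(S, K_grid, T, r, smile(K_grid))

# Breeden-Litzenberger by finite differences
dK = K_grid[1] - K_grid[0]
density = np.exp(r * T) * np.gradient(np.gradient(calls, dK), dK)
```

The recovered `density` should be non-negative and integrate to roughly one over the strike range (minus the tail mass outside it), with mean near the forward price.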

  20. Positron annihilation and speed of sound in the systems containing beta cyclodextrin

    International Nuclear Information System (INIS)

    Positron annihilation measurements were performed in aqueous solutions of beta-cyclodextrin, as well as in solid mixtures of this sugar with a long-chained alcohol, n-nonanol. Additionally, acoustic (sound speed, density and compressibility) experiments were done in aqueous beta-cyclodextrin and tert-butanol systems and in a three-component water-beta-cyclodextrin-tert-butanol system. The results show that in aqueous solution cyclodextrin does not form inclusion complexes with the alcohol, while the solid sugar-alcohol mixtures undergo slow changes in time, most probably caused by exchange of the guest molecule between the interior and exterior of the host molecule. (authors)

  1. Effect of Phonotactic Probability and Neighborhood Density on Word-Learning Configuration by Preschoolers with Typical Development and Specific Language Impairment

    Science.gov (United States)

    Gray, Shelley; Pittman, Andrea; Weinhold, Juliet

    2014-01-01

    Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…

  2. Sommerfeld enhancement of invisible dark matter annihilation in galaxies and galaxy clusters

    OpenAIRE

    Chan, Man Ho

    2016-01-01

    Recent observations indicate that core-like dark matter structures exist in many galaxies, while numerical simulations reveal a singular dark matter density profile at the center. In this article, I show that if the annihilation of dark matter particles gives invisible sterile neutrinos, the Sommerfeld enhancement of the annihilation cross-section can give a sufficiently large annihilation rate to solve the core-cusp problem. The resultant core density, core radius, and their scaling relation...
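The Sommerfeld enhancement invoked above can be illustrated in the attractive-Coulomb limit, where a closed form is standard: S(v) = x/(1 − e^{−x}) with x = πα/v. The coupling and velocities below are illustrative numbers, not the paper's values.

```python
import numpy as np

def sommerfeld_coulomb(v, alpha):
    """Coulomb-limit Sommerfeld factor S(v) = x / (1 - exp(-x)),
    x = pi * alpha / v, with v the relative velocity in units of c.
    S grows like 1/v at small velocity."""
    x = np.pi * alpha / v
    return x / (1.0 - np.exp(-x))

# illustrative coupling alpha = 0.01; a galactic-core dispersion of
# ~10 km/s (v ~ 3.3e-5 c) versus a cluster-scale ~1000 km/s
S_core = sommerfeld_coulomb(3.3e-5, 0.01)
S_cluster = sommerfeld_coulomb(3.3e-3, 0.01)
```

The 1/v scaling is why the enhancement is far larger in cold galactic cores than in hot clusters, which is what lets a velocity-dependent cross-section reconcile core formation with other constraints.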

  3. Black Hole Window into p-Wave Dark Matter Annihilation.

    Science.gov (United States)

    Shelton, Jessie; Shapiro, Stuart L; Fields, Brian D

    2015-12-01

    We present a new method to measure or constrain p-wave-suppressed cross sections for dark matter (DM) annihilations inside the steep density spikes induced by supermassive black holes. We demonstrate that the high DM densities, together with the increased velocity dispersion, within such spikes combine to make thermal p-wave annihilation cross sections potentially visible in γ-ray observations of the Galactic center (GC). The resulting DM signal is a bright central point source with emission originating from DM annihilations in the absence of a detectable spatially extended signal from the halo. We define two simple reference theories of DM with a thermal p-wave annihilation cross section and establish new limits on the combined particle and astrophysical parameter space of these models, demonstrating that Fermi Large Area Telescope is currently sensitive to thermal p-wave DM over a wide range of possible scenarios for the DM distribution in the GC. PMID:26684108

  4. Bubble chamber: antiproton annihilation

    CERN Multimedia

    1971-01-01

    These images show real particle tracks from the annihilation of an antiproton in the 80 cm Saclay liquid hydrogen bubble chamber. A negative kaon and a neutral kaon are produced in this process, as well as a positive pion. The invention of bubble chambers in 1952 revolutionized the field of particle physics, allowing real tracks left by particles to be seen and photographed by expanding liquid that had been heated to boiling point.

  5. Task 4.1: Development of a framework for creating a databank to generate probability density functions for process parameters

    International Nuclear Information System (INIS)

    PSA analysis should be based on the best available data for the types of equipment and systems in the plant. In some cases very limited data may be available for evolutionary designs or new equipment, especially in the case of passive systems. It has been recognized that difficulties arise in addressing the uncertainties related to the physical phenomena and in characterizing the parameters relevant to the passive system performance evaluation, owing to the unavailability of a consistent operational and experimental database. This lack of experimental evidence and validated data forces the analyst to resort to expert/engineering judgment to a large extent, thus making the results strongly dependent upon the expert elicitation process. This prompts the need for a framework for constructing a database to generate probability distributions for the parameters influencing the system behaviour. The objective of the task is to develop a consistent framework aimed at creating probability distributions for the parameters relevant to the passive system performance evaluation. In order to achieve this goal, considerable experience and engineering judgement are also required to determine which existing data are most applicable to the new systems, or which generic databases or models provide the best information for the system design. Finally, in the absence of documented specific reliability data, documented expert judgement elicited through a well-structured procedure could be used to arrive at sound probability distributions for the parameters of interest.

  6. The Integral of the Absolute Value of the Pinned Wiener Process-- Calculation of Its Probability Density by Numerical Integration

    OpenAIRE

    Rice, S. O.

    1982-01-01

    L. A. Shepp [1] has studied the distribution of the integral of the absolute value of the pinned Wiener process, and has expressed the moment generating function in terms of a Laplace transform. Here we apply Shepp's results to obtain an integral for the density of the distribution. This integral is then evaluated by numerical integration along a path in the complex plane.

  7. Semi-Annihilating Wino-Like Dark Matter

    CERN Document Server

    Spray, Andrew P

    2015-01-01

    Semi-annihilation is a generic feature of dark matter theories with symmetries larger than Z_2. We explore a model based on a Z_4-symmetric dark sector comprised of a scalar singlet and a "wino"-like fermion SU(2)_L triplet. This is the minimal example of semi-annihilation with a gauge-charged fermion. We study the interplay of the Sommerfeld effect in both annihilation and semi-annihilation channels. The modifications to the relic density allow otherwise-forbidden regions of parameter space and can substantially weaken indirect detection constraints. We perform a parameter scan and find that the entire region where the model comprises all the observed dark matter is accessible to current and planned direct and indirect searches.

  8. Dark Matter Annihilation in the First Galaxy Halos

    CERN Document Server

    Schon, Sarah; Avram, Cassandra A; Wyithe, J Stuart B; Barberio, Elisabetta

    2014-01-01

    We investigate the impact of energy released from self-annihilating dark matter on heating of gas in the small, high-redshift dark matter halos thought to host the first stars. A SUSY-neutralino-like particle is implemented as our dark matter candidate. The PYTHIA code is used to model the final, stable particle distributions produced during the annihilation process. We use an analytic treatment in conjunction with the code MEDEA2 to find the energy transfer and subsequent partition into heating, ionizing and Lyman alpha photon components. We consider a number of halo density models, dark matter particle masses and annihilation channels. We find that the injected energy from dark matter exceeds the binding energy of the gas within a $10^5$ - $10^6$ M$_\odot$ halo at redshifts above 20, preventing star formation in early halos in which primordial gas would otherwise cool. Thus we find that DM annihilation could delay the formation of the first galaxies.

  9. Dark matter annihilation in the first galaxy haloes

    Science.gov (United States)

    Schön, S.; Mack, K. J.; Avram, C. A.; Wyithe, J. S. B.; Barberio, E.

    2015-08-01

    We investigate the impact of energy released from self-annihilating dark matter (DM) on heating of gas in the small, high-redshift DM haloes thought to host the first stars. A supersymmetric (SUSY)-neutralino-like particle is implemented as our DM candidate. The PYTHIA code is used to model the final, stable particle distributions produced during the annihilation process. We use an analytic treatment in conjunction with the code MEDEA2 to find the energy transfer and subsequent partition into heating, ionizing and Lyman α photon components. We consider a number of halo density models, DM particle masses and annihilation channels. We find that the injected energy from DM exceeds the binding energy of the gas within a 10^5-10^6 M⊙ halo at redshifts above 20, preventing star formation in early haloes in which primordial gas would otherwise cool. Thus we find that DM annihilation could delay the formation of the first galaxies.

  10. Some further ideas on the systematic variation of the positron annihilation parameters in metals

    International Nuclear Information System (INIS)

    A new systematic correlation was found between some positron annihilation parameters and the electron density of the elements. An estimation of the S-electron density in transition metals has been made. (author)

  11. Dark matter annihilation near a black hole: Plateau versus weak cusp

    International Nuclear Information System (INIS)

    Dark matter annihilation in so-called spikes near black holes is believed to be an important channel for indirect dark matter detection. In the case of circular particle orbits, the density profile of dark matter has a plateau at small radii, the maximal density being limited by the annihilation cross section. However, in the general case of arbitrary velocity anisotropy the situation is different. In particular, for an isotropic velocity distribution the density profile cannot be shallower than r^(-1/2) in the very center. Indeed, a detailed study reveals that in many cases the term ''annihilation plateau'' is misleading, as the density actually continues to rise towards small radii and forms a weak cusp, ρ ∝ r^(-(β+1/2)), where β is the anisotropy coefficient. The annihilation flux, however, does not change much in the latter case, if averaged over an area larger than the annihilation radius.
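The plateau limit mentioned above follows from balancing the annihilation time against the age of the spike, ρ_pl ≈ m_χ/(⟨σv⟩ t). With illustrative numbers (a thermal-relic cross section and a 10-Gyr-old spike, not values from the paper):

```python
# Annihilation plateau density: rho_pl = m_chi / (<sigma v> * t).
# All parameter values below are illustrative assumptions.

GEV_TO_G = 1.783e-24          # grams per GeV/c^2
YR_TO_S = 3.156e7             # seconds per year

m_chi = 100.0 * GEV_TO_G      # 100 GeV candidate, in grams
sigma_v = 3e-26               # thermal-relic <sigma v>, cm^3/s
t = 1e10 * YR_TO_S            # spike age ~10 Gyr, in seconds

rho_pl = m_chi / (sigma_v * t)   # g/cm^3; ~2e-14 g/cm^3 here
```

Densities this large (about 10^10 GeV/cm³) dwarf the local halo density, which is why even a weak cusp above the nominal plateau changes the inner profile but not, as the abstract notes, the area-averaged annihilation flux by much.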

  12. Neutrino annihilation in hot plasma

    International Nuclear Information System (INIS)

    We consider neutrino annihilation in a heat bath, including annihilation via the photon. We show that the annihilation cross section has high and narrow peaks corresponding to a plasmon resonance. This yields an enormous enhancement factor of O(10^8) in the differential cross section as compared with the purely weak contribution. We also evaluate numerically the thermally averaged neutrino annihilation rate per particle in the heat bath of the early universe to be ⟨σ(νν̄ → e+e-)v⟩ ≅ 2.93 G_F^2 T^2. We have accounted for the final-state blocking factors as well as for the fact that the center-of-mass frame of collisions is not necessarily the rest frame of the heat bath. Despite the resonances, electromagnetic processes represent only a minor effect in the averaged annihilation rate. (orig.)

  14. A Herschel - SPIRE Survey of the Mon R2 Giant Molecular Cloud: Analysis of the Gas Column Density Probability Density Function

    Science.gov (United States)

    Pokhrel, R.; Gutermuth, R.; Ali, B.; Megeath, T.; Pipher, J.; Myers, P.; Fischer, W. J.; Henning, T.; Wolk, S. J.; Allen, L.; Tobin, J. J.

    2016-06-01

    We present a far-IR survey of the entire Mon R2 GMC with Herschel - SPIRE cross-calibrated with Planck - HFI data. We fit the SEDs of each pixel with a greybody function and an optimal beta value of 1.8. We find that mid-range column densities obtained from far-IR dust emission and near-IR extinction are consistent. For the entire GMC, we find that the column density histogram, or N-PDF, is lognormal below ~10^21 cm^-2. Above this value, the distribution takes a power law form with an index of -2.15. We analyze the gas geometry, N-PDF shape, and YSO content of a selection of subregions in the cloud. We find no regions with pure lognormal N-PDFs. The regions with a combination of lognormal and one power law N-PDF have a YSO cluster and a corresponding centrally concentrated gas clump. The regions with a combination of lognormal and two power law N-PDF have significant numbers of typically younger YSOs but no prominent YSO cluster. These regions are composed of an aggregate of closely spaced gas filaments with no concentrated dense gas clump. We find that for our fixed scale regions, the YSO count roughly correlates with the N-PDF power law index. The correlation appears steeper for single power law regions relative to two power law regions with a high column density cut-off, as a greater dense gas mass fraction is achieved in the former. A stronger correlation is found between embedded YSO count and the dense gas mass among our regions.
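A minimal sketch of the N-PDF shape described above: lognormal below the transition column density and a power law of index -2.15 above it, matched so the PDF is continuous. The lognormal width and peak are assumptions, not the paper's fit:

```python
import numpy as np

# Hedged sketch of the N-PDF: lognormal below the transition column
# density, power law with index -2.15 above it, amplitudes matched so
# the PDF is continuous. The lognormal parameters are assumptions.

N_t = 1.0e21           # transition column density, cm^-2
alpha = -2.15          # power-law index above N_t
sigma = 0.4            # assumed lognormal width (in ln N)
mu = np.log(3.0e20)    # assumed lognormal peak

def n_pdf(N):
    """Unnormalized probability density of column density N."""
    lognorm = np.exp(-(np.log(N) - mu)**2 / (2 * sigma**2)) / N
    # amplitude matched at N_t so the two branches join continuously
    A = np.exp(-(np.log(N_t) - mu)**2 / (2 * sigma**2)) / N_t / N_t**alpha
    return np.where(N < N_t, lognorm, A * N**alpha)

N = np.logspace(19.5, 23.0, 8)
print(n_pdf(N))
```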

  15. A $Herschel-SPIRE$ Survey of the Mon R2 Giant Molecular Cloud: Analysis of the Gas Column Density Probability Density Function

    CERN Document Server

    Pokhrel, R; Ali, B; Megeath, T; Pipher, J; Myers, P; Fischer, W J; Henning, T; Wolk, S J; Allen, L; Tobin, J J

    2016-01-01

    We present a far-IR survey of the entire Mon R2 GMC with $Herschel-SPIRE$ cross-calibrated with $Planck-HFI$ data. We fit the SEDs of each pixel with a greybody function and an optimal beta value of 1.8. We find that mid-range column densities obtained from far-IR dust emission and near-IR extinction are consistent. For the entire GMC, we find that the column density histogram, or N-PDF, is lognormal below $\\sim$10$^{21}$ cm$^{-2}$. Above this value, the distribution takes a power law form with an index of -2.16. We analyze the gas geometry, N-PDF shape, and YSO content of a selection of subregions in the cloud. We find no regions with pure lognormal N-PDFs. The regions with a combination of lognormal and one power law N-PDF have a YSO cluster and a corresponding centrally concentrated gas clump. The regions with a combination of lognormal and two power law N-PDF have significant numbers of typically younger YSOs but no prominent YSO cluster. These regions are composed of an aggregate of closely spaced gas fi...

  16. H2: entanglement, probability density function, confined Kratzer oscillator, universal potential and (Mexican hat- or bell-type) potential energy curves

    CERN Document Server

    Van Hooydonk, G

    2011-01-01

    We review harmonic oscillator theory for closed, stable quantum systems. The H2 potential energy curve (PEC) of Mexican hat-type, calculated with a confined Kratzer oscillator, is better than the Rydberg-Klein-Rees (RKR) H2 PEC. Compared with QM, the theory of chemical bonding is simplified, since a confined Kratzer oscillator gives the long sought for universal function, once called the Holy Grail of Molecular Spectroscopy. This is validated with HF, I2, N2 and O2 PECs. We quantify the entanglement of spatially separated H2 quantum states, which gives a braid view. The equal probability for H2, originating either from HA+HB or HB+HA, is quantified with a Gauss probability density function. At the Bohr scale, confined harmonic oscillators behave properly at all extremes of bound two-nucleon quantum systems and are likely to be useful also at the nuclear scale.

  17. Influence of the level of fit of a density probability function to wind-speed data on the WECS mean power output estimation

    Energy Technology Data Exchange (ETDEWEB)

    Carta, Jose A. [Department of Mechanical Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Ramirez, Penelope; Velazquez, Sergio [Department of Renewable Energies, Technological Institute of the Canary Islands, Pozo Izquierdo Beach s/n, 35119 Santa Lucia, Gran Canaria, Canary Islands (Spain)

    2008-10-15

    Static methods which are based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature related to wind energy. In the static method which we use in this paper, for a given wind regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral, usually using numerical evaluation techniques, of the product of these two functions. In this paper an analysis is made of the influence of the level of fit between an empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error ε made in the estimation of the mean annual power output of a WECS. The mean power output calculated through the use of a quasi-dynamic or chronological method, that is to say using time-series of wind speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R² statistic (Ra²). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) that the Ra² statistic might be useful as an initial gross indicator of the relative error made in the mean annual power output estimation of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as Ra² increases. (author)

  18. Influence of the level of fit of a density probability function to wind-speed data on the WECS mean power output estimation

    International Nuclear Information System (INIS)

    Static methods which are based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature related to wind energy. In the static method which we use in this paper, for a given wind regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral, usually using numerical evaluation techniques, of the product of these two functions. In this paper an analysis is made of the influence of the level of fit between an empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error ε made in the estimation of the mean annual power output of a WECS. The mean power output calculated through the use of a quasi-dynamic or chronological method, that is to say using time-series of wind speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R2 statistic (Ra2). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) that the Ra2 statistic might be useful as an initial gross indicator of the relative error made in the mean annual power output estimation of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as Ra2 increases
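The static method described above, mean power as the integral of the wind-speed PDF times the power curve, can be sketched numerically; the Weibull parameters and the idealized power curve below are assumptions, not the paper's fitted models:

```python
import numpy as np

# Hedged sketch of the static method: mean power E[P] = integral of
# f(v) * P(v) dv for a wind-speed PDF f and a turbine power curve P.
# Weibull parameters and the idealized power curve are assumptions.

k, c = 2.0, 8.0                         # assumed Weibull shape / scale (m/s)
P_rated = 330.0                         # kW, one of the turbines mentioned
v_in, v_rated, v_out = 3.0, 13.0, 25.0  # assumed cut-in / rated / cut-out speeds

def weibull_pdf(v):
    return (k / c) * (v / c)**(k - 1) * np.exp(-(v / c)**k)

def power_curve(v):
    """Idealized curve: cubic ramp from cut-in to rated, flat to cut-out."""
    p = np.clip(P_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3), 0.0, P_rated)
    return np.where((v >= v_in) & (v <= v_out), p, 0.0)

# numerical evaluation of the integral on a fine grid
v = np.linspace(0.0, 30.0, 3001)
mean_power = float(np.sum(weibull_pdf(v) * power_curve(v)) * (v[1] - v[0]))
print(f"estimated mean power output: {mean_power:.1f} kW")
```

The same integral evaluated with the empirical PDF versus the fitted model gives the relative error ε the paper studies.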

  19. ICA Blind Signal Separation Based on a New Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    张娟娟; 邸双亮

    2014-01-01

    This paper is concerned with the blind source separation (BSS) problem for mixtures of super-Gaussian and sub-Gaussian signals, using the maximum likelihood method based on independent component analysis (ICA). We construct a new type of probability density function (PDF), different from the PDFs used to separate mixed signals in previously published papers. Applying the newly constructed PDF to estimate the probability density of super-Gaussian and sub-Gaussian signals (assuming the source signals are independent of each other), it is not necessary to change the parameter values artificially, and the separation can be performed adaptively. Numerical experiments verify the feasibility of the newly constructed PDF; the convergence time and the separation effect are both improved compared with the original algorithm.
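The record does not reproduce the paper's new PDF, so as an illustrative stand-in, here is a classic FastICA-style separation (tanh nonlinearity, symmetric decorrelation) of one super-Gaussian and one sub-Gaussian source; the sources, mixing matrix, and sample size are all assumptions:

```python
import numpy as np

# Illustrative stand-in for the ICA separation described in the abstract.
# The record does not give the new PDF, so a classic FastICA fixed-point
# iteration with a tanh nonlinearity is used instead.

rng = np.random.default_rng(1)
n = 20000
s = np.vstack([
    rng.laplace(size=n),                      # super-Gaussian source
    rng.uniform(-np.sqrt(3), np.sqrt(3), n),  # sub-Gaussian source
])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                    # assumed mixing matrix
x = A @ s

# Whiten the mixtures.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d**-0.5) @ E.T @ x

# Symmetric FastICA: w <- E[z g(w.z)] - E[g'(w.z)] w, then re-orthogonalize.
W = np.linalg.qr(rng.standard_normal((2, 2)))[0]
for _ in range(100):
    y = W @ z
    g = np.tanh(y)
    W = (g @ z.T) / n - np.diag((1 - g**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt

y = W @ z
C = np.corrcoef(np.vstack([y, s]))[:2, 2:]   # recovered vs. true sources
print(np.abs(C).round(2))                    # ~identity up to permutation/sign
```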

  20. Search for t bar t production in the e+jets channel using the probability density estimation (PDE) method at D0

    International Nuclear Information System (INIS)

    The authors construct probability density functions for signal and background events in multi-dimensional space, using Monte Carlo samples. A variant of the Bayes' discriminant function is then applied to classify signal and background events. The effect of some kinematic quantities on the performance of the discriminant has been studied and the results of applying the PDE method to search for the top quark in D0 data (p bar p collisions at √s = 1.8 TeV) will be presented
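A minimal sketch of the PDE idea, assuming Gaussian kernel density estimates for the signal and background densities and a Bayes-style discriminant D = f_s/(f_s + f_b); the two-dimensional toy samples are illustrative:

```python
import numpy as np

# Hedged sketch of the PDE idea: estimate multidimensional probability
# densities for signal and background from Monte Carlo samples (here a
# Gaussian kernel density estimate) and classify with a Bayes-style
# discriminant D = f_s / (f_s + f_b). The toy 2-D samples are assumptions.

rng = np.random.default_rng(0)
signal = rng.normal(loc=[2.0, 2.0], scale=0.8, size=(500, 2))
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))

def kde(points, x, h=0.5):
    """Gaussian kernel density estimate at a single point x."""
    d2 = np.sum((points - x)**2, axis=1)
    return np.mean(np.exp(-d2 / (2 * h**2))) / (2 * np.pi * h**2)

def discriminant(x):
    fs, fb = kde(signal, x), kde(background, x)
    return fs / (fs + fb)

print(discriminant(np.array([2.0, 2.0])))  # signal-like, close to 1
print(discriminant(np.array([0.0, 0.0])))  # background-like, close to 0
```

Events are then kept or rejected by cutting on the discriminant value.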

  1. Determination and applications of enhancement factors for positron and ortho-positronium annihilations

    International Nuclear Information System (INIS)

    Electron-positron annihilation rates calculated directly from the electron and positron densities are known to underestimate the true annihilation rate. A correction factor, known as the enhancement factor, allows for the local increase of the electron density around the positron caused by the attractive electron-positron interaction. Enhancement factors are given for positrons annihilating with the 1s electron in H, He+, He, Li2+, and Li+. The enhancement factor for a free positron annihilating with He+ and He is found to be close to that of ortho-positronium (i.e., Ps in its triplet state) annihilating with these atoms. The enhancement factor for Ps-He scattering is used in conjunction with the known annihilation rate for pickoff annihilation to derive a scattering length of 1.47a0 for Ps-He scattering. Further, enhancement factors for e+-Ne and e+-Ar annihilation are used in conjunction with the pickoff annihilation rate to estimate scattering lengths of 1.46a0 for Ps-Ne scattering and 1.75a0 for Ps-Ar scattering

  2. Ionization compression impact on dense gas distribution and star formation, Probability density functions around H ii regions as seen by Herschel

    CERN Document Server

    Tremblin, P; Minier, V; Didelon, P; Hill, T; Anderson, L D; Motte, F; Zavagno, A; André, Ph; Arzoumanian, D; Audit, E; Benedettini, M; Bontemps, S; Csengeri, T; Di Francesco, J; Giannini, T; Hennemann, M; Luong, Q Nguyen; Marston, A P; Peretto, N; Rivera-Ingraham, A; Russeil, D; Rygl, K L J; Spinoglio, L; White, G J

    2014-01-01

    Ionization feedback should impact the probability distribution function (PDF) of the column density around the ionized gas. We aim to quantify this effect and discuss its potential link to the Core and Initial Mass Function (CMF/IMF). We used Herschel column density maps of several regions observed within the HOBYS key program in a systematic way: M16, the Rosette and Vela C molecular clouds, and the RCW 120 H ii region. We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a double-peaked or enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas, while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion able t...

  3. Dark Matter Annihilation at the Galactic Center

    Energy Technology Data Exchange (ETDEWEB)

    Linden, Timothy Ryan [Univ. of California, Santa Cruz, CA (United States)

    2013-06-01

    Observations by the WMAP and PLANCK satellites have provided extraordinarily accurate measurements of the densities of baryonic matter, dark matter, and dark energy in the universe. These observations indicate that our universe is composed of approximately five times as much dark matter as baryonic matter. However, efforts to detect a particle responsible for the energy density of dark matter have been unsuccessful. Theoretical models have indicated that a leading candidate for the dark matter is the lightest supersymmetric particle, which may be stable due to a conserved R-parity. This dark matter particle would still be capable of interacting with baryons via weak-force interactions in the early universe, a process which was found to naturally explain the observed relic abundance of dark matter today. These residual annihilations can persist, albeit at a much lower rate, in the present universe, providing a detectable signal from dark matter annihilation events which occur throughout the universe. Simulations calculating the distribution of dark matter in our galaxy almost universally predict the galactic center of the Milky Way Galaxy (GC) to provide the brightest signal from dark matter annihilation due to its relative proximity and large simulated dark matter density. Recent advances in telescope technology have allowed for the first multiwavelength analysis of the GC, with suitable effective exposure, angular resolution, and energy resolution in order to detect dark matter particles with properties similar to those predicted by the WIMP miracle. In this work, I describe ongoing efforts which have successfully detected an excess in gamma-ray emission from the region immediately surrounding the GC, which is difficult to describe in terms of standard diffuse emission predicted in the GC region. While the jury is still out on any dark matter interpretation of this excess, I describe several related observations which may indicate a dark matter origin. Finally, I discuss the

  4. Monomer Migration and Annihilation Processes

    Institute of Scientific and Technical Information of China (English)

    KE Jian-Hong; LIN Zhen-Quan; ZHUANG You-Yi

    2005-01-01

    We propose a two-species monomer migration-annihilation model, in which monomer migration reactions occur between any two aggregates of the same species and monomer annihilation reactions occur between two different species. Based on the mean-field rate equations, we investigate the evolution behaviors of the processes. For the case with an annihilation rate kernel proportional to the sizes of the reactants, the aggregate size distribution of either species approaches the modified scaling form in the symmetrical initial case, while for the asymmetrical initial case the heavy species with a large initial data scales according to the conventional form and the light one does not scale. Moreover, at most one species can survive in the end. For the case with a constant annihilation rate kernel, both species may scale according to the conventional scaling law in the symmetrical case and survive together at the end.

  5. Positron annihilation studies of organic superconductivity

    International Nuclear Information System (INIS)

    The positron lifetimes of two organic superconductors, κ-(ET)2Cu(NCS)2 and κ-(ET)2Cu[N(CN)2]Br, are measured as a function of temperature across Tc. A drop of positron lifetime below Tc is observed. Positron-electron momentum densities are measured by using 2D-ACAR to search for the Fermi surface in κ-(ET)2Cu[N(CN)2]Br. Positron density distributions and positron-electron overlaps are calculated by using the orthogonalized linear combination atomic orbital (OLCAO) method to interpret the temperature dependence due to the local charge transfer which is inferred to relate to the superconducting transition. 2D-ACAR results in κ-(ET)2Cu[N(CN)2]Br are compared with theoretical band calculations based on a first-principles local density approximation. Importance of performing accurate band calculations for the interpretation of positron annihilation data is emphasized

  6. Applications of slow positrons to cancer research: Search for selectivity of positron annihilation to skin cancer

    Energy Technology Data Exchange (ETDEWEB)

    Jean, Y.C. [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States)]. E-mail: jeany@umkc.edu; Li Ying [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Liu Gaung [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Chen, Hongmin [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Zhang Junjie [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Gadzia, Joseph E. [Dermatology, Department of Internal Medicine, University of Kansas Medical Center, Kansas City, KS 66103 (United States); Kansas Medical Clinic, Topeka, KS 66614 (United States)

    2006-02-28

    Slow positrons and positron annihilation spectroscopy (PAS) have been applied to medical research in searching for positron annihilation selectivity to cancer cells. We report the results of positron lifetime and Doppler broadening energy spectroscopies in human skin samples with and without cancer as a function of positron incident energy (up to 8 μm depth) and found that the positronium annihilates at a significantly lower rate and forms with a lower probability in the samples having either basal cell carcinoma (BCC) or squamous cell carcinoma (SCC) than in the normal skin. The significant selectivity of positron annihilation to skin cancer may open a new research area of developing positron annihilation spectroscopy as a novel medical tool to detect cancer formation externally and non-invasively at the early stages.

  7. Modeling of positron states and annihilation in solids

    International Nuclear Information System (INIS)

    Theoretical models and computational aspects to describe positron states and to predict positron annihilation characteristics in solids are discussed. The comparison of the calculated positron lifetimes, core annihilation lineshapes, and two-dimensional angular correlation maps with experimental results are used in identifying the structure (including the chemical composition) of vacancy-type defects and their development e.g. during thermal annealing. The basis of the modeling is the two-component density-functional theory. The ensuing approximations and the state-of-the-art electronic-structure computation methods enable practical schemes with a quantitative predicting power. (author)

  8. Relativistic hydrodynamics, heavy ion reactions and antiproton annihilation

    International Nuclear Information System (INIS)

    The application of relativistic hydrodynamics to relativistic heavy ions and antiproton annihilation is summarized. Conditions for validity of hydrodynamics are presented. Theoretical results for inclusive particle spectra, pion production and flow analysis are given for medium energy heavy ions. The two-fluid model is introduced and results presented for reactions from 800 MeV per nucleon to 15 GeV on 15 GeV per nucleon. Temperatures and densities attained in antiproton annihilation are given. Finally, signals which might indicate the presence of a quark-gluon plasma are briefly surveyed

  9. The Derivation of the Probability Density Function of the t Distribution

    Institute of Scientific and Technical Information of China (English)

    彭定忠; 张映辉; 刘朝才

    2012-01-01

    The t distribution is one of the three important distributions widely applied in mathematical statistics. Most textbooks either omit the derivation of its probability density function or use only the direct method. In this paper, the transform method is used for the derivation, which simplifies the operation and reduces the computational difficulty.
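As a sketch of the transform method the abstract refers to (standard textbook material, assuming T = Z/√(V/n) with Z ~ N(0,1) independent of V ~ χ²_n):

```latex
% Joint density of the independent pair (Z, V):
f_{Z,V}(z,v)=\frac{e^{-z^{2}/2}}{\sqrt{2\pi}}\cdot
\frac{v^{n/2-1}e^{-v/2}}{2^{n/2}\Gamma(n/2)},\qquad v>0.

% Change variables (t,u)=(z\sqrt{n/v},\,v), so z=t\sqrt{u/n} with
% Jacobian |\partial z/\partial t|=\sqrt{u/n}, then integrate out u:
f_T(t)=\int_{0}^{\infty} f_{Z,V}\!\left(t\sqrt{u/n},\,u\right)\sqrt{u/n}\,du
=\frac{\Gamma\!\left(\tfrac{n+1}{2}\right)}
      {\sqrt{n\pi}\,\Gamma\!\left(\tfrac{n}{2}\right)}
\left(1+\frac{t^{2}}{n}\right)^{-\frac{n+1}{2}}.
```

The u-integral is a Gamma integral, which is what makes the transform route shorter than the direct method.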

  10. Electron tunneling of photochemical reactions on metal surfaces: Nonequilibrium Green's function-density functional theory approach to photon energy dependence of reaction probability

    International Nuclear Information System (INIS)

    We have developed a theoretical model of photoinduced reactions on metal surfaces initiated by the substrate/indirect excitation mechanism using the nonequilibrium Green's function approach. We focus on electron transfer, which consists of (1) electron-hole pair creation, (2) transport of created hot electrons, and (3) tunneling of hot electrons to form an anion resonance. We assume that steps (1), (2), and (3) are separable. By this assumption, the electron dynamics might be restated as a tunneling problem of an open system. Combining the Keldysh time-independent formalism with the simple transport theory introduced by Berglund and Spicer, we present a practical scheme for first-principle calculation of the reaction probability as a function of incident photon energy. The method is illustrated by application to the photoinduced desorption/dissociation of O2 on a Ag(110) surface by adopting density functional theory

  11. Lexicographic probability, conditional probability, and nonstandard probability

    OpenAIRE

    Halpern, Joseph Y.

    2003-01-01

    The relationship between Popper spaces (conditional probability spaces that satisfy some regularity conditions), lexicographic probability systems (LPS's), and nonstandard probability spaces (NPS's) is considered. If countable additivity is assumed, Popper spaces and a subclass of LPS's are equivalent; without the assumption of countable additivity, the equivalence no longer holds. If the state space is finite, LPS's are equivalent to NPS's. However, if the state space is infinite, NPS's are ...

  12. The influence of antioxidant on positron annihilation in polypropylene

    International Nuclear Information System (INIS)

    The purpose of this report is to check the influence of the carbonyl groups (CG), created by oxygen naturally dissolved in a polymer matrix and by the source irradiation, on annihilation characteristics of free positrons using the positron annihilation lifetime spectroscopy (PALS) and coincidence Doppler-broadening spectroscopy (CDBS). Positron annihilation in a pure polypropylene (PP) and in an antioxidant-containing polypropylene (PPA) sample at room and low temperatures has been studied by CDBS. PALS has been used as an o-Ps (ortho-positronium) formation monitor. The momentum density distributions of electrons obtained by CDBS at the beginning of measurements have been compared to that at the o-Ps intensity saturation level. It has been shown that the initial concentration of carbonyl groups in a PP sample is high, while for an antioxidant-containing sample, PPA, carbonyl groups are not detected by CDBS. CDBS spectra for a PP can be explained by annihilation of free positrons with the oxygen contained in the carbonyl groups. For a PPA sample, no significant contribution of annihilation with oxygen core electrons can be concluded. (Y. Kazumata)

  13. First star formation with dark matter annihilation

    CERN Document Server

    Ripamonti, E; Ferrara, A; Schneider, R; Bressan, A; Marigo, P

    2010-01-01

    We study the effects of WIMP Dark Matter Annihilations (DMAs) on the evolution of primordial gas clouds hosting the first stars. We follow the collapse of gas and DM within a 1e6 Msun halo virializing at redshift z=20, from z=1000 to slightly before the formation of a hydrostatic core, properly including gas heating/cooling and chemistry processes induced by DMAs, and exploring the dependency of the results on different parameters (DM particle mass, self-annihilation cross section, gas opacity, feedback strength). Independently of such parameters, when the central baryon density, n_c, is lower than the critical density, n_crit ~1e9-1e13 #/cm^3, corresponding to a model-dependent balance between DMA energy input and gas cooling rate, DMA ionizations catalyze an increase in the H2 abundance by a factor ~100. The increased cooling moderately reduces the temperature (by ~30%) but does not significantly reduce the fragmentation mass scale. For n_c > n_crit, the DMA energy injection exceeds the cooling, with the ex...

  14. Weak annihilation cusp inside the dark matter spike about a black hole

    OpenAIRE

    Shapiro, Stuart L.; Shelton, Jessie

    2016-01-01

    We reinvestigate the effect of annihilations on the distribution of collisionless dark matter (DM) in a spherical density spike around a massive black hole. We first construct a very simple, pedagogic, analytic model for an isotropic phase space distribution function that accounts for annihilation and reproduces the "weak cusp" found by Vasiliev for DM deep within the spike and away from its boundaries. The DM density in the cusp varies as $r^{-1/2}$ for $s$-wave annihilation, where $r$ is th...

  15. Development of a pico-second life-time spectrometer for positron annihilation studies

    International Nuclear Information System (INIS)

    Positron annihilation technique is a sensitive probe to investigate various physico-chemical phenomena due to the ability to provide information about the electron momentum and density in any medium. While measurements on the Doppler broadening and angular correlation of annihilation photons provide information about the electron momentum, the electron density at the annihilation site is obtained, by the positron life-time measurement. This report describes the development, optimization and calibration of a high resolution life-time spectrometer (FWHM=230 ps), based on fast-fast coincidence technique, a relatively new concept in nuclear timing spectroscopy. (author). 4 refs., 9 figs., 1 tab

  16. Confronting Galactic center and dwarf spheroidal gamma-ray observations with cascade annihilation models

    CERN Document Server

    Dutta, Bhaskar; Ghosh, Tathagata; Strigari, Louis E

    2015-01-01

    Many particle dark matter models predict that the dark matter undergoes cascade annihilations, i.e. the annihilation products are 4-body final states. In the context of model-independent cascade annihilation models, we study the compatibility of the dark matter interpretation of the Fermi-LAT Galactic center gamma-ray emission with null detections from dwarf spheroidal galaxies. For canonical values of the Milky Way density profile and the local dark matter density, we find that the dark matter interpretation of the Galactic center emission is strongly constrained. However, uncertainties in the dark matter distribution weaken the constraints and leave open dark matter interpretations over a wide range of mass scales.

  17. Sommerfeld enhancement of invisible dark matter annihilation in galaxies and galaxy clusters

    CERN Document Server

    Chan, Man Ho

    2016-01-01

    Recent observations indicate that core-like dark matter structures exist in many galaxies, while numerical simulations reveal a singular dark matter density profile at the center. In this article, I show that if the annihilation of dark matter particles gives invisible sterile neutrinos, the Sommerfeld enhancement of the annihilation cross-section can give a sufficiently large annihilation rate to solve the core-cusp problem. The resultant core density, core radius, and their scaling relation generally agree with recent empirical fits from observations. Also, this model predicts that the resultant core-like structures in dwarf galaxies can be easily observed, but not for large normal galaxies and galaxy clusters.
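The Sommerfeld enhancement invoked above can be illustrated with the standard Coulomb-limit formula S(v) = x/(1 - e^(-x)), x = πα/v; the dark-sector coupling below is an assumption, not a value from the record:

```python
import numpy as np

# Hedged illustration of Sommerfeld enhancement of an annihilation
# cross-section, using the standard Coulomb-limit formula
# S(v) = x / (1 - exp(-x)) with x = pi * alpha / v.
# The coupling value is an assumption, not taken from the record.

alpha_dark = 0.01  # assumed dark-sector coupling

def sommerfeld(v):
    """Enhancement factor for relative velocity v (in units of c)."""
    x = np.pi * alpha_dark / v
    return x / (1.0 - np.exp(-x))

for v in (1e-1, 1e-3, 1e-5):
    print(f"v = {v:.0e}: S = {sommerfeld(v):.4g}")
```

The 1/v scaling at small velocities is why the enhancement matters most in slow, dense halo centers.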

  18. Sommerfeld enhancement of invisible dark matter annihilation in galaxies and galaxy clusters

    Science.gov (United States)

    Chan, Man Ho

    2016-07-01

    Recent observations indicate that core-like dark matter structures exist in many galaxies, while numerical simulations reveal a singular dark matter density profile at the center. In this article, I show that if the annihilation of dark matter particles gives invisible sterile neutrinos, the Sommerfeld enhancement of the annihilation cross-section can give a sufficiently large annihilation rate to solve the core-cusp problem. The resultant core density, core radius, and their scaling relation generally agree with recent empirical fits from observations. Also, this model predicts that the resultant core-like structures in dwarf galaxies can be easily observed, but not for large normal galaxies and galaxy clusters.

  19. Moments of the Hilbert-Schmidt probability distributions over determinants of real two-qubit density matrices and of their partial transposes

    CERN Document Server

    Slater, Paul B

    2010-01-01

    The nonnegativity of the determinant of the partial transpose of a two-qubit (4 x 4) density matrix is both a necessary and sufficient condition for its separability. While the determinant is restricted to the interval [0,1/256], the determinant of the partial transpose can range over [-1/16,1/256], with negative values corresponding to entangled states. We report here the exact values of the first nine moments of the probability distribution of the partial transpose over this interval, with respect to the Hilbert-Schmidt (metric volume element) measure on the nine-dimensional convex set of real two-qubit density matrices. Rational functions C_{2 j}(m), yielding the coefficients of the 2j-th power of even polynomials occurring at intermediate steps in our derivation of the m-th moment, emerge. These functions possess poles at finite series of consecutive half-integers (m=-3/2,-1/2,...,(2j-1)/2), and certain (trivial) roots at finite series of consecutive natural numbers (m=0, 1,...). Additionally, the (nontri...

  20. Positron annihilation in SiO2

    International Nuclear Information System (INIS)

    Defects in different types of crystalline and fused quartz have been studied by conventional coincidence positron annihilation and optical absorption techniques before and after 60Co gamma irradiation with 500 krad, 2 Mrad and 15.8 Mrad. Samples of synthetic powdered quartz (SPQ), natural quartz (NQ), low-OH synthetic monocrystal quartz (LSMQ), high-OH fused quartz (HFQ) and low-OH fused quartz (LFQ) have been investigated. Two- and three-component analyses of the positron lifetime spectra have been applied. Data on lifetimes (τ), intensities (I) and mean lifetimes have been obtained by exponential fitting of the spectra. In non-irradiated SPQ and LSMQ, large differences in the values of I2 (1.53% vs. 16.0%) and τ2 (1460 ps vs. 478 ps) have been noticed. This is explained by an increased number of dislocations in the synthetic quartz. The τ2 is interpreted as an apparent mixed lifetime combining pick-off annihilation of o-Ps and positron annihilation in microcracks. The values of τ1 in HFQ (178 ps) and in LFQ (173 ps) are attributed to positron annihilation in small crystalline areas in the glass. Because of the sharp increase in Ps formation probability in the amorphous state, the intensity I3 of the longest component in these samples is of the order of 50%. After gamma irradiation, the creation of colour centres has been observed only in SPQ and LFQ, which is connected with the Al substitutional impurity. The newly detected diffuse band at 215 nm in the UV spectra of irradiated LFQ is attributed to a positively charged oxygen vacancy (E'1 centre), which explains the lack of difference between the parameters of irradiated and non-irradiated LFQ. The increased mean positron lifetime of irradiated HFQ is ascribed to the creation of negatively charged defects able to trap positrons. Except for HFQ, all samples have surprisingly shown a slight decrease in their mean positron lifetime values after low-dose irradiation. The authors ascribe this to possible self-annealing of some defects due

  1. Biological effectiveness of antiproton annihilation

    CERN Document Server

    Holzscheiter, Michael H.; Bassler, Niels; Beyer, Gerd; De Marco, John J.; Doser, Michael; Ichioka, Toshiyasu; Iwamoto, Keisuke S.; Knudsen, Helge V.; Landua, Rolf; Maggiore, Carl; McBride, William H.; Møller, Søren Pape; Petersen, Jorgen; Smathers, James B.; Skarsgard, Lloyd D.; Solberg, Timothy D.; Uggerhøj, Ulrik I.; Withers, H.Rodney; Vranjes, Sanja; Wong, Michelle; Wouters, Bradly G.

    2004-01-01

    We describe an experiment designed to determine whether or not the densely ionizing particles emanating from the annihilation of antiprotons produce an increase in “biological dose” in the vicinity of the narrow Bragg peak for antiprotons compared to protons. This experiment is the first direct measurement of the biological effects of antiproton annihilation. The experiment has been approved by the CERN Research Board for running at the CERN Antiproton Decelerator (AD) as AD-4/ACE (Antiproton Cell Experiment) and began taking data in June 2003. The background, description, and current status of the experiment are given.

  2. Positron Annihilation 3-D Momentum Spectrometry by Synchronous 2D-ACAR and DBAR

    Science.gov (United States)

    Burggraf, Larry W.; Bonavita, Angelo M.; Williams, Christopher S.; Fagan-Kelly, Stefan B.; Jimenez, Stephen M.

    2015-05-01

    A positron annihilation spectroscopy system capable of determining 3D electron-positron (e--e+) momentum densities has been constructed and tested. In this technique two opposed HPGe strip detectors measure angular coincidence of annihilation radiation (ACAR) and Doppler broadening of annihilation radiation (DBAR) in coincidence to produce 3D momentum datasets in which the parallel momentum component obtained from the DBAR measurement can be selected for annihilation events that possess a particular perpendicular momentum component observed in the 2D ACAR spectrum. A true 3D momentum distribution can also be produced. Measurement of 3-D momentum spectra in oxide materials has been demonstrated including O-atom defects in 6H SiC and silver atom substitution in lithium tetraborate crystals. Integration of the 3-D momentum spectrometer with a slow positron beam for future surface resonant annihilation spectrometry measurements will be described. Sponsorship from Air Force Office of Scientific Research

  3. D-brane scattering and annihilation

    CERN Document Server

    D'Amico, Guido; Kleban, Matthew; Schillo, Marjorie

    2014-01-01

    We study the dynamics of parallel brane-brane and brane-antibrane scattering in string theory in flat spacetime, focusing on the pair production of open strings that stretch between the branes. We are particularly interested in the case of scattering at small impact parameter $b < l_s$, where there is a tachyon in the spectrum when a brane and an antibrane approach within a string length. Our conclusion is that despite the tachyon, branes and antibranes can pass through each other with only a very small probability of annihilating, so long as $g_s$ is small and the relative velocity $v$ is neither too small nor too close to 1. Our analysis is relevant also to the case of charged open string production in world-volume electric fields, and we make use of this T-dual scenario in our analysis. We briefly discuss the application of our results to a stringy model of inflation involving moving branes.

  4. Biological effectiveness of antiproton annihilation

    DEFF Research Database (Denmark)

    Holzscheiter, M.H.; Agazaryan, N.; Bassler, Niels;

    2004-01-01

    We describe an experiment designed to determine whether or not the densely ionizing particles emanating from the annihilation of antiprotons produce an increase in ‘‘biological dose’’ in the vicinity of the narrow Bragg peak for antiprotons compared to protons. This experiment is the first direct...

  5. New evolution equations for the joint response-excitation probability density function of stochastic solutions to first-order nonlinear PDEs

    Science.gov (United States)

    Venturi, D.; Karniadakis, G. E.

    2012-08-01

    By using functional integral methods we determine new evolution equations satisfied by the joint response-excitation probability density function (PDF) associated with the stochastic solution to first-order nonlinear partial differential equations (PDEs). The theory is presented for both fully nonlinear and for quasilinear scalar PDEs subject to random boundary conditions, random initial conditions or random forcing terms. Particular applications are discussed for the classical linear and nonlinear advection equations and for the advection-reaction equation. By using a Fourier-Galerkin spectral method we obtain numerical solutions of the proposed response-excitation PDF equations. These numerical solutions are compared against those obtained by using more conventional statistical approaches such as probabilistic collocation and multi-element probabilistic collocation methods. It is found that the response-excitation approach yields accurate predictions of the statistical properties of the system. In addition, it allows one to ascertain directly the tails of probabilistic distributions, thus facilitating the assessment of rare events and associated risks. The computational cost of the response-excitation method is orders of magnitude smaller than that of more conventional statistical approaches if the PDE is subject to high-dimensional random boundary or initial conditions. The question of high-dimensionality for evolution equations involving multidimensional joint response-excitation PDFs is also addressed.
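For the classical linear advection equation mentioned above, u_t + c u_x = 0, the one-point response PDF has a simple exact description: the initial randomness is transported unchanged along characteristics, so the PDF at (x, t) is the initial PDF evaluated at x − ct. A minimal Monte Carlo check of this pushforward, for an assumed random initial condition u(x,0) = A·sin(x) with A ~ N(0,1) (an illustrative setup, not one of the paper's test cases):

```python
import numpy as np

rng = np.random.default_rng(0)

c, x, t = 1.0, 0.7, 2.0                 # wave speed and observation point (assumed values)
xi = x - c * t                          # foot of the characteristic through (x, t)

# Random initial condition u(x,0) = A*sin(x), A ~ N(0,1).  The exact solution of
# u_t + c*u_x = 0 is u(x,t) = A*sin(x - c*t), so the one-point response PDF at
# (x, t) is Gaussian with standard deviation |sin(x - c*t)|.
A = rng.standard_normal(200_000)
u = A * np.sin(xi)

print(f"sample std = {u.std():.4f}, exact std = {abs(np.sin(xi)):.4f}")
```

The response-excitation equations of the abstract generalize this bookkeeping to nonlinear and quasilinear PDEs, where no such closed-form pushforward exists.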

  6. New evolution equations for the joint response-excitation probability density function of stochastic solutions to first-order nonlinear PDEs

    International Nuclear Information System (INIS)

    By using functional integral methods we determine new evolution equations satisfied by the joint response-excitation probability density function (PDF) associated with the stochastic solution to first-order nonlinear partial differential equations (PDEs). The theory is presented for both fully nonlinear and for quasilinear scalar PDEs subject to random boundary conditions, random initial conditions or random forcing terms. Particular applications are discussed for the classical linear and nonlinear advection equations and for the advection–reaction equation. By using a Fourier–Galerkin spectral method we obtain numerical solutions of the proposed response-excitation PDF equations. These numerical solutions are compared against those obtained by using more conventional statistical approaches such as probabilistic collocation and multi-element probabilistic collocation methods. It is found that the response-excitation approach yields accurate predictions of the statistical properties of the system. In addition, it allows one to ascertain directly the tails of probabilistic distributions, thus facilitating the assessment of rare events and associated risks. The computational cost of the response-excitation method is orders of magnitude smaller than that of more conventional statistical approaches if the PDE is subject to high-dimensional random boundary or initial conditions. The question of high-dimensionality for evolution equations involving multidimensional joint response-excitation PDFs is also addressed.

  7. Deduction and Validation of an Eulerian-Eulerian Model for Turbulent Dilute Two-Phase Flows by Means of the Phase Indicator Function Disperse Elements Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    Santiago Lain; Ricardo Aliod

    2000-01-01

    A statistical formalism overcoming some conceptual and practical difficulties arising in existing two-phase flow (2PHF) mathematical modelling has been applied to propose a model for dilute 2PHF turbulent flows. Phase interaction terms with a clear physical meaning enter the equations and the formalism provides some guidelines for the avoidance of closure assumptions or the rational approximation of these terms. Continuous phase averaged continuity, momentum, turbulent kinetic energy and turbulence dissipation rate equations have been rigorously and systematically obtained in a single step. These equations display a structure similar to that for single-phase flows. It is also assumed that dispersed phase dynamics is well described by a probability density function (pdf) equation and Eulerian continuity, momentum and fluctuating kinetic energy equations for the dispersed phase are deduced. An extension of the standard k-ε turbulence model for the continuous phase is used. A gradient transport model is adopted for the dispersed phase fluctuating fluxes of momentum and kinetic energy at the non-colliding, large-inertia limit. This model is then used to predict the behaviour of three axisymmetric turbulent jets of air laden with solid particles varying in size and concentration. Qualitative and quantitative numerical predictions compare reasonably well with the three different sets of experimental results, studying the influence of particle size, loading ratio and flow confinement velocity.

  8. Probability Density Function Method for Observing Reconstructed Attractor Structure

    Institute of Scientific and Technical Information of China (English)

    陆宏伟; 陈亚珠; 卫青

    2004-01-01

    Probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, it is the first time that the PDF method is put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6-6.5 dimensional complex dynamical systems. It is found that PDF is not symmetrically distributed when the time delay is small, while PDF satisfies a Gaussian distribution when the time delay is large enough. A cluster effect mechanism is presented to explain this phenomenon. By studying the shape of PDFs, it is clearly indicated that the time delay plays a more important role than the embedding dimension in the reconstruction. Results have demonstrated that the PDF method represents a promising numerical approach for the observation of the reconstructed attractor structure and may provide more information and new diagnostic potential of the analyzed cardiac system.
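The reconstruction referred to above is the standard time-delay (Takens) embedding: the scalar series is turned into m-dimensional phase points [x(i), x(i+τ), ..., x(i+(m−1)τ)], whose spatial distribution the PDF then characterises. A minimal sketch with a toy signal in place of RR intervals (function name and parameters are illustrative):

```python
import numpy as np

def delay_embed(x, m, tau):
    """Build the (N - (m-1)*tau) x m matrix of delay vectors
    [x[i], x[i+tau], ..., x[i+(m-1)*tau]] that reconstructs the attractor."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

# Toy series (assumption) instead of measured RR intervals: a noisy sine.
rng = np.random.default_rng(1)
x = np.sin(0.05 * np.arange(20_000)) + 0.1 * rng.standard_normal(20_000)

emb = delay_embed(x, m=6, tau=30)            # 6-dimensional reconstruction
# Empirical PDF of one embedded coordinate difference (normalised histogram):
pdf, edges = np.histogram(emb[:, 0] - emb[:, 1], bins=50, density=True)
print(emb.shape)
```

With such a PDF in hand one can study its symmetry as a function of τ, which is the diagnostic the abstract describes (asymmetric for small delay, Gaussian for large delay).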

  9. Probability density function (Pdf) of daily rainfall depths by means of superstatistics of hydro-climatic fluctuations for African test cities

    Science.gov (United States)

    Topa, M. E.; De Paola, F.; Giugni, M.; Kombe, W.; Touré, H.

    2012-04-01

    The dynamics of hydro-climatic processes fluctuate over a wide range of temporal scales. Such fluctuations are often unpredictable for ecosystems, and adapting to them represents a great challenge for the survival and stability of species. An unsolved issue is how much these fluctuations of climatic variables at different temporal scales can influence the frequency and intensity of extreme events, and how much these events can modify the life of ecosystems. It is now widely accepted that an increase in the frequency and intensity of extreme events will be one of the strongest characteristics of global climatic change, with major social and biotic implications (Porporato et al. 2006). Recent field experiments (Gutshick and BassiriRad, 2003) and numerical analyses (Porporato et al. 2004) have shown that extreme events can have non-negligible consequences for organisms of water-limited ecosystems. Adaptation measures and the responses of species and ecosystems to hydro-climatic variations are therefore strongly interconnected with the probabilistic structure of these fluctuations. Generally, the nonlinear intermittent dynamics of a state variable z (a rainfall depth or the interarrival time between two storms) at short time scales (for example, daily) is described by a probability density function (pdf) p(z|υ), where υ is the parameter of the distribution. If the parameter υ itself varies, so that the external forcing fluctuates at a longer temporal scale, z reaches a new "local" equilibrium. When the temporal scale of the variation of υ is larger than that of z, the probability distribution of z can be obtained as a superposition of the temporary equilibria (the "superstatistics" approach), i.e.: p(z) = ∫ p(z|υ)·φ(υ) dυ (1) where p(z|υ) is the probability of z conditioned on υ, while φ(υ) is the pdf of υ (Beck, 2001; Benjamin and Cornell, 1970). The present work, carried out within FP7-ENV-2010 CLUVA (CLimate Change
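The superposition in Eq. (1) is easy to verify numerically. As an illustrative choice (not the paper's actual distributions), take an exponential conditional pdf p(z|υ) with fluctuating rate υ and a gamma pdf φ(υ); the mixture then has the heavy-tailed Lomax closed form aθ/(1+θz)^(a+1), showing how slow parameter fluctuations fatten the tails that govern extreme events:

```python
import numpy as np
from math import gamma

a, theta = 2.0, 1.5          # shape/scale of the gamma pdf phi(upsilon) (illustrative values)
trapz = lambda y, x: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def p_cond(z, lam):
    """p(z|upsilon): exponential pdf of z with fluctuating rate lam."""
    return lam * np.exp(-lam * z)

def phi(lam):
    """phi(upsilon): gamma pdf of the fluctuating rate."""
    return lam**(a - 1) * np.exp(-lam / theta) / (gamma(a) * theta**a)

lam = np.linspace(1e-9, 60.0, 200_001)       # integration grid over the rate parameter
for z in (0.1, 1.0, 5.0):
    p_super = trapz(p_cond(z, lam) * phi(lam), lam)     # Eq. (1), numerically
    p_exact = a * theta / (1.0 + theta * z)**(a + 1)    # Lomax closed form
    print(f"z = {z}: superstatistic {p_super:.6f}, closed form {p_exact:.6f}")
```

The power-law tail of the mixture, versus the exponential tail of any single conditional pdf, is exactly the mechanism by which superstatistics amplifies the probability of extremes.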

  10. Probability in quantum mechanics

    Directory of Open Access Journals (Sweden)

    J. G. Gilson

    1982-01-01

    By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.

  11. On the inclusive annihilation of polarized e+e--pair with two observed hadrons

    International Nuclear Information System (INIS)

    The general consideration of the inclusive annihilation of a polarized e+e- pair with two observed hadrons in the final state (e+e- → h1h2X) is carried out. The annihilation cross section is expressed in terms of five structure functions describing the transition γ* → h1h2X. The partial widths of the corresponding decay of a virtual photon for different polarizations of the photon are also introduced, and the annihilation cross section is written through these widths. The density matrix of the virtual photon and its polarization multipole moments are given as well

  12. Effect of positron-atom interactions on the annihilation gamma spectra of molecules

    CERN Document Server

    Green, D G; Wang, F; Gribakin, G F; Surko, C M

    2012-01-01

    Calculations of gamma spectra for positron annihilation on a selection of molecules, including methane and its fluoro-substitutes, ethane, propane, butane and benzene are presented. The annihilation gamma spectra characterise the momentum distribution of the electron-positron pair at the instant of annihilation. The contribution to the gamma spectra from individual molecular orbitals is obtained from electron momentum densities calculated using modern computational quantum chemistry density functional theory tools. The calculation, in its simplest form, effectively treats the low-energy (thermalised, room-temperature) positron as a plane wave and gives annihilation gamma spectra that are about 40% broader than experiment, although the main chemical trends are reproduced. We show that this effective "narrowing" of the experimental spectra is due to the action of the molecular potential on the positron, chiefly, due to the positron repulsion from the nuclei. It leads to a suppression of the contribution of smal...
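The projection the abstract describes (the gamma line shape as the longitudinal-momentum distribution of the annihilating pair) can be sketched for an isotropic model density: the 1D line shape is J(q) = 2π ∫_{|q|}^∞ p ρ(p) dp, and one atomic unit of longitudinal momentum corresponds to a Doppler shift ε = c p_∥/2 ≈ 1.86 keV. The Gaussian model density below is an illustration, not one of the paper's molecular orbitals:

```python
import numpy as np

trapz = lambda y, x: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

sigma = 1.0                    # width of the model momentum density, atomic units (assumed)
E_PER_AU = 1.86                # Doppler shift in keV per a.u. of longitudinal momentum

def projected(q, sigma):
    """1D line shape J(q) = 2*pi * int_{|q|}^inf p * rho(p) dp for isotropic rho."""
    p = np.linspace(abs(q), abs(q) + 12 * sigma, 20_001)
    rho = np.exp(-p**2 / (2 * sigma**2))           # Gaussian model density
    return 2 * np.pi * trapz(p * rho, p)

# For a Gaussian rho the projection is again Gaussian with the SAME sigma,
# i.e. J(q) = 2*pi*sigma^2*exp(-q^2/(2*sigma^2)): an energy width of 1.86*sigma keV.
for q in (0.0, 1.0, 2.0):
    print(f"p_par = {q} a.u. (eps = {q * E_PER_AU:.2f} keV): J = {projected(q, sigma):.4f}")
```

In the paper's calculation ρ(p) comes from the molecular-orbital momentum densities; the ~40% broadening discrepancy then reflects the missing positron-molecule interaction, not the projection step itself.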

  13. New Limits on Thermally annihilating Dark Matter from Neutrino Telescopes

    CERN Document Server

    Lopes, José

    2016-01-01

    We used a consistent and robust solar model to obtain upper limits placed by neutrino telescopes, such as IceCube and Super-Kamiokande, on the Dark Matter-nucleon scattering cross-section, for a general model of Dark Matter with a velocity-dependent (p-wave) thermally averaged cross-section. In this picture, the Boltzmann equation for the Dark Matter abundance is numerically solved satisfying the Dark Matter density measured from the Cosmic Microwave Background (CMB). We show that for lower cross-sections and higher masses, the Dark Matter annihilation rate drops sharply, resulting in upper bounds on the scattering cross-section one order of magnitude above those derived from a velocity-independent (s-wave) annihilation cross-section. Our results show that upper limits on the scattering cross-section obtained from Dark Matter annihilating in the Sun are sensitive to the uncertainty in current standard solar models, varying by at most 20% depending on the annihilation channel.

  14. Antiproton annihilation in quantum chromodynamics

    International Nuclear Information System (INIS)

    Antiproton annihilation has a number of important advantages as a probe of QCD in the low-energy domain, in particular through exclusive reactions in which complete annihilation of the valence quarks occurs. There are a number of exclusive and inclusive p̄ reactions in the intermediate momentum transfer domain which provide useful constraints on hadron wavefunctions or test novel features of QCD involving both perturbative and nonperturbative dynamics. Inclusive reactions involving antiprotons have the advantage that the parton distributions are well understood. In these lectures, I will particularly focus on lepton pair production p̄A → ℓℓ̄X as a means to understand specific nuclear features in QCD, including collision broadening and the breakdown of the QCD ''target length condition''. Thus studies of low- to moderate-energy antiproton reactions with laboratory energies under 10 GeV could give further insights into the full structure of QCD. 112 refs., 40 figs

  15. Dark matter annihilation in the gravitational field of a black hole

    OpenAIRE

    Baushev, A. N.

    2008-01-01

    In this paper we consider dark matter particle annihilation in the gravitational field of black holes. We obtain the exact distribution function of the infalling dark matter particles, and compute the resulting flux and spectra of gamma rays coming from the objects. It is shown that the dark matter density significantly increases near a black hole. The particle collision energy becomes very high, affecting the relative cross-sections of various annihilation channels. We also discuss possible experimental ...

  16. A Comprehensive Search for Dark Matter Annihilation in Dwarf Galaxies

    CERN Document Server

    Geringer-Sameth, Alex; Walker, Matthew G

    2014-01-01

    We present a new formalism designed to discover dark matter annihilation occurring in the Milky Way's dwarf galaxies. The statistical framework extracts all available information in the data by simultaneously combining observations of all the dwarf galaxies and incorporating the impact of particle physics properties, the distribution of dark matter in the dwarfs, and the detector response. The method performs maximally powerful frequentist searches and produces confidence limits on particle physics parameters. Probability distributions of test statistics under various hypotheses are constructed exactly, without relying on large sample approximations. The derived limits have proper coverage by construction and claims of detection are not biased by imperfect background modeling. We implement this formalism using data from the Fermi Gamma-ray Space Telescope to search for an annihilation signal in the complete sample of Milky Way dwarfs whose dark matter distributions can be reliably determined. We find that the...

  17. Quantum probability

    CERN Document Server

    Gudder, Stanley P

    2014-01-01

    Quantum probability is a subtle blend of quantum mechanics and classical probability theory. Its important ideas can be traced to the pioneering work of Richard Feynman in his path integral formalism.Only recently have the concept and ideas of quantum probability been presented in a rigorous axiomatic framework, and this book provides a coherent and comprehensive exposition of this approach. It gives a unified treatment of operational statistics, generalized measure theory and the path integral formalism that can only be found in scattered research articles.The first two chapters survey the ne

  18. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Levy processes, Gerber–Shiu functions and dependence.
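The classical compound Poisson model mentioned above has a closed-form infinite-horizon ruin probability when claims are exponential: ψ(u) = (λμ/c)·exp(−(1/μ − λ/c)·u), with claim rate λ, mean claim size μ, premium rate c > λμ and initial reserve u. A minimal Monte Carlo cross-check (all parameter values are illustrative assumptions):

```python
import numpy as np

lam, mu, c = 1.0, 1.0, 1.5        # claim rate, mean claim size, premium rate (assumed)
u = 2.0                            # initial reserve (assumed)

# Exact infinite-horizon ruin probability for exponential claims.
psi_exact = (lam * mu / c) * np.exp(-(1.0 / mu - lam / c) * u)

# Monte Carlo: the surplus only jumps down at claim instants, so it suffices
# to track it right after each claim; a long but finite claim horizon
# approximates the infinite-horizon probability (positive safety loading).
rng = np.random.default_rng(7)
n_paths, n_claims = 20_000, 200
T = rng.exponential(1.0 / lam, size=(n_paths, n_claims))   # inter-claim times
X = rng.exponential(mu, size=(n_paths, n_claims))          # claim sizes
surplus = u + c * np.cumsum(T, axis=1) - np.cumsum(X, axis=1)
psi_mc = np.mean(surplus.min(axis=1) < 0.0)

print(f"exact psi({u}) = {psi_exact:.4f}, Monte Carlo = {psi_mc:.4f}")
```

Checking the surplus only at claim epochs is exact here because premiums flow in continuously, so ruin can only occur at the instant a claim arrives.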

  19. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  20. Importance of non-local electron-positron correlations for positron annihilation characteristics in solids

    Energy Technology Data Exchange (ETDEWEB)

    Rubaszek, A. [Polska Akademia Nauk, Wroclaw (Poland). Inst. Niskich Temperatur i Badan Strukturalnych; Szotek, Z.; Temmerman, W.M. [Daresbury Lab., Warrington (United Kingdom)

    2001-07-01

    Several methods to describe the electron-positron (e-p) correlation effects are used in calculations of positron annihilation characteristics in solids. The weighted density approximation (WDA), giving rise to the non-local, state-selective e-p correlation functions, is applied to calculate positron annihilation rates and e-p momentum densities in a variety of metals and silicon. The WDA results are compared to the results of other methods such as the independent particle model, local density approximation, generalised gradient approximation, and also to experiments. The importance of non-locality and state-dependence of the e-p correlation functions is discussed. (orig.)

  1. Correlations between Positron Annihilation Parameters and Macroscopic Properties in Copolymers Belonging to Elastomers Group

    Science.gov (United States)

    Krzemień, K.; Kansy, J.

    2008-05-01

    The positron annihilation lifetime spectroscopy was used to study correlations between positron annihilation parameters and macroscopic properties in two kinds of polymers from the elastomers group. Two kinds of material were investigated: three samples of ethylene-octene copolymers (commercial name Engage) of different densities, and six samples of polybutylene terephthalate-polyether glycol copolymers (Hytrel) having different densities. A correlation between the intensity of the ortho-positronium component and the density (d) of the samples was observed for both kinds of material. From the ortho-positronium pick-off lifetime the mean radii (R) of free volume centers were determined. A good linear correlation between R and d was found.

  2. Ultra-high energy cosmic rays: The annihilation of super-heavy relics

    International Nuclear Information System (INIS)

    We investigate the possibility that ultra-high energy cosmic rays (UHECRs) originate from the annihilation of relic superheavy (SH) dark matter in the Galactic halo. In order to fit the data on UHECRs, a cross section of ⟨σ_A v⟩ ≳ 10^(-26) cm^2 (M_X/10^(12) GeV)^(3/2) is required if the SH dark matter follows a Navarro-Frenk-White (NFW) density profile. This would require extremely large-l contributions to the annihilation cross section. An interesting finding of our calculation is that the annihilation in sub-galactic clumps of dark matter dominates over the annihilations in the smooth dark matter halo, thus implying much smaller values of the cross section needed to explain the observed fluxes of UHECRs

  3. Dark Stars and Boosted Dark Matter Annihilation Rates

    CERN Document Server

    Ilie, Cosmin; Spolyar, Douglas

    2010-01-01

    Dark Stars (DS) may constitute the first phase of stellar evolution, powered by dark matter (DM) annihilation. We will investigate here the properties of DS assuming the DM particle has the required properties to explain the excess positron and electron signals in the cosmic rays detected by the PAMELA and FERMI satellites. Any possible DM interpretation of these signals requires exotic DM candidates, with annihilation cross sections a few orders of magnitude higher than the canonical value required for correct thermal relic abundance for Weakly Interacting Dark Matter candidates; additionally in most models the annihilation must be preferentially to leptons. Secondly, we study the dependence of DS properties on the concentration parameter of the initial DM density profile of the halos where the first stars are formed. We restrict our study to the DM in the star due to simple (vs. extended) adiabatic contraction and minimal (vs. extended) capture; this simple study is sufficient to illustrate depend...

  4. High nuclear temperatures by antimatter-matter annihilation

    International Nuclear Information System (INIS)

    It is suggested that the quark-gluon phase be created through the use of antiproton or antideuteron beams. The first advantage of this method, which uses antiprotons with momenta above 1.5 GeV/c, is that the higher-momentum antiprotons penetrate more deeply, so that the mesons produced are more nearly contained within the nucleus. Another advantage is that the annihilation products are very forward-peaked and tend to form a beam of mesons, so that the energy density does not disperse very rapidly. Calculations were performed using the intranuclear cascade to try to follow the process of annihilation in some detail. The intranuclear cascade type calculation method is compared to the hydrodynamic approach. 8 refs., 8 figs

  5. High nuclear temperatures by antimatter-matter annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Gibbs, W.R.; Strottman, D.

    1985-01-01

    It is suggested that the quark-gluon phase be created through the use of antiproton or antideuteron beams. The first advantage of this method, which uses antiprotons with momenta above 1.5 GeV/c, is that the higher-momentum antiprotons penetrate more deeply, so that the mesons produced are more nearly contained within the nucleus. Another advantage is that the annihilation products are very forward-peaked and tend to form a beam of mesons, so that the energy density does not disperse very rapidly. Calculations were performed using the intranuclear cascade to try to follow the process of annihilation in some detail. The intranuclear cascade type calculation method is compared to the hydrodynamic approach. 8 refs., 8 figs. (LEW)

  6. Dark matter distribution and annihilation at the Galactic center

    Science.gov (United States)

    Dokuchaev, V. I.; Eroshenko, Yu N.

    2016-02-01

    We describe a promising method for measuring the total dark matter mass near a supermassive black hole at the Galactic center based on observations of nonrelativistic precession of the orbits of fast S0 stars. An analytical expression for the precession angle has been obtained under the assumption of a power-law profile of the dark matter density. The anticipated weighing of the dark matter at the Galactic center provides strong constraints on the annihilation signal from the neutralino dark matter particle candidate. The mass of the dark matter necessary for the explanation of the observed excess of gamma-radiation owing to the annihilation of the dark matter particles has been calculated with allowance for the Sommerfeld effect.

  7. Gas Permeations Studied by Positron Annihilation

    Science.gov (United States)

    Yuan, Jen-Pwu; Cao, Huimin; Jean, X.; Yang, Y. C.

    1997-03-01

    The hole volumes and fractions of PC and PET polymers are measured by positron annihilation lifetime spectroscopy. Direct correlations between the measured hole properties and gas permeabilities are observed. Applications of positron annihilation spectroscopy to study gas transport and separation of polymeric materials will be discussed.

  8. Positron Annihilation in the Bipositronium Ps2

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Frolov, Alexei M.

    2005-07-01

    The electron-positron-pair annihilation in the bipositronium Ps2 is considered. In particular, the two-, three-, one- and zero-photon annihilation rates are determined to high accuracy. The corresponding analytical expressions are also presented. Also, a large number of bound-state properties have been determined for this system.

  9. Bright gamma-ray Galactic Center excess and dark dwarfs: Strong tension for dark matter annihilation despite Milky Way halo profile and diffuse emission uncertainties

    Science.gov (United States)

    Abazajian, Kevork N.; Keeley, Ryan E.

    2016-04-01

    We incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties, in dark matter annihilation models for the Galactic Center Extended gamma-ray excess (GCE) detected by the Fermi Gamma-ray Space Telescope. The range of allowed particle annihilation rates and masses expands when these unknowns are included. However, two of the most precise empirical determinations of the Milky Way halo's local density and density profile leave the signal region in considerable tension with dark matter annihilation searches from combined dwarf-galaxy analyses for single-channel dark matter annihilation models. The tension between the GCE and the dwarfs can be alleviated if: one, the halo is very highly concentrated or strongly contracted; two, the dark matter annihilation signal differentiates between dwarfs and the GC; or, three, local stellar density measures are found to be significantly lower, as in recent stellar counts, which would increase the local dark matter density.

  10. Weak annihilation cusp inside the dark matter spike about a black hole

    Science.gov (United States)

    Shapiro, Stuart L.; Shelton, Jessie

    2016-06-01

    We reinvestigate the effect of annihilations on the distribution of collisionless dark matter (DM) in a spherical density spike around a massive black hole. We first construct a very simple, pedagogic, analytic model for an isotropic phase space distribution function that accounts for annihilation and reproduces the "weak cusp" found by Vasiliev for DM deep within the spike and away from its boundaries. The DM density in the cusp varies as r^{-1/2} for s-wave annihilation, where r is the distance from the central black hole, and is not a flat "plateau" profile. We then extend this model by incorporating a loss cone that accounts for the capture of DM particles by the hole. The loss cone is implemented by a boundary condition that removes capture orbits, resulting in an anisotropic distribution function. Finally, we evolve an initial spike distribution function by integrating the Boltzmann equation to show how the weak cusp grows and its density decreases with time. We treat two cases, one for s-wave and the other for p-wave DM annihilation, adopting parameters characteristic of the Milky Way nuclear core and typical WIMP models for DM. The cusp density profile for p-wave annihilation is weaker, varying like ~r^{-0.34}, but is still not a flat plateau.
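The two cusp slopes quoted above can be captured in a one-line power-law profile. The function and the normalization values below are illustrative assumptions; only the exponents (1/2 for s-wave, ≈0.34 for p-wave) come from the abstract:

```python
def cusp_density(r, r_ann, rho_ann, slope):
    """Annihilation-cusp density inside the spike, normalized at r_ann:
    rho(r) = rho_ann * (r / r_ann)**(-slope).
    slope = 0.5 for s-wave, ~0.34 for p-wave annihilation (per the abstract)."""
    return rho_ann * (r / r_ann) ** (-slope)

# Deeper in the cusp (smaller r) the density keeps rising rather than
# flattening to a plateau; the p-wave cusp rises more slowly.
inner_s = cusp_density(0.01, r_ann=1.0, rho_ann=1.0, slope=0.5)
inner_p = cusp_density(0.01, r_ann=1.0, rho_ann=1.0, slope=0.34)
```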

  11. New techniques of positron annihilation

    International Nuclear Information System (INIS)

    Studies on new techniques of positron annihilation and their application to various fields are presented. First, the production of slow positrons and their characteristic features are described. Slow positrons can be obtained from radioisotopes by using a positron moderator, by proton beam bombardment on a boron target, and by pair production using an electron linear accelerator. Brightness enhancement of the slow positron beam is studied. A polarized positron beam can be used for the study of the momentum distribution of electrons in ferromagnetic substances. Production of polarized positrons and measurements of polarization are discussed. Various aspects of the interaction between slow positrons and atoms (or molecules) are described. A comparative study of electron scavenging effects on luminescence and on positronium formation in cyclohexane is presented. Positron annihilation phenomena are also applicable to surface studies; microscopic information on the surface of porous materials may be obtained, and slow positrons are likewise useful for such studies. Production and application of slow muons (positive and negative) are also presented in this report. (Kato, T.)

  12. Skyrmion creation and annihilation by spin waves

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yizhou, E-mail: yliu062@ucr.edu; Yin, Gen; Lake, Roger K., E-mail: rlake@ece.ucr.edu [Department of Electrical and Computer Engineering, University of California, Riverside, California 92521 (United States); Zang, Jiadong [Department of Physics and Material Science Program, University of New Hampshire, Durham, New Hampshire 03824 (United States); Shi, Jing [Department of Physics and Astronomy, University of California, Riverside, California 92521 (United States)

    2015-10-12

    Single skyrmion creation and annihilation by spin waves in a crossbar geometry are theoretically analyzed. A critical spin-wave frequency is required both for the creation and the annihilation of a skyrmion. The minimum frequencies for creation and annihilation are similar, but the optimum frequency for creation is below the critical frequency for skyrmion annihilation. If a skyrmion already exists in the cross bar region, a spin wave below the critical frequency causes the skyrmion to circulate within the central region. A heat assisted creation process reduces the spin-wave frequency and amplitude required for creating a skyrmion. The effective field resulting from the Dzyaloshinskii-Moriya interaction and the emergent field of the skyrmion acting on the spin wave drive the creation and annihilation processes.

  13. Fermionic Semi-Annihilating Dark Matter

    CERN Document Server

    Cai, Yi

    2015-01-01

    Semi-annihilation is a generic feature of dark matter theories with symmetries larger than Z2. We investigate two examples with multi-component dark sectors comprised of an SU(2)L singlet or triplet fermion besides a scalar singlet. These are respectively the minimal fermionic semi-annihilating model, and the minimal case for a gauge-charged fermion. We study the relevant dark matter phenomenology, including the interplay of semi-annihilation and the Sommerfeld effect. We demonstrate that semi-annihilation in the singlet model can explain the gamma ray excess from the galactic center. For the triplet model we scan the parameter space, and explore how signals and constraints are modified by semi-annihilation. We find that the entire region where the model comprises all the observed dark matter is accessible to current and planned direct and indirect searches.

  14. Nature of chemical bond through positron annihilation

    International Nuclear Information System (INIS)

    Positron annihilation is an important alternative to Compton scattering for the determination of electron momentum distributions. The possibility of studying the nature of the chemical bond by the positron annihilation technique is reviewed in this paper. General concepts connected with momentum space and the chemical bond are outlined. Estimation of the positron wavefunction at carbon and hydrogen sites and the calculation of the electron momentum distributions of C-H and C-C bonds are discussed. Annihilation with σ electrons broadens the angular correlation curve, while annihilation with π electrons narrows it. The most significant part of this paper is the investigation of the participation of the d-orbital of sulphur in chemical bonding. Whether or not ligand perturbation is necessary for d-orbital contraction and consequent participation in bonding remains controversial. A study of the angular correlation of positron annihilation radiation in organic sulphides and sulphones provides direct evidence that ligand perturbation is necessary. (author)

  15. Application of tests of goodness of fit in determining the probability density function for spacing of steel sets in tunnel support system

    Directory of Open Access Journals (Sweden)

    Farnoosh Basaligheh

    2015-12-01

    One of the conventional methods for temporary support of tunnels is to use steel sets with shotcrete. The nature of a temporary support system demands a quick installation of its structures. As a result, the spacing between steel sets is not a fixed amount and it can be considered as a random variable. Hence, in the reliability analysis of these types of structures, the selection of an appropriate probability distribution function of spacing of steel sets is essential. In the present paper, the distances between steel sets are collected from an under-construction tunnel and the collected data is used to suggest a proper Probability Distribution Function (PDF) for the spacing of steel sets. The tunnel has two different excavation sections. In this regard, different distribution functions were investigated and three common tests of goodness of fit were used for evaluation of each function for each excavation section. Results from all three methods indicate that the Wakeby distribution function can be suggested as the proper PDF for spacing between the steel sets. It is also noted that, although the probability distribution function for two different tunnel sections is the same, the parameters of PDF for the individual sections are different from each other.
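The distribution-selection workflow described above (fit several candidate PDFs, then rank them with a goodness-of-fit statistic) can be sketched with SciPy. Note that SciPy does not ship a Wakeby distribution, so the candidates below are stand-ins, and the synthetic `spacings` replace the field measurements; everything here is an assumed illustration of the procedure, not the paper's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for measured steel-set spacings (m); real values would be
# field measurements from the tunnel.
spacings = rng.gamma(shape=9.0, scale=0.1, size=200)

# Candidate distributions (SciPy has no Wakeby; these only illustrate selection)
candidates = {
    "norm": stats.norm,
    "lognorm": stats.lognorm,
    "weibull_min": stats.weibull_min,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(spacings)                     # maximum-likelihood fit
    ks = stats.kstest(spacings, name, args=params)  # Kolmogorov-Smirnov statistic
    results[name] = ks.statistic                    # smaller = better fit

best = min(results, key=results.get)
```

In practice one would repeat this for each excavation section and compare the fitted parameters, as the abstract notes they differ between sections.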

  16. Size-dependent momentum smearing effect of positron annihilation radiation in embedded nano Cu clusters

    International Nuclear Information System (INIS)

    Momentum density distributions determined by the analysis of positron annihilation radiation in nano Cu clusters embedded in iron were studied by using a first-principles method. A momentum smearing effect originating from positron localization in the embedded clusters is observed. The smearing effect is found to scale linearly with the cube root of the cluster's volume, indicating that the momentum density techniques of positron annihilation can be employed to explore the atomic-scale microscopic structures of a variety of impurity aggregations in materials.

  17. Positron annihilation in transparent ceramics

    Science.gov (United States)

    Husband, P.; Bartošová, I.; Slugeň, V.; Selim, F. A.

    2016-01-01

    Transparent ceramics are emerging as excellent candidates for many photonic applications, including lasers, scintillation and illumination. However, achieving perfect transparency is essential in these applications and requires high-technology processing and a complete understanding of the ceramic microstructure and its effect on the optical properties. Positron annihilation spectroscopy (PAS) is a well-suited tool to study porosity and defects. It has been applied to investigate many ceramic structures, and the transparent ceramics field may be greatly advanced by applying PAS. In this work, positron lifetime (PLT) measurements were carried out in parallel with optical studies on yttrium aluminum garnet transparent ceramics in order to gain an understanding of their structure at the atomic level and its effect on transparency and light scattering. The study confirmed that PAS can provide useful information on their microstructure and guide the technology of manufacturing and advancing transparent ceramics.

  18. AC quantum efficiency harmonic analysis of exciton annihilation in organic light emitting diodes (Presentation Recording)

    Science.gov (United States)

    Giebink, Noel C.

    2015-10-01

    Exciton annihilation processes impact both the lifetime and the efficiency roll-off of organic light emitting diodes (OLEDs); however, it is notoriously difficult to identify the dominant mode of annihilation in operating devices (exciton-exciton vs. exciton-charge carrier) and subsequently to disentangle its magnitude from competing roll-off processes such as charge imbalance. Here, we introduce a simple analytical method to directly identify and extract OLED annihilation rates from standard light-current-voltage (LIV) measurement data. The foundation of this approach lies in a frequency-domain EQE analysis and is most easily understood in analogy to impedance spectroscopy, where in this case both the current (J) and the electroluminescence intensity (L) are measured using a lock-in amplifier at different harmonics of the sinusoidal dither superimposed on the DC device bias. In the presence of annihilation, the relationship between recombination current and light output (proportional to exciton density) becomes nonlinear, thereby mixing the different EQE harmonics in a manner that depends uniquely on the type and magnitude of annihilation. We derive simple expressions to extract different annihilation rate coefficients and apply this technique to a variety of OLEDs. For example, in devices dominated by triplet-triplet annihilation, the annihilation rate coefficient K_TT is obtained directly from the linear slope that results from plotting EQE_DC − EQE_1ω versus L_DC(2·EQE_1ω − EQE_DC). We go on to show that, in certain cases, it is sufficient to calculate EQE_1ω directly from the slope of the DC light-versus-current curve (i.e., via dL_DC/dJ_DC), thus enabling this analysis to be conducted solely from common LIV measurement data.
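A hedged sketch of the slope extraction described above: in the TTA-dominated regime the abstract states that EQE_DC − EQE_1ω is linear in L_DC(2·EQE_1ω − EQE_DC), with a slope related to K_TT. The synthetic numbers below merely stand in for lock-in LIV data, and the proportionality constant linking the fitted slope to K_TT is omitted:

```python
import numpy as np

# Synthetic stand-ins for the measured harmonics (arb. units); in practice
# EQE_dc, EQE_1w and L_dc come from lock-in LIV measurements.
eqe_dc = 0.200
eqe_1w = np.linspace(0.199, 0.180, 25)   # first-harmonic EQE rolling off with drive
l_dc = np.linspace(5.0, 120.0, 25)       # DC light output

# Abscissa and ordinate of the plot described in the abstract
x = l_dc * (2.0 * eqe_1w - eqe_dc)
y = eqe_dc - eqe_1w

# Linear fit; the slope is the quantity proportional to K_TT
slope, intercept = np.polyfit(x, y, 1)
```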

  19. Positron annihilation lifetime study of oxide dispersion strengthened steels

    Science.gov (United States)

    Krsjak, V.; Szaraz, Z.; Hähner, P.

    2012-09-01

    A comparative positron annihilation lifetime study has been performed on various commercial ferritic and ferritic/martensitic oxide dispersion strengthened (ODS) steels. Both as-extruded and recrystallized materials were investigated. In the materials with recrystallized coarse-grained microstructures, only positron trapping at small vacancy clusters and yttria nanofeatures was observed. Materials which had not undergone recrystallization treatment clearly showed additional positron trapping, which is associated with dislocations. Dislocation densities were calculated from a two-component decomposition of the positron lifetime spectra by assuming the first component to be a superposition of the bulk-controlled annihilation rate and the dislocation-controlled trapping rate. The second component (which translates into lifetimes of 240-260 ps) was found to be well separated in all of the ODS materials. This paper presents the capabilities and limitations of positron annihilation lifetime spectroscopy, and discusses the results of the experimental determination of the defect concentrations and the sensitivity of this technique to material degradation due to thermally induced precipitation of chromium-rich α' phases.
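The two-component decomposition feeds the standard two-state trapping model, in which the trapping rate is κ = (I₂/I₁)(1/τ_bulk − 1/τ₂) and the dislocation density is ρ = κ/μ_d. The sketch below assumes illustrative lifetimes and intensities and an order-of-magnitude specific trapping coefficient μ_d; none of these numbers are from the paper:

```python
def dislocation_density(tau_bulk, tau_d, i_bulk, i_d, mu_d):
    """Two-state trapping model: trapping rate
    kappa = (I_d / I_bulk) * (1/tau_bulk - 1/tau_d), dislocation density
    rho = kappa / mu_d. Lifetimes in s, intensities as fractions; mu_d is
    the specific trapping coefficient (m^2/s), material dependent."""
    kappa = (i_d / i_bulk) * (1.0 / tau_bulk - 1.0 / tau_d)
    return kappa / mu_d

# Assumed illustrative values: ~110 ps bulk lifetime, a 250 ps dislocation
# component (within the 240-260 ps range quoted), 30% defect intensity,
# and mu_d = 1e-4 m^2/s as an order-of-magnitude guess.
rho_d = dislocation_density(tau_bulk=110e-12, tau_d=250e-12,
                            i_bulk=0.70, i_d=0.30, mu_d=1.0e-4)
```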

  20. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    Science.gov (United States)

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single-channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage-clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo-)experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. PMID:27154008
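The inversion strategy above (tune Markov rates so that the model's probability densities match the data, by minimizing a cost function) can be illustrated on the simplest possible case: a two-state closed↔open channel relaxing to equilibrium. The rates and the pseudo-experimental trace are assumed for the demonstration, and a scalar least-squares cost replaces the paper's PDE-based cost:

```python
import numpy as np
from scipy.optimize import minimize

def open_prob(t, k_co, k_oc, p0=0.0):
    """Open-state probability of a two-state (closed <-> open) Markov channel
    relaxing from p0, governed by dp/dt = k_co*(1 - p) - k_oc*p."""
    p_inf = k_co / (k_co + k_oc)
    return p_inf + (p0 - p_inf) * np.exp(-(k_co + k_oc) * t)

# Pseudo-experimental trace generated with assumed "true" rates
t = np.linspace(0.0, 0.5, 100)
p_data = open_prob(t, k_co=8.0, k_oc=2.0)

def cost(rates):
    """Least-squares mismatch between model and (pseudo) experimental data."""
    k_co, k_oc = rates
    return np.sum((open_prob(t, k_co, k_oc) - p_data) ** 2)

fit = minimize(cost, x0=[1.0, 1.0], bounds=[(1e-6, None)] * 2)
k_co_fit, k_oc_fit = fit.x
```

Here both rates are identifiable because the trace constrains both the relaxation rate (k_co + k_oc) and the steady-state probability k_co/(k_co + k_oc), mirroring the identifiability question raised in the abstract.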

  1. An automated technique for most-probable-number (MPN) analysis of densities of phagotrophic protists with lux-AB labelled bacteria as growth medium

    DEFF Research Database (Denmark)

    Ekelund, Flemming; Christensen, Søren; Rønn, Regin; Buhl, E.; Jacobsen, C.S.

    1999-01-01

    An automated modification of the most-probable-number (MPN) technique has been developed for enumeration of phagotrophic protozoa. The method is based on detection of prey depletion in micro titre plates rather than on presence of protozoa. A transconjugant Pseudomonas fluorescens DR54 labelled...... with a luxAB gene cassette was constructed, and used as growth medium for the protozoa in the micro titre plates. The transconjugant produced high amounts of luciferase which was stable and allowed detection for at least 8 weeks. Dilution series of protozoan cultures and soil suspensions were...
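The MPN estimate underlying the technique can be computed by maximum likelihood: a well inoculated with volume v from a suspension of density λ is positive with probability 1 − exp(−λv). The dilution scheme and counts below are assumed illustrations, not data from the study:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mpn(volumes, positive, tubes):
    """Most-probable-number estimate (organisms per unit volume) by maximum
    likelihood, assuming a well receiving volume v is positive with
    probability 1 - exp(-lam * v)."""
    v = np.asarray(volumes, dtype=float)
    p = np.asarray(positive, dtype=float)
    n = np.asarray(tubes, dtype=float)

    def neg_log_lik(lam):
        pr = np.clip(1.0 - np.exp(-lam * v), 1e-12, 1.0)
        # log L = sum[ p*log(pr) - (n - p)*lam*v ]  (negative wells are exact)
        return -np.sum(p * np.log(pr) - (n - p) * lam * v)

    res = minimize_scalar(neg_log_lik, bounds=(1e-3, 1e5), method="bounded")
    return res.x

# Assumed tenfold dilution series (ml per well), 8 wells per dilution,
# as in a micro-titre-plate MPN setup.
density = mpn(volumes=[0.1, 0.01, 0.001], positive=[8, 5, 1], tubes=[8, 8, 8])
```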

  2. Ratio of secondary baryon and meson yields in e+e- annihilation and quark combinatorics

    International Nuclear Information System (INIS)

    The possibility of using quark combinatorial counting to determine the probability that a separated quark ends up in a baryon or a meson when it combines with the sea of quark-antiquark pairs is examined. It is shown that the p/π+ yield ratio, calculated in the framework of quark combinatorics, agrees with the data on antiproton and pion production in e+e- annihilation. In the authors' opinion, quark combinatorial counting predicts a large cross section for Ω- hyperon production, and a study of baryon yield ratios in e+e- annihilation is needed to test quark combinatorics.

  3. Nonabelian dark matter with resonant annihilation

    International Nuclear Information System (INIS)

    We construct a model based on an extra gauge symmetry, SU(2)_X × U(1)_{B−L}, which can provide gauge bosons to serve as weakly-interacting massive particle dark matter. The stability of the dark matter is naturally guaranteed by a discrete Z_2 symmetry that is a subgroup of SU(2)_X. The dark matter interacts with standard model fermions by exchanging gauge bosons which are linear combinations of the SU(2)_X × U(1)_{B−L} gauge bosons. With the appropriate choice of representation for the new scalar multiplet whose vacuum expectation value spontaneously breaks the SU(2)_X symmetry, the relation between the new gauge boson masses can naturally lead to resonant pair annihilation of the dark matter. After exploring the parameter space of the new gauge couplings subject to constraints from collider data and the observed relic density, we use the results to evaluate the cross section of the dark matter scattering off nucleons and compare it with data from the latest direct detection experiments. We find allowed parameter regions that can be probed by future direct searches for dark matter and LHC searches for new particles.

  4. Positron annihilation in solid and liquid Ni

    International Nuclear Information System (INIS)

    New techniques have been developed for the study of metals via positron annihilation, which provide for in-situ melting of the samples and subsequent measurements via Doppler broadening of positron-annihilation radiation. Here we report the methods currently in use at our laboratory: ion implantation of 58Co and the use of Al2O3 crucibles for in-situ melting, followed by decomposition of the Doppler-broadened spectrum into a parabolic and a Gaussian component. Our earliest results obtained for pure Ni in the polycrystalline solid and in the liquid state are compared. An interesting similarity is reported between the distributions of the high-momentum (Gaussian) component for positrons annihilating in vacancies at high temperatures and those annihilating in liquid Ni.

  5. QCD in e+e- annihilation

    International Nuclear Information System (INIS)

    The promise of e+e- annihilation as an ideal laboratory to test Quantum Chromodynamics, QCD, has been the dominating theme in elementary particle physics during the last several years. An attempt is made to partially survey the subject in the deep perturbative region of e+e- annihilation, where theoretical ambiguities are minimal. Topics discussed include a review of the renormalization group methods relevant for e+e- annihilation, the total hadronic cross section, jets and large-p_T phenomena, non-perturbative quark and gluon fragmentation effects, and an analysis of the jet distributions measured at DORIS, SPEAR and PETRA. My hope is to review realistic tests of QCD in e+e- annihilation - as opposed to the ultimate tests, which abound in the literature. (orig.)

  6. Compton Scattering, Pair Annihilation and Pair Production in a Plasma

    OpenAIRE

    Krishan, Vinod

    1999-01-01

    The square of the four momentum of a photon in vacuum is zero. However, in an unmagnetized plasma it is equal to the square of the plasma frequency. Further, the electron-photon coupling vertex is modified in a plasma to include the effect of the plasma medium. I calculate the cross sections of the three processes - the Compton scattering, electron-positron pair annihilation and production in a plasma. At high plasma densities, the cross sections are found to change significantly. Such high p...

  7. Positron annihilation with core and valence electrons

    CERN Document Server

    Green, D G

    2015-01-01

    γ-ray spectra for positron annihilation with the core and valence electrons of the noble gas atoms Ar, Kr and Xe are calculated within the framework of diagrammatic many-body theory. The effect of positron-atom and short-range positron-electron correlations on the annihilation process is examined in detail. Short-range correlations, which are described through non-local corrections to the vertex of the annihilation amplitude, are found to significantly enhance the spectra for annihilation on the core orbitals. For Ar, Kr and Xe, the core contributions to the annihilation rate are found to be 0.55%, 1.5% and 2.2% respectively, their small values reflecting the difficulty for the positron to probe distances close to the nucleus. Importantly, however, the core subshells have a broad momentum distribution and markedly contribute to the annihilation spectra at Doppler energy shifts ≳3 keV, and even dominate the spectra of Kr and Xe at shifts ≳5 keV. Their inclusion brings the theoretical ...

  8. The dark matter annihilation boost from low-temperature reheating

    Science.gov (United States)

    Erickcek, Adrienne L.

    2015-11-01

    The evolution of the Universe between inflation and the onset of big bang nucleosynthesis is difficult to probe and largely unconstrained. This ignorance profoundly limits our understanding of dark matter: we cannot calculate its thermal relic abundance without knowing when the Universe became radiation dominated. Fortunately, small-scale density perturbations provide a probe of the early Universe that could break this degeneracy. If dark matter is a thermal relic, density perturbations that enter the horizon during an early matter-dominated era grow linearly with the scale factor prior to reheating. The resulting abundance of substructure boosts the annihilation rate by several orders of magnitude, which can compensate for the smaller annihilation cross sections that are required to generate the observed dark matter density in these scenarios. In particular, thermal relics with masses less than a TeV that thermally and kinetically decouple prior to reheating may already be ruled out by Fermi-LAT observations of dwarf spheroidal galaxies. Although these constraints are subject to uncertainties regarding the internal structure of the microhalos that form from the enhanced perturbations, they open up the possibility of using gamma-ray observations to learn about the reheating of the Universe.

  9. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew...... characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look-out etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds...... probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving a...
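The two-step structure described above (geometric collision candidates from "blind navigation", scaled by a causation factor for the probability that the navigators fail to resolve a developing critical situation) reduces to a simple product; the numbers below are assumed for illustration only:

```python
def expected_collisions(geometric_candidates_per_year, causation_prob):
    """Standard two-step ship-collision model: the geometric number of collision
    candidates (ships on collision course if no evasive action is taken) is
    scaled by the causation factor, i.e. the probability that the critical
    situation is not resolved by the navigators."""
    return geometric_candidates_per_year * causation_prob

# Illustrative (assumed) numbers: 50 geometric candidates per year and a
# causation factor of 2e-4 give about 0.01 expected collisions per year.
n = expected_collisions(50.0, 2.0e-4)
```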

  10. Probability density fittings of corrosion test-data: Implications on C6H15NO3 effectiveness on concrete steel-rebar corrosion

    Indian Academy of Sciences (India)

    Joshua Olusegun Okeniyi; Idemudia Joshua Ambrose; Stanley Okechukwu Okpala; Oluwafemi Michael Omoniyi; Isaac Oluwaseun Oladele; Cleophas Akintoye Loto; Patricia Abimbola Idowu Popoola

    2014-06-01

    In this study, corrosion test-data of steel-rebar in concrete were subjected to fittings of the Normal, Gumbel and Weibull probability distribution functions. This was done to investigate the suitability of the fitted test-data, by these distributions, for modelling the effectiveness of C6H15NO3 (triethanolamine, TEA) admixtures on the corrosion of steel-rebar in concrete in NaCl and in H2SO4 test-media. For this, six different concentrations of TEA were admixed in replicates of steel-reinforced concrete samples which were immersed in the saline/marine and the microbial/industrial simulating test-environments for seventy-five days. From these, distribution fittings of the non-destructive electrochemical measurements were subjected to the Kolmogorov-Smirnov goodness-of-fit statistics and to analyses of variance modelling for studying test-data compatibility with the fittings and for testing significance. Although all fittings of test-data followed similar trends of significance testing, the fittings of the corrosion-rate test-data followed the Weibull more closely than the Normal and the Gumbel distribution fittings, thus supporting use of the Weibull fittings for modelling effectiveness. The effectiveness models of rebar corrosion based on these identified 0.083% TEA with optimal inhibition efficiency, η = 72.17 ± 10.68%, in the NaCl medium, while 0.667% TEA was the only admixture with positive effectiveness, η = 56.45 ± 15.85%, in the H2SO4 medium. These results bear implications on the concentrations of TEA for effective corrosion protection of concrete steel-rebar in saline/marine and in industrial/microbial environments.
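The inhibition efficiency η quoted above is conventionally computed from the corrosion rates of the control and the admixed samples; the sketch below assumes that standard definition, with hypothetical corrosion-rate values chosen only to reproduce the 72.17% figure:

```python
def inhibition_efficiency(cr_blank, cr_inhibited):
    """Inhibition efficiency (%) from corrosion rates of the control (blank)
    and inhibitor-admixed samples:
        eta = (CR_blank - CR_inhibited) / CR_blank * 100
    (the conventional definition; the paper's exact estimator may differ)."""
    return (cr_blank - cr_inhibited) / cr_blank * 100.0

# Hypothetical corrosion rates (arbitrary units): an inhibited rate of
# 0.2783x the blank rate corresponds to eta = 72.17%.
eta = inhibition_efficiency(cr_blank=1.0, cr_inhibited=0.2783)
```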

  11. Particle creation and annihilation at interior boundaries: One-dimensional models

    CERN Document Server

    Keppeler, Stefan

    2015-01-01

    We describe creation and annihilation of particles at external sources in one spatial dimension in terms of interior-boundary conditions (IBCs). We derive explicit solutions for spectra, (generalised) eigenfunctions, as well as Green functions, spectral determinants, and integrated spectral densities. Moreover, we introduce a quantum graph version of IBC-Hamiltonians.

  12. Particle creation and annihilation at interior boundaries: one-dimensional models

    Science.gov (United States)

    Keppeler, Stefan; Sieber, Martin

    2016-03-01

    We describe creation and annihilation of particles at external sources in one spatial dimension in terms of interior-boundary conditions (IBCs). We derive explicit solutions for spectra, (generalised) eigenfunctions, as well as Green functions, spectral determinants, and integrated spectral densities. Moreover, we introduce a quantum graph version of IBC-Hamiltonians.

  13. Positron Annihilation Technique is a Powerful Nuclear Technique in Material Sciences

    International Nuclear Information System (INIS)

    Positron Annihilation Doppler Broadening Spectroscopy (PADPS) is a nondestructive technique used in materials science. Electrical measurements are among the oldest techniques used in materials science as well. This paper aims to assess the use of both PADPS and electrical measurements as diagnostic techniques to detect defects in a set of plastically deformed 5454 wrought aluminum alloy samples. The results of the positron annihilation measurements and the electrical measurements were analyzed in terms of the two-state trapping model. This model can be used to investigate both the defect and dislocation densities of the samples under investigation. Results obtained by both the nuclear and the electrical techniques are reported.

  14. Positron annihilation lifetime characterization of oxygen ion irradiated rutile TiO2

    Science.gov (United States)

    Luitel, Homnath; Sarkar, A.; Chakrabarti, Mahuya; Chattopadhyay, S.; Asokan, K.; Sanyal, D.

    2016-07-01

    Ferromagnetic ordering at room temperature has been induced in the rutile phase of a polycrystalline TiO2 sample by O-ion irradiation. The defects induced in the rutile TiO2 sample by 96 MeV O ions have been characterized by positron annihilation spectroscopic techniques. The positron annihilation results indicate the formation of cation vacancies (VTi, Ti vacancies) in the irradiated TiO2 samples. Ab initio density functional theoretical calculations indicate that a magnetic moment can be induced in TiO2 by creating either Ti or O vacancies.

  15. Enhanced Dark Matter Annihilation Rate for Positron and Electron Excesses from Q-ball Decay

    OpenAIRE

    McDonald, John

    2009-01-01

    We show that Q-ball decay in Affleck-Dine baryogenesis models can account for dark matter when the annihilation cross-section is sufficiently enhanced to explain the positron and electron excesses observed by PAMELA, ATIC and PPB-BETS. For Affleck-Dine baryogenesis along a d = 6 flat direction, the reheating temperature is approximately 30 GeV and the Q-ball decay temperature is in the range 10-100 MeV. The LSPs produced by Q-ball decay annihilate down to the observed dark matter density if t...

  16. Decaying vs annihilating dark matter in light of a tentative gamma-ray line

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, Wilfried; Garny, Mathias

    2012-06-15

    Recently reported tentative evidence for a gamma-ray line in the Fermi-LAT data is of great potential interest for identifying the nature of dark matter. We compare the implications for decaying and annihilating dark matter, taking into account the constraints from continuum gamma-rays, the antiproton flux and the morphology of the excess. We find that higgsino and wino dark matter are excluded, also for nonthermal production. Generically, the continuum gamma-ray flux severely constrains annihilating dark matter. Consistency of decaying dark matter with the spatial distribution of the Fermi-LAT excess would require an enhancement of the dark matter density near the Galactic center.

  17. Revisiting the constraints on annihilating dark matter by the radio observational data of M31

    Science.gov (United States)

    Chan, Man Ho

    2016-07-01

    Recent gamma-ray observations and radio observations put strong constraints on the parameters of dark matter annihilation. In this article, we derive new constraints for six standard model annihilation channels by using the recent radio data of the M31 galaxy. The new constraints are generally tighter than the constraints obtained from 6 years of Fermi Large Area Telescope gamma-ray observations of the Milky Way dwarf spheroidal satellite galaxies. The conservative lower limits of dark matter mass annihilating via the bb̄, μ+μ− and τ+τ− channels are 90, 90 and 80 GeV respectively, with the canonical thermal relic cross section and the Burkert profile as the dark matter density profile. Hence, our results do not favor the most popular models of the dark matter interpretation of the Milky Way GeV gamma-ray excess.

  18. Revisiting the constraints on annihilating dark matter by radio observational data of M31

    CERN Document Server

    Chan, Man Ho

    2016-01-01

    Recent gamma-ray observations and radio observations put strong constraints on the parameters of dark matter annihilation. In this article, we derive new constraints for six standard model annihilation channels by using the recent radio data of M31 galaxy. The new constraints are generally tighter than the constraints obtained from 6 years of Fermi Large Area Telescope gamma-ray observations of the Milky Way dwarf spheroidal satellite galaxies. The conservative lower limits of dark matter mass annihilating via $b\\bar{b}$, $\\mu^+\\mu^-$ and $\\tau^+\\tau^-$ channels are 90 GeV, 90 GeV and 80 GeV respectively with the canonical thermal relic cross section and the Burkert profile being the dark matter density profile. Hence, our results do not favor the most popular models of the dark matter interpretation of the Milky Way GeV gamma-ray excess.

  19. Observational Constraints of 30–40 GeV Dark Matter Annihilation in Galaxy Clusters

    Directory of Open Access Journals (Sweden)

    Man Ho Chan

    2016-01-01

    Recently, it has been shown that the annihilation of 30–40 GeV dark matter particles through the bb̄ channel can satisfactorily explain the excess GeV gamma-ray spectrum near the Galactic Center. In this paper, we apply the above model to galaxy clusters and use the latest upper limits of the gamma-ray flux derived from Fermi-LAT data to obtain an upper bound on the annihilation cross section of dark matter. By considering the extended density profiles and the cosmic-ray profile models of 49 galaxy clusters, the upper bound on the annihilation cross section can be further tightened to ⟨σv⟩ ≤ 9 × 10⁻²⁶ cm³ s⁻¹. This result is consistent with the one obtained from the data near the Galactic Center.

  20. W-WIMP Annihilation as a Source of the Fermi Bubbles

    CERN Document Server

    Anchordoqui, Luis Alfredo

    2013-01-01

    The Fermi Gamma-ray Space Telescope discovered two $\gamma$-ray emitting bubble-shaped structures that extend nearly symmetrically on either side of our Galaxy and appear morphologically connected to the Galactic Center. The origin of the emission is still not clearly understood. It was recently shown that the spectral shape of the emission from the Fermi Bubbles is well described by an approximately 10 GeV dark matter particle annihilating to $\tau^+\tau^-$, with a normalization corresponding to a velocity-averaged annihilation cross section of $\langle \sigma v \rangle \sim 2 \times 10^{-27}$ cm$^3$/s. We study the nominal hidden sector recently introduced by Weinberg and examine to what extent its weakly-interacting massive particles (W-WIMPs) are capable of accommodating both the desired effective annihilation rate into tau leptons and the observed relic density.

  1. SUSY Implications from WIMP Annihilation into Scalars at the Galactic Center

    CERN Document Server

    Medina, Anibal D

    2015-01-01

    An excess in $\\gamma$-rays emanating from the galactic centre has recently been observed in the Fermi-LAT data. We investigate the new exciting possibility of fitting the signal spectrum by dark matter annihilating dominantly to a Higgs-pseudoscalar pair. We show that the fit to the $\\gamma$-ray excess for the Higgs-pseudoscalar channel can be just as good as for annihilation into bottom-quark pairs. This channel arises naturally in a full model such as the next-to-minimal supersymmetric Standard Model (NMSSM) and we find regions where dark matter relic density, the $\\gamma$-ray signal and other experimental constraints, can all be satisfied simultaneously. Annihilation into scalar pairs allows for the possibility of detecting the Higgs or pseudoscalar decay into two photons, providing a smoking-gun signal of the model.

  2. Pair annihilation radiation from relativistic jets in gamma-ray blazars

    CERN Document Server

    Böttcher, M

    1995-01-01

    The contribution of the pair annihilation process in relativistic electron-positron jets to the gamma-ray emission of blazars is calculated. Under the same assumptions as for the calculation of the yield of inverse Compton scattered accretion disk radiation (Dermer and Schlickeiser 1993) we calculate the emerging pair annihilation radiation taking into account all spectral broadening effects due to the energy spectra of the annihilating particles and the bulk motion of the jet. It is shown that the time-integrated pair annihilation spectrum appears almost like the well-known gamma-ray spectrum from decaying $\pi^0$ mesons at rest, yielding a broad bumpy feature located between 50 and 100 MeV. We also demonstrate that for pair densities $> 10^9$ cm$^{-3}$ in the jet the annihilation radiation will dominate the inverse Compton radiation, and indeed may explain reported spectral bumps at MeV energies. The refined treatment of the inverse Compton radiation leads to spectral breaks of the inverse Compton emission in the...

  3. SUSY-QCD corrections to the (co)annihilation of neutralino dark matter within the MSSM

    Energy Technology Data Exchange (ETDEWEB)

    Meinecke, Moritz

    2015-06-15

    Based on experimental observations, it is nowadays assumed that a large component of the matter content of the universe is comprised of so-called cold dark matter. Furthermore, the latest measurements of the temperature fluctuations of the cosmic microwave background provide an estimate of the dark matter relic density with a measurement error of one percent (concerning the experimental 1σ error). The lightest neutralino χ⁰₁, a particle which falls under the phenomenologically interesting category of weakly interacting massive particles, is a viable dark matter candidate for many supersymmetric (SUSY) models whose relic density Ω_{χ⁰₁} happens to lie quite naturally within the experimentally favored ballpark of dark matter. The high experimental precision can be used to constrain the SUSY parameter space to its cosmologically favored regions and to pin down phenomenologically interesting scenarios. However, to actually benefit from this progress on the experimental side it is also mandatory to minimize the theoretical uncertainties. An important quantity in the calculation of the neutralino relic density is the thermally averaged sum over different annihilation and coannihilation cross sections of the neutralino and further supersymmetric particles. It is now assumed, and also partly proven, that these cross sections can be subject to large loop corrections which can even shift the associated Ω_{χ⁰₁} by a factor larger than the current experimental error. However, most of these corrections are yet unknown. In this thesis, we calculate higher-order corrections for some of the most important (co)annihilation channels within the framework of the R-parity-conserving Minimal Supersymmetric Standard Model (MSSM) and investigate their impact on the final neutralino relic density Ω_{χ⁰₁}. More precisely, this work provides the full O(α_s) corrections of supersymmetric quantum chromodynamics (SUSY
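    For orientation, the thermally averaged annihilation cross section discussed in this record controls the relic abundance through the standard freeze-out estimate (a textbook relation, not a result of the thesis itself):

    ```latex
    \Omega_{\chi} h^{2} \;\simeq\; \frac{3 \times 10^{-27}\,\mathrm{cm^{3}\,s^{-1}}}{\langle \sigma_{\mathrm{ann}} v \rangle},
    \qquad
    \langle \sigma_{\mathrm{ann}} v \rangle = \sum_{i,j} \langle \sigma_{ij} v_{ij} \rangle \,
    \frac{n_{i}^{\mathrm{eq}} n_{j}^{\mathrm{eq}}}{\left(\sum_{k} n_{k}^{\mathrm{eq}}\right)^{2}}
    ```

    The second expression is the usual coannihilation-weighted average over the neutralino and the nearly degenerate SUSY states, which is why loop corrections to individual (co)annihilation channels propagate directly into Ω_{χ⁰₁}.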

  4. Propensity, Probability, and Quantum Theory

    Science.gov (United States)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
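    The frequency view in class (b), and the Law of Large Numbers that the paper says propensity supports, can be illustrated with a toy simulation (illustrative only; `relative_frequency` is a name of my choosing):

    ```python
    import random

    def relative_frequency(p: float, n: int, seed: int = 0) -> float:
        """Relative frequency of 'success' in n Bernoulli(p) trials."""
        rng = random.Random(seed)
        hits = sum(rng.random() < p for _ in range(n))
        return hits / n

    # As n grows, the relative frequency converges to p (Law of Large Numbers).
    for n in (100, 10_000, 1_000_000):
        print(n, relative_frequency(0.3, n))
    ```

    The convergence of such frequencies is exactly what an ensemble or propensity interpretation must reproduce, whereas Bayes' theorem additionally requires inverting conditional probabilities, which is where Humphreys' argument bites.
    
    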

  5. Antinucleon nucleon annihilations into two mesons

    International Nuclear Information System (INIS)

    We study two aspects of the antinucleon-nucleon annihilation into two mesons (antiNN → M1M2), starting from simple Born diagrams. On the one hand, we discuss the possibility of modelling the antiNN optical potential with the box diagrams related to the M1M2 channels. We include the lightest pseudoscalar, scalar and vector mesons with effective coupling constants. Many more channels appear to be needed in order to achieve sensible results. On the other hand, we show that a simple phenomenological optical potential, successful in reproducing antiNN elastic scattering and total annihilation data, can be further used to make predictions for the antiNN → M1M2 processes, which prove to be in good agreement with experiment. We find a lower bound of 17% on the relative contribution of these reactions to the antiNN annihilation. The model also favours a rather small effective radius for the nucleon

  6. Detecting electron neutrinos from solar dark matter annihilation by JUNO

    OpenAIRE

    Guo, Wan-Lei

    2015-01-01

    We explore the electron neutrino signals from light dark matter (DM) annihilation in the Sun for the large liquid scintillator detector JUNO. In terms of the spectrum features of three typical DM annihilation channels $\\chi \\chi \\rightarrow \

  7. Onset of exciton-exciton annihilation in single layer black phosphorus

    OpenAIRE

    Surrente, A.; Mitioglu, A. A.; Galkowski, K.; Klopotowski, L.; Tabis, W.; Vignolle, B.; Maude, D. K.; Plochocka, P.

    2016-01-01

    The exciton dynamics in monolayer black phosphorus is investigated over a very wide range of photoexcited exciton densities using time resolved photoluminescence. At low excitation densities, the exciton dynamics is successfully described in terms of a double exponential decay. With increasing exciton population, a fast, non-exponential component develops as exciton-exciton annihilation takes over as the dominant recombination mechanism under high excitation conditions. Our results identify a...

  8. Black Holes as Dark Matter Annihilation 'Boosters'

    International Nuclear Information System (INIS)

    The presence and growth of Intermediate and Supermassive Black Holes modify the surrounding distribution of stars and Dark Matter, and inevitably affect the prospects for indirectly detecting Dark Matter through its annihilation products. We show here that under specific circumstances, Black Holes can act as Dark Matter annihilation 'boosters'. In particular, we show that mini-spikes, i.e. Dark Matter overdensities around Intermediate-Mass Black Holes, would be bright sources of gamma-rays, well within the reach of the space telescope GLAST, that can be discriminated from ordinary astrophysical sources thanks to their peculiar energy spectrum and spatial distribution

  9. Antiproton-proton annihilation at rest into two mesons

    International Nuclear Information System (INIS)

    Branching ratios for antiproton-proton annihilations at rest into two mesons are given. The data were obtained at LEAR by stopping antiprotons in a liquid hydrogen target. Both charged and neutral annihilation products were detected in the Crystal Barrel detector. Representative data are presented, and their bearing on the general picture of annihilation dynamics is discussed. In addition, preliminary branching ratios for two-body radiative annihilations are given. (orig.)

  10. Electronic Structure of Rare-Earth Metals. II. Positron Annihilation

    DEFF Research Database (Denmark)

    Williams, R. W.; Mackintosh, Allan

    1968-01-01

    The angular correlation of the photons emitted when positrons annihilate with electrons has been studied in single crystals of the rare-earth metals Y, Gd, Tb, Dy, Ho, and Er, and in a single crystal of an equiatomic alloy of Ho and Er. A comparison of the results for Y with the calculations of Loucks shows that the independent-particle model gives a good first approximation to the angular distribution, although correlation effects probably smear out some of the structure. The angular distributions from the heavy rare-earth metals are very similar to that from Y and can be understood qualitatively in terms of the relativistic augmented-plane-wave calculations by Keeton and Loucks. The angular distributions in the c direction in the paramagnetic phases are characterized by a rapid drop at low angles followed by a hump, and these features are associated with rather flat regions of the Fermi surface

  11. Electrochemical and positron annihilation studies of semicarbazones and thiosemicarbazones derived from ferrocene

    International Nuclear Information System (INIS)

    A series of six ferrocene derivatives containing a semicarbazone or thiosemicarbazone side chain was investigated by cyclic voltammetry and positron annihilation lifetime measurements. Both the redox and the electron capture processes took place on the Fe atom. Correlations between the two methods were proposed, taking into account the substituents on the side chain of the compounds, their redox potentials and the probabilities of ortho-positronium (o-Ps) formation. (author)

  12. On composition of probability density functions

    Czech Academy of Sciences Publication Activity Database

    Kracík, Jan

    Adelaide: Advanced Knowledge International, 2004 - (Andrýsek, J.; Kárný, M.; Kracík, J.), s. 113-121. (International Series on Advanced Intelligence, 9). ISBN 0-9751004-5-9. [Workshop on Computer-Intensive Methods in Control and Data Processing 2004. Prague (CZ), 12.05.2004-14.05.2004] R&D Projects: GA ČR GA102/03/0049 Institutional research plan: CEZ:AV0Z1075907 Keywords: multiple participant decision making * sensor fusion Subject RIV: BB - Applied Statistics, Operational Research

  13. Positron and Positronium Annihilation Lifetime, and Free Volume in Polymers.

    Science.gov (United States)

    Yu, Zhibin

    1995-01-01

    Positron annihilation lifetime measurements were carried out for six polycarbonates of different structures and four polystyrenes of different molecular weight over a wide temperature range covering the glass transition region. The o-Ps mean lifetime is very sensitive to the changes of free volume in those polymers which occur due to change of molecular structure, chain length, and temperature. The influence of the unavoidable e^{+} irradiation and physical aging on the mean lifetime and the intensity of o-Ps annihilation were studied by conducting time dependent measurements on both very aged and rejuvenated samples. Both irradiation and physical aging reduce the formation of positronium, but have no effect on the mean lifetime of Ps atoms. The free volume fraction h obtained from the positron lifetime measurements was compared with the prediction of the statistical mechanical theory of Simha and Somcynsky; good agreement was found in the melt state though clear deviations were observed in the glassy state. A free volume quantity, computed from the bulk volume, which is in a good numerical agreement with the Simha-Somcynsky h-function in the melt, gives improved agreement with the h value calculated from the positron lifetime measurements. To investigate certain anomalies observed in the computer analysis of the positron annihilation lifetime spectra on polymers, we developed a computer simulation of the experimental data, which then was used to test the accuracy of the fitting results in the different circumstances. The influence caused by a possible distribution of the o-Ps mean lifetimes and the width of the spectrometer time resolution function were studied. 
The theoretical connection between the o-Ps mean lifetime and the free volume hole size was reviewed based on a finite spherical potential well model, and the status of the localized Ps atom in polymers was evaluated by calculation of the barrier transmission probability and the escaping probability of the
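    The lifetime-to-hole-size connection reviewed in this thesis is most often written in the Tao-Eldrup form (an infinite spherical-well approximation; the finite-well model mentioned in the abstract refines it). A minimal sketch, taking the empirical electron-layer thickness ΔR ≈ 1.66 Å as an assumed standard value:

    ```python
    import math

    # Empirical electron-layer thickness (angstrom) in the Tao-Eldrup model.
    DELTA_R = 1.66

    def ops_lifetime_ns(hole_radius: float) -> float:
        """o-Ps pick-off lifetime (ns) in a spherical free-volume hole of
        radius hole_radius (angstrom), Tao-Eldrup approximation."""
        r0 = hole_radius + DELTA_R
        x = hole_radius / r0
        return 0.5 / (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))

    # Larger holes give longer o-Ps lifetimes (typical polymer values ~2 ns).
    print(ops_lifetime_ns(3.0))
    ```

    In practice this relation is inverted numerically: the measured o-Ps mean lifetime is fed back through the formula to extract an average free-volume hole radius.
    
    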

  14. Interpretations of Negative Probabilities

    OpenAIRE

    Burgin, Mark

    2010-01-01

    In this paper, we give a frequency interpretation of negative probability, as well as of extended probability, demonstrating that to a great extent these new types of probabilities behave as conventional probabilities. Extended probability comprises both conventional probability and negative probability. The frequency interpretation of negative probabilities gives supportive evidence to the axiomatic system built in (Burgin, 2009; arXiv:0912.4767) for extended probability as it is demonstra...

  15. Probability Aggregates in Probability Answer Set Programming

    OpenAIRE

    Saad, Emad

    2013-01-01

    Probability answer set programming is a declarative programming that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. {\\em expected values}, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...

  16. Nanometer cavities studied by positron annihilation

    International Nuclear Information System (INIS)

    Positronium (Ps) is trapped in cavities in insulating solids, and the lifetime of ortho Ps is determined by the size of the cavity. The information on the properties of the cavities obtained by use of the standard slow positron beam and the 'normal' positron annihilation techniques is compared for several selected cases. (author)

  17. A positron annihilation study of hydrated DNA

    DEFF Research Database (Denmark)

    Warman, J. M.; Eldrup, Morten Mostgaard

    1986-01-01

    Positron annihilation measurements are reported for hydrated DNA as a function of water content and as a function of temperature (20 to -180.degree. C) for samples containing 10 and 50% wt of water. The ortho-positronium mean lifetime and its intensity show distinct variations with the degree of ...

  18. A compact positron annihilation lifetime spectrometer

    Institute of Scientific and Technical Information of China (English)

    李道武; 刘军辉; 章志明; 王宝义; 张天保; 魏龙

    2011-01-01

    Using LYSO scintillators coupled to HAMAMATSU R9800 fast photomultipliers to form small-size γ-ray detectors, a compact lifetime spectrometer has been built for positron annihilation experiments. The system time resolution is FWHM = 193 ps, and the co

  19. Black Holes as Dark Matter Annihilation Boosters

    OpenAIRE

    Mattia Fornasa (INFN Padova, IAP); Gianfranco Bertone (IAP)

    2007-01-01

    We review the consequences of the growth and evolution of Black Holes on the distribution of stars and Dark Matter (DM) around them. We focus in particular on Supermassive and Intermediate Mass Black Holes, and discuss under what circumstances they can lead to significant overdensities in the surrounding distribution of DM, thus effectively acting as DM annihilation boosters.

  20. Fermi-LAT constraints on dark matter annihilation cross section from observations of the Fornax cluster

    International Nuclear Information System (INIS)

    We analyze 2.8 years of data on 1–100 GeV photons from clusters of galaxies, collected with the Large Area Telescope onboard the Fermi satellite. By analyzing 49 nearby massive clusters located at high Galactic latitudes, we find no excess gamma-ray emission towards the directions of the galaxy clusters. Using flux upper limits, we show that the Fornax cluster provides the most stringent constraints on the dark matter annihilation cross section. Stacking a large sample of nearby clusters does not help improve the limit for most dark matter models. This suggests that a detailed modeling of the Fornax cluster is important for setting robust limits on the dark matter annihilation cross section based on clusters. We therefore perform the detailed mass modeling and predict the expected dark matter annihilation signals from the Fornax cluster, by taking into account effects of dark matter contraction and substructures. By modeling the mass distribution of baryons (stars and gas) around a central bright elliptical galaxy, NGC 1399, and using a modified contraction model motivated by numerical simulations, we show that the dark matter contraction boosts the annihilation signatures by a factor of 4. For dark matter masses around 10 GeV, the upper limit obtained on the annihilation cross section times relative velocity is ⟨σv⟩ ∼ 10⁻²⁵ cm³ s⁻¹, which is within a factor of 10 of the value required to explain the dark matter relic density. This effect is more robust than the annihilation boost due to substructure, and it is more important unless the mass of the smallest subhalos is much smaller than that of the Sun

  1. A Study on Positron - Electron Annihilation in Multi-Wall Carbon Nanotubes

    International Nuclear Information System (INIS)

    Positron annihilation in multi-wall carbon nanotubes is studied using positron lifetime measurements and positron diffusion theory in nanomaterials. Experimental measurements of the positron lifetime were performed on multi-wall carbon nanotube samples with various average radii. A close correlation between the positron lifetime on the tube surface and the tube radius was found, which indicates that positron lifetime measurement could become a useful means of determining the average diameter of multi-wall carbon nanotubes. Furthermore, in this work the positron diffusion theory in nanomaterials, adapted from the positron diffusion model in nanofilaments, was used to study the effect of the positron escape rate k from a nanotube on the positron annihilation characteristics. The results show a strong influence of the nanotube radius on the positron escape rate and the positron lifetime. In addition, the value of the shape coefficient α for positron diffusion in multi-wall carbon nanotubes was calculated from experimental data. Using the average shape coefficient α, quite good agreement was obtained between the experimental and calculated values of the positron annihilation probability on the tube surface. These results demonstrate that the modified positron diffusion model for nanofilaments applies well to the study of positron annihilation in multi-wall carbon nanotubes. (author)

  2. Probability theory and mathematical statistics for engineers

    CERN Document Server

    Pugachev, V S

    2014-01-01

    Probability Theory and Mathematical Statistics for Engineers focuses on the concepts of probability theory and mathematical statistics for finite-dimensional random variables.The publication first underscores the probabilities of events, random variables, and numerical characteristics of random variables. Discussions focus on canonical expansions of random vectors, second-order moments of random vectors, generalization of the density concept, entropy of a distribution, direct evaluation of probabilities, and conditional probabilities. The text then examines projections of random vector

  3. Cosmological dynamics in tomographic probability representation

    OpenAIRE

    Man'ko, V. I.; G. Marmo(Università di Napoli and INFN, Napoli, Italy); Stornaiolo, C.

    2004-01-01

    The probability representation for quantum states of the universe in which the states are described by a fair probability distribution instead of wave function (or density matrix) is developed to consider cosmological dynamics. The evolution of the universe state is described by standard positive transition probability (tomographic transition probability) instead of the complex transition probability amplitude (Feynman path integral) of the standard approach. The latter one is expressed in te...

  4. Gaussian Probabilities and Expectation Propagation

    OpenAIRE

    Cunningham, John P.; Hennig, Philipp; Lacoste-Julien, Simon

    2011-01-01

    While Gaussian probability densities are omnipresent in applied mathematics, Gaussian cumulative probabilities are hard to calculate in any but the univariate case. We study the utility of Expectation Propagation (EP) as an approximate integration method for this problem. For rectangular integration regions, the approximation is highly accurate. We also extend the derivations to the more general case of polyhedral integration regions. However, we find that in this polyhedral case, EP's answer...

  5. Modification of steel surfaces induced by turning: non-destructive characterization using Barkhausen noise and positron annihilation

    Science.gov (United States)

    Čížek, J.; Neslušan, M.; Čilliková, M.; Mičietová, A.; Melikhova, O.

    2014-11-01

    This paper deals with the characterization of sub-surface damage caused by the machining of 100Cr6 roll bearing steel. The samples, turned using tools with variable flank wear, were characterized by two non-destructive techniques sensitive to defects introduced by plastic deformation: magnetic Barkhausen noise and positron annihilation. These techniques were combined with light and electron microscopy, x-ray diffraction and microhardness testing. The results of the experiment showed that damage in the sub-surface region increases with increasing flank wear, but beyond a certain critical value dynamic recovery takes place. The intensity of the Barkhausen noise decreases strongly with increasing flank wear due to the increasing density of dislocations pinning the Bloch walls and suppressing their motion. This was confirmed by positron annihilation spectroscopy, which enables the dislocation density to be determined directly. Hence, a good correlation between Barkhausen noise emission and positron annihilation spectroscopy was found.

  6. Positron annihilation: the ACAR method measures electron momentum distribution

    International Nuclear Information System (INIS)

    When a positron annihilates with an electron, the energy is dissipated preferentially in the form of two nearly antiparallel 0.511 MeV γ-rays, whose angular deviation and Doppler shift correlate with the electron momentum density. The Geneva group has built a system which permits the precise and efficient measurement of the ACAR radiation. In ordinary metals, where independent-particle methods (IPM) apply, there is often satisfactory agreement between measured and calculated Two-Particle Momentum Distributions (TPMD). The same is true for the Fermi surfaces which can be constructed from the TPMD. The effect of correlations can be handled as a perturbation. In the case of oxides we have found no convincing agreement between theory and experiment so far. We are working to improve the apparatus, experiment and theory, and hope also to understand our results in high-temperature superconductors (high-Tc SC)
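    The kinematics behind the ACAR and Doppler techniques can be summarized by the standard relations (added here for orientation; they are not spelled out in the record): the small deviation θ from anti-collinearity and the Doppler shift ΔE of the 511 keV line measure, respectively, the transverse and longitudinal components of the annihilating pair's momentum,

    ```latex
    \theta \;\approx\; \frac{p_{\perp}}{m_{e} c},
    \qquad
    \Delta E \;\approx\; \frac{p_{z}\, c}{2}.
    ```

    Since the positron thermalizes before annihilating, the measured momentum distribution is dominated by the electrons, which is what makes ACAR a probe of the electron momentum density and the Fermi surface.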

  7. Positron annihilation spectroscopy of proton irradiated single crystal BCC iron

    Energy Technology Data Exchange (ETDEWEB)

    Okuniewski, Maria A. [Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois, Urbana-Champaign, 103 S. Goodwin Avenue, Urbana, IL 61801 (United States)]. E-mail: okuniews@uiuc.edu; Wells, Doug P. [Department of Physics, Idaho Accelerator Center, Idaho State University, Campus Box 8263, Pocatello, ID 83209 (United States); Selim, Farida A. [Department of Physics, Idaho Accelerator Center, Idaho State University, Campus Box 8263, Pocatello, ID 83209 (United States); Maloy, Stuart A. [Los Alamos National Laboratory, MST-8, P.O. Box 1663, Los Alamos, NM 87545 (United States); James, Michael R. [Los Alamos National Laboratory, D-5, P.O. Box 1663, Los Alamos, NM 87545 (United States); Stubbins, James F. [Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois, Urbana-Champaign, 103 S. Goodwin Avenue, Urbana, IL 61801 (United States); Deo, Chaitanya S. [Los Alamos National Laboratory, MST-8, P.O. Box 1663, Los Alamos, NM 87545 (United States); Srivilliputhur, Srinivasan G. [Los Alamos National Laboratory, MST-8, P.O. Box 1663, Los Alamos, NM 87545 (United States); Baskes, Michael I. [Los Alamos National Laboratory, MST-8, P.O. Box 1663, Los Alamos, NM 87545 (United States)

    2006-06-01

    Positron annihilation spectroscopy was used to analyze the open-volume defects created in single crystal, body-centered cubic iron irradiated with 1.0 MeV protons. The effects of irradiation dose and temperature were investigated. A novel technique utilizing a Bremsstrahlung beam to activate and induce positron decay in the bulk specimens, followed by Doppler broadening spectroscopy, was employed. No open-volume defects were detected in the 0.03 dpa irradiated specimens. However, the 0.3 dpa specimens exhibited an increase in the S parameter when compared to the 0.03 dpa specimens at 723 K. The 0.3 dpa specimen at 723 K indicated an increase in open-volume defects, as the radiation temperature increased compared to the 573 K, 0.3 dpa specimen. This was thought to be a consequence of the void and dislocation loop density decreasing while the void and dislocation loop diameter was increasing.

  8. Positron annihilation spectroscopy of proton irradiated single crystal BCC iron

    International Nuclear Information System (INIS)

    Positron annihilation spectroscopy was used to analyze the open-volume defects created in single crystal, body-centered cubic iron irradiated with 1.0 MeV protons. The effects of irradiation dose and temperature were investigated. A novel technique utilizing a Bremsstrahlung beam to activate and induce positron decay in the bulk specimens, followed by Doppler broadening spectroscopy, was employed. No open-volume defects were detected in the 0.03 dpa irradiated specimens. However, the 0.3 dpa specimens exhibited an increase in the S parameter when compared to the 0.03 dpa specimens at 723 K. The 0.3 dpa specimen at 723 K indicated an increase in open-volume defects, as the radiation temperature increased compared to the 573 K, 0.3 dpa specimen. This was thought to be a consequence of the void and dislocation loop density decreasing while the void and dislocation loop diameter was increasing

  9. Multi-messenger constraints and pressure from dark matter annihilation into e--e+ pairs

    International Nuclear Information System (INIS)

    Despite striking evidence for the existence of dark matter from astrophysical observations, dark matter has still escaped any direct or indirect detection until today. Therefore a proof of its existence and the revelation of its nature belong to the most intriguing challenges of present-day cosmology and particle physics. The present work tries to investigate the nature of dark matter through indirect signatures from dark matter annihilation into electron-positron pairs in two different ways: pressure from dark matter annihilation, and multi-messenger constraints on the dark matter annihilation cross-section. We focus on dark matter annihilation into electron-positron pairs and adopt a model-independent approach, where all the electrons and positrons are injected with the same initial energy E0 ∝ m_dm c². The propagation of these particles is determined by solving the diffusion-loss equation, considering inverse Compton scattering, synchrotron radiation, Coulomb collisions, bremsstrahlung, and ionization. The first part of this work, focusing on pressure from dark matter annihilation, demonstrates that dark matter annihilation into electron-positron pairs may affect the observed rotation curve by a significant amount. The injection rate of this calculation is constrained by INTEGRAL, Fermi, and H.E.S.S. data. The pressure of the relativistic electron-positron gas is computed from the energy spectrum predicted by the diffusion-loss equation. For values of the gas density and magnetic field that are representative of the Milky Way, it is estimated that the pressure gradients are strong enough to balance gravity in the central parts if E00. By comparing the predicted rotation curves with observations of dwarf and low surface brightness galaxies, we show that the pressure from dark matter annihilation may improve the agreement between theory and observations in some cases, but it also imposes severe constraints on the model parameters (most notably, the inner slope
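    The diffusion-loss equation referred to above is not written out in the record; in its standard cosmic-ray form (conventional symbols, assumed here) it reads

    ```latex
    \frac{\partial f}{\partial t}
    - K(E)\,\nabla^{2} f
    - \frac{\partial}{\partial E}\bigl[\,b(E)\,f\,\bigr]
    = Q(E, \mathbf{r}),
    ```

    where f is the e⁻/e⁺ number density per unit energy, K(E) the diffusion coefficient, b(E) the total energy-loss rate (inverse Compton, synchrotron, Coulomb, bremsstrahlung, ionization), and Q the monochromatic source term from dark matter annihilation.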

  10. Searching for dark matter annihilation in the Smith high-velocity cloud

    International Nuclear Information System (INIS)

    Recent observations suggest that some high-velocity clouds may be confined by massive dark matter halos. In particular, the proximity and proposed dark matter content of the Smith Cloud make it a tempting target for the indirect detection of dark matter annihilation. We argue that the Smith Cloud may be a better target than some Milky Way dwarf spheroidal satellite galaxies and use γ-ray observations from the Fermi Large Area Telescope to search for a dark matter annihilation signal. No significant γ-ray excess is found coincident with the Smith Cloud, and we set strong limits on the dark matter annihilation cross section assuming a spatially extended dark matter profile consistent with dynamical modeling of the Smith Cloud. Notably, these limits exclude the canonical thermal relic cross section (∼ 3 × 10–26 cm3 s–1) for dark matter masses ≲ 30 GeV annihilating via the b b-bar or τ+τ– channels for certain assumptions of the dark matter density profile; however, uncertainties in the dark matter content of the Smith Cloud may significantly weaken these constraints.

  11. Searching for Dark Matter Annihilation in the Smith High-Velocity Cloud

    Science.gov (United States)

    Drlica-Wagner, Alex; Gomez-Vargas, German A.; Hewitt, John W.; Linden, Tim; Tibaldo, Luigi

    2014-01-01

    Recent observations suggest that some high-velocity clouds may be confined by massive dark matter halos. In particular, the proximity and proposed dark matter content of the Smith Cloud make it a tempting target for the indirect detection of dark matter annihilation. We argue that the Smith Cloud may be a better target than some Milky Way dwarf spheroidal satellite galaxies and use gamma-ray observations from the Fermi Large Area Telescope to search for a dark matter annihilation signal. No significant gamma-ray excess is found coincident with the Smith Cloud, and we set strong limits on the dark matter annihilation cross section assuming a spatially extended dark matter profile consistent with dynamical modeling of the Smith Cloud. Notably, these limits exclude the canonical thermal relic cross section (approximately 3 x 10^-26 cubic centimeters per second) for dark matter masses less than or approximately 30 gigaelectronvolts annihilating via the b b-bar or tau+ tau- channels for certain assumptions of the dark matter density profile; however, uncertainties in the dark matter content of the Smith Cloud may significantly weaken these constraints.

  12. Modelling the flux distribution function of the extragalactic gamma-ray background from dark matter annihilation

    Science.gov (United States)

    Feyereisen, Michael R.; Ando, Shin'ichiro; Lee, Samuel K.

    2015-09-01

    The one-point function (i.e., the isotropic flux distribution) is a complementary method to (anisotropic) two-point correlations in searches for a gamma-ray dark matter annihilation signature. Using analytical models of structure formation and dark matter halo properties, we compute the gamma-ray flux distribution due to annihilations in extragalactic dark matter halos, as it would be observed by the Fermi Large Area Telescope. Combining the central limit theorem and Monte Carlo sampling, we show that the flux distribution takes the form of a narrow Gaussian of `diffuse' light, with an `unresolved point source' power-law tail as a result of bright halos. We argue that this background due to dark matter constitutes an irreducible and significant background component for point-source annihilation searches with galaxy clusters and dwarf spheroidal galaxies, modifying the predicted signal-to-noise ratio. A study of astrophysical backgrounds to this signal reveals that the shape of the total gamma-ray flux distribution is very sensitive to the contribution of a dark matter component, allowing us to forecast promising one-point upper limits on the annihilation cross-section. We show that by using the flux distribution at only one energy bin, one can probe the canonical cross-section required for explaining the relic density, for dark matter of masses around tens of GeV.
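
The flux-distribution shape described above (a narrow Gaussian bulk from the central limit theorem plus an "unresolved point source" power-law tail from rare bright halos) can be reproduced with a minimal Monte Carlo sketch. Every number here (power-law index, flux range, halo and trial counts) is an illustrative assumption, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source-count distribution dN/dF ~ F^(-alpha) on [fmin, fmax];
# all parameters are illustrative.
alpha, fmin, fmax = 2.5, 1e-3, 1e2
n_halos, n_trials = 1000, 5000

def sample_powerlaw(size):
    # Inverse-CDF sampling of a truncated power law with index -alpha.
    u = rng.random(size)
    a = 1.0 - alpha
    return (fmin**a + u * (fmax**a - fmin**a)) ** (1.0 / a)

# Total flux in a sky pixel = sum of many faint halo fluxes.
totals = sample_powerlaw((n_trials, n_halos)).sum(axis=1)

mean, std = totals.mean(), totals.std()
# Central limit theorem: the bulk of the distribution is a narrow Gaussian core...
frac_within_2sigma = np.mean(np.abs(totals - mean) < 2 * std)
# ...while rare bright halos produce a heavy upper tail (positive skewness).
skew = np.mean(((totals - mean) / std) ** 3)
print(frac_within_2sigma, skew)
```

The positive sample skewness is the footprint of the bright-halo tail sitting on top of the Gaussian "diffuse" core.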

  13. Positron scattering and annihilation from hydrogenlike ions

    International Nuclear Information System (INIS)

    The Kohn variational method is used with a configuration-interaction-type wave function to determine the J=0 and J=1 phase shifts and the annihilation parameter Zeff for positron-hydrogenic-ion scattering. The phase shifts are within 1-2% of the best previous calculations. The values of Zeff are small and do not exceed unity for any of the momenta considered. At thermal energies Zeff is minute, with a value of order 10^-50 occurring for He+ at k = 0.05 a0^-1. In addition to the variational calculations, analytic expressions for the phase shift and annihilation parameters within the Coulomb wave Born approximation are derived and used to help elucidate the dynamics of positron collisions with positive ions

  14. Shocking Signals of Dark Matter Annihilation

    CERN Document Server

    Davis, Jonathan H; Boehm, Celine; Kotera, Kumiko; Norman, Colin

    2015-01-01

    We examine whether charged particles injected by self-annihilating Dark Matter into regions undergoing Diffusive Shock Acceleration (DSA) can be accelerated to high energies. We consider three astrophysical sites where shock acceleration is expected to occur, namely the Galactic Centre, galaxy clusters and Active Galactic Nuclei (AGN). For the Milky Way, we find that the acceleration of cosmic rays injected by dark matter could lead to a bump in the cosmic ray spectrum provided that the product of the efficiency of the acceleration mechanism and the concentration of DM particles is high enough. Among the various acceleration sources that we consider (namely supernova remnants (SNRs), Fermi bubbles and AGN jets), we find that the Fermi bubbles are a potentially more efficient accelerator than SNRs. However, both could in principle accelerate electrons and protons injected by dark matter to very high energies. At the extragalactic level, the acceleration of dark matter annihilation products could be responsible fo...

  15. Recent development of positron annihilation methods

    CERN Document Server

    Doyama, M

    2002-01-01

    When a positron enters a solid or a liquid, it moves through the matter and, upon pair annihilation with an electron, two gamma rays of about 511 keV each are emitted in opposite directions. Positron annihilation experiments have been developed along three lines: the angular correlation between the two gamma rays, the energy analysis of the emitted gamma rays, and the positron lifetime. The angular correlation between the two gamma rays is determined with gamma-ray position detectors. The energy analysis is carried out by S-W analysis and by the Coincidence Doppler Broadening (CDB) method. Positron lifetimes are determined by the gamma-gamma lifetime measurement method, the beta+ -gamma lifetime measurement method, and other methods using the photomultiplier waveform and the timing and frequency of the gamma rays. Positron beams are applied to positron scattering, positron diffraction, low-energy positron diffraction (LEPD), PELS, LEPSD, PAES, the positron re-emission imaging microscope (PRIM) and positron channeling. The example of CDB method...

  16. Microemulsion systems studied by positron annihilation techniques

    International Nuclear Information System (INIS)

    The formation of thermalized positronium atoms is greatly reduced if increasing amounts of water become solubilized in reversed micelles formed by sodium bis(2-ethylhexyl) sulfosuccinate in apolar solvents. Similar observations have been made if the surfactant is Triton X-100. The application of the positron annihilation technique to the study of microemulsions consisting of potassium oleate-alcohol-oil-water mixtures indicates, consistent with previous results, that microemulsion formation requires a certain water/oil ratio if the oil is a long-chain aliphatic hydrocarbon such as hexadecane. This ratio is 0.4 in the case of a 1-pentanol- and 0.2 for a 1-hexanol-containing mixture. This minimum water content is strongly reduced if the oil is an aromatic hydrocarbon. The positron annihilation data also sensitively reflect structural rearrangements in these solutions occurring upon further addition of water, such as the transition of spherical aggregates to a disk-like lamellae structure

  17. Baryon production in e+e- annihilation

    International Nuclear Information System (INIS)

    The phenomenology of baryon production in high-energy e+e- annihilation is described. Much can be understood in terms of mass effects. Comparisons with the rates for different flavours and spins, with momentum and transverse momentum spectra and with particle correlations are used to confront models. Diquark models give good descriptions, except for the on/off Υ(1s) rates. Areas for experimental and theoretical development are indicated. (author)

  18. Positron Annihilation Lifetimes in Compacted Iron Powder

    International Nuclear Information System (INIS)

    Positron annihilation lifetime (PAL) spectroscopy has been performed on iron powder as a function of the compaction load. The ortho-positronium lifetime increases from 1.45 to 2.55 ns as the compaction load is raised from 30 to 50 tons. With increasing compaction load, the ultimate stress and hardness increase while the ductility decreases. The results show a direct correlation between the void size and the compaction load. These results will be presented and discussed

  19. Micellar systems studied by positron annihilation techniques

    International Nuclear Information System (INIS)

    The positron annihilation technique was applied to the study of the micelle formation process in aqueous and reversed micellar systems and to the determination of the site at which solubilizates become incorporated into the micelle. Furthermore, the effect of additives on the surfactant concentration at which a cooperative effect of reverse micellar solutions becomes observable was investigated and the location of the additives in aqueous micelles determined

  20. A positron annihilation study of hydrated DNA

    DEFF Research Database (Denmark)

    Warman, J. M.; Eldrup, Morten Mostgaard

    1986-01-01

    Positron annihilation measurements are reported for hydrated DNA as a function of water content and as a function of temperature (20 to -180.degree. C) for samples containing 10 and 50% wt of water. The ortho-positronium mean lifetime and its intensity show distinct variations with the degree of...... content until, for approximately 50% water, its properties resemble more those of a highly viscous fluid....

  1. On the Annihilation Rate of WIMPs

    OpenAIRE

    Baumgart, Matthew; Rothstein, Ira Z.; Vaidya, Varun

    2014-01-01

    We develop a formalism that allows one to systematically calculate the WIMP annihilation rate into gamma rays whose energy far exceeds the weak scale. A factorization theorem is presented which separates the radiative corrections stemming from initial state potential interactions from loops involving the final state. This separation allows us to go beyond the fixed order calculation, which is polluted by large infrared logarithms. For the case of Majorana WIMPs transforming in the adjoint rep...

  2. Surfaces of colloidal PbSe nanocrystals probed by thin-film positron annihilation spectroscopy

    International Nuclear Information System (INIS)

    Positron annihilation lifetime spectroscopy and positron-electron momentum density (PEMD) studies on multilayers of PbSe nanocrystals (NCs), supported by transmission electron microscopy, show that positrons are strongly trapped at NC surfaces, where they provide insight into the surface composition and electronic structure of PbSe NCs. Our analysis indicates abundant annihilation of positrons with Se electrons at the NC surfaces and with O electrons of the oleic ligands bound to Pb ad-atoms at the NC surfaces, which demonstrates that positrons can be used as a sensitive probe to investigate the surface physics and chemistry of nanocrystals inside multilayers. Ab initio electronic structure calculations provide detailed insight in the valence and semi-core electron contributions to the positron-electron momentum density of PbSe. Both lifetime and PEMD are found to correlate with changes in the particle morphology characteristic of partial ligand removal

  3. Surfaces of colloidal PbSe nanocrystals probed by thin-film positron annihilation spectroscopy

    Directory of Open Access Journals (Sweden)

    L. Chai

    2013-08-01

    Positron annihilation lifetime spectroscopy and positron-electron momentum density (PEMD) studies on multilayers of PbSe nanocrystals (NCs), supported by transmission electron microscopy, show that positrons are strongly trapped at NC surfaces, where they provide insight into the surface composition and electronic structure of PbSe NCs. Our analysis indicates abundant annihilation of positrons with Se electrons at the NC surfaces and with O electrons of the oleic ligands bound to Pb ad-atoms at the NC surfaces, which demonstrates that positrons can be used as a sensitive probe to investigate the surface physics and chemistry of nanocrystals inside multilayers. Ab initio electronic structure calculations provide detailed insight in the valence and semi-core electron contributions to the positron-electron momentum density of PbSe. Both lifetime and PEMD are found to correlate with changes in the particle morphology characteristic of partial ligand removal.

  4. Positron annihilation spectroscopy in condensed matter

    International Nuclear Information System (INIS)

    The topic of positron annihilation spectroscopy (PAS) is the investigation of all aspects connected with the annihilation of slow positrons. This work deals with the application of PAS to different problems of materials science. The first chapter is an introduction to fundamental aspects of positron annihilation, as far as they are important to the different experimental techniques of PAS. Chapter 2 is concerned with the information obtainable by PAS. The three main experimental techniques of PAS (2γ-angular correlation, positron lifetime and Doppler broadening) are explained and problems in the application of these methods are discussed. Chapter 3 contains experimental results. According to the different fields of application it was subgrouped into: 1. Investigations of crystalline solids. Detection of structural defects in Cu, estimation of defect concentrations, study of the sintering of Cu powders as well as lattice defects in V3Si. 2. Chemical investigations. Structure of mixed solvents, selective solvation of mixed solvents by electrolytes as well as the micellization of sodium dodecylsulphate in aqueous solutions. 3. Investigations of glasses. Influence of heat treatment and production technology on the preorder of X-amorphous silica glass as well as preliminary measurements of pyrocerams. 4. Investigations of metallic glasses. Demonstration of the influence of production technology on parameters measurable by PAS. Chapter 4 contains a summary as well as an outlook of further applications of PAS to surface physics, medicine, biology and astrophysics. (author)

  5. Annihilation of low energy antiprotons in silicon

    CERN Document Server

    Aghion, S; Belov, A S; Bonomi, G; Bräunig, P; Bremer, J; Brusa, R S; Burghart, G; Cabaret, L; Caccia, M; Canali, C; Caravita, R; Castelli, F; Cerchiari, G; Cialdi, S; Comparat, D; Consolati, G; Derking, J H; Di Domizio, S; Di Noto, L; Doser, M; Dudarev, A; Ferragut, R; Fontana, A; Genova, P; Giammarchi, M; Gligorova, A; Gninenko, S N; Haider, S; Harasimovic, J; Huse, T; Jordan, E; Jørgensen, L V; Kaltenbacher, T; Kellerbauer, A; Knecht, A; Krasnický, D; Lagomarsino, V; Magnani, A; Mariazzi, S; Matveev, V A; Moia, F; Nebbia, G; Nédélec, P; Pacifico, N; Petrácek, V; Prelz, F; Prevedelli, M; Regenfus, C; Riccardi, C; Røhne, O; Rotondi, A; Sandaker, H; Susa, A; Vasquez, M A Subieta; Špacek, M; Testera, G; Welsch, C P; Zavatarelli, S

    2013-01-01

    The goal of the AE$\mathrm{\bar{g}}$IS experiment at the Antiproton Decelerator (AD) at CERN is to measure directly the Earth's gravitational acceleration on antimatter. To achieve this goal, the AE$\mathrm{\bar{g}}$IS collaboration will produce a pulsed, cold (100 mK) antihydrogen beam with a velocity of a few 100 m/s and measure the magnitude of the vertical deflection of the beam from a straight path. The final position of the falling antihydrogen will be detected by a position sensitive detector. This detector will consist of an active silicon part, where the annihilations take place, followed by an emulsion part. Together, they allow one to achieve 1$%$ precision on the measurement of $\bar{g}$ with about 600 reconstructed and time tagged annihilations. We present here, to the best of our knowledge, the first direct measurement of antiproton annihilation in a segmented silicon sensor, the first step towards designing a position sensitive silicon detector for the AE$\mathrm{\bar{g}}$IS experiment. We also pr...

  6. Electroweak fragmentation functions for dark matter annihilation

    International Nuclear Information System (INIS)

    Electroweak corrections can play a crucial role in dark matter annihilation. The emission of gauge bosons, in particular, leads to a secondary flux consisting of all Standard Model particles, and may be described by electroweak fragmentation functions. To assess the quality of the fragmentation function approximation to electroweak radiation in dark matter annihilation, we have calculated the flux of secondary particles from gauge-boson emission in models with Majorana fermion and vector dark matter, respectively. For both models, we have compared cross sections and energy spectra of positrons and antiprotons after propagation through the galactic halo in the fragmentation function approximation and in the full calculation. Fragmentation functions fail to describe the particle fluxes in the case of Majorana fermion annihilation into light fermions: the helicity suppression of the lowest-order cross section in such models cannot be lifted by the leading logarithmic contributions included in the fragmentation function approach. However, for other classes of models like vector dark matter, where the lowest-order cross section is not suppressed, electroweak fragmentation functions provide a simple, model-independent and accurate description of secondary particle fluxes

  7. Delayed annihilation of antiprotons in helium gas

    International Nuclear Information System (INIS)

    The delayed annihilation of antiprotons, which was recently discovered in liquid 4He at KEK, has been studied at CERN in gas-phase 4He and 3He. The annihilation time spectra in 4He gas at various pressures were found to be similar to that for liquid 4He. The observed average lifetime in the region t > 1 μsec for 3 atm 4He was about 3.2 μsec, while for 3 atm 3He gas it was 2.8 μsec, i.e. shorter by 15%. The time spectra show a growth-decay type function, which is indicative of the presence of a series of metastable states. For 4He and 3He they have nearly identical shapes, differing only in the time scale by 14 ± 3%. These observations are qualitatively consistent with the atomic model of p̄-He++ proposed by Condo. The time spectra were found to be sensitive to the presence of small amounts (as small as 20 ppm) of H2. No evidence was seen for delayed annihilation in gaseous Ne. (author)

  8. Asymptotic Expansions of the Probability Density Function and the Distribution Function of the Chi-Square Distribution

    Institute of Scientific and Technical Information of China (English)

    陈刚; 王梦婕

    2014-01-01

    By standardizing the independent variable of the χ2 probability density function with n degrees of freedom, i.e. setting t = (x - n)/√(2n), the density can be expanded as $\sqrt{2n}\,\chi^2(x;n) = \left[1 + \frac{r_1(t)}{\sqrt{n}} + \frac{r_2(t)}{n} + \frac{r_3(t)}{n\sqrt{n}} + \frac{r_4(t)}{n^2}\right]\varphi(t) + o\!\left(\frac{1}{n^2}\right)$, where φ(t) is the density of the standard normal distribution and each ri(t) (1 ≤ i ≤ 4) is a polynomial of degree 3i in t. An approximate formula for the χ2 density follows from this expansion. We further establish integral recurrence relations for the power coefficients of φ(t) and thereby obtain an asymptotic expansion of the χ2 distribution function. Finally, numerical calculations verify the effectiveness of these results in practical applications.
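
At leading order, the expansion above reduces to the statement that the standardized χ2 density converges to the standard normal density φ(t) as the degrees of freedom grow. A minimal numerical check of that leading term (the correction polynomials ri(t) are not reproduced here; the χ2 density is evaluated directly via log-gamma):

```python
import numpy as np
from math import lgamma, pi

def chi2_pdf(x, n):
    # Chi-square density with n degrees of freedom, computed in log space
    # for numerical stability at large n.
    return np.exp((n / 2 - 1) * np.log(x) - x / 2
                  - (n / 2) * np.log(2.0) - lgamma(n / 2))

def phi(t):
    # Standard normal density.
    return np.exp(-0.5 * t * t) / np.sqrt(2 * pi)

def standardized_pdf(t, n):
    # sqrt(2n) * chi2(x; n) with t = (x - n) / sqrt(2n).
    x = n + t * np.sqrt(2 * n)
    return np.sqrt(2 * n) * chi2_pdf(x, n)

t = np.linspace(-2.0, 2.0, 9)
err_small = np.max(np.abs(standardized_pdf(t, 10) - phi(t)))
err_large = np.max(np.abs(standardized_pdf(t, 1000) - phi(t)))
print(err_small, err_large)  # the error shrinks as the degrees of freedom grow
```

The residual error at moderate n is exactly what the correction terms r1(t)/√n, r2(t)/n, ... in the expansion are designed to absorb.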

  9. Probability distributions of landslide volumes

    OpenAIRE

    M. T. Brunetti; Guzzetti, F.; M. Rossi

    2009-01-01

    We examine 19 datasets with measurements of landslide volume, VL, for sub-aerial, submarine, and extraterrestrial mass movements. Individual datasets include from 17 to 1019 landslides of different types, including rock fall, rock slide, rock avalanche, soil slide, slide, and debris flow, with individual landslide volumes ranging over 10−4 m3≤VL≤1013 m3. We determine the probability density of landslide volumes, p(VL), using kernel density estimation. Each landslide...
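
Kernel density estimation of this kind is straightforward to sketch. Below is a minimal Gaussian-kernel estimate on synthetic log10 volumes; the synthetic data and the Silverman rule-of-thumb bandwidth are illustrative assumptions, not the paper's catalogs or bandwidth choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for log10 landslide volumes (real catalogs span roughly
# 10^-4 to 10^13 m^3, so the density is naturally estimated in log space).
log_v = rng.normal(loc=3.0, scale=2.0, size=500)

def gaussian_kde(grid, data, bandwidth):
    # Sum of Gaussian kernels centred on the observations.
    u = (grid[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return kernels.sum(axis=1) / (len(data) * bandwidth)

grid = np.linspace(-4.0, 10.0, 281)
# Silverman's rule-of-thumb bandwidth for a Gaussian kernel (an assumption).
bw = 1.06 * log_v.std() * len(log_v) ** (-0.2)
density = gaussian_kde(grid, log_v, bw)

# Sanity check: the estimate should integrate to ~1 over the grid.
area = density.sum() * (grid[1] - grid[0])
print(bw, area)
```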

  10. On Probability Leakage

    OpenAIRE

    Briggs, William M

    2012-01-01

    The probability leakage of model M with respect to evidence E is defined. Probability leakage is a kind of model error. It occurs when M implies that events $y$, which are impossible given E, have positive probability. Leakage does not imply model falsification. Models with probability leakage cannot be calibrated empirically. Regression models, which are ubiquitous in statistical practice, often evince probability leakage.

  11. Weak annihilation cusp inside the dark matter spike about a black hole

    CERN Document Server

    Shapiro, Stuart L

    2016-01-01

    We reinvestigate the effect of annihilations on the distribution of collisionless dark matter (DM) in a spherical density spike around a massive black hole. We first construct a very simple, pedagogic, analytic model for an isotropic phase space distribution function that accounts for annihilation and reproduces the "weak cusp" found by Vasiliev for DM deep within the spike and away from its boundaries. The DM density in the cusp varies as $r^{-1/2}$ for $s$-wave annihilation, where $r$ is the distance from the central black hole, and is not a flat "plateau" profile. We then extend this model by incorporating a loss cone that accounts for the capture of DM particles by the hole. The loss cone is implemented by a boundary condition that removes capture orbits, resulting in an anisotropic distribution function. Finally, we evolve an initial spike distribution function by integrating the Boltzmann equation to show how the weak cusp grows and its density decreases with time. We treat two cases, one for $s$-wave a...

  12. Positron distribution contribution to changes in annihilation characteristics across Tc in high-temperature superconductors

    International Nuclear Information System (INIS)

    In this paper we present detailed calculations of the positron distribution in a host of high-temperature superconductors using the electron densities and potentials obtained from self-consistent orthogonalized linear combination of atomic orbitals band-structure calculations. The positron and electron densities obtained from the calculations are used to evaluate the electron-positron overlap function, which reveals that the major contribution to positron annihilation in these materials is from the oxygen atoms. A systematic correlation between the nature of this overlap function within the Cu-O cluster and the experimentally observed temperature dependence of the annihilation characteristics in the superconducting state is established: A decrease in positron annihilation parameters, below Tc, is observed when the overlap is predominantly from the apical oxygen atom, whereas an increase is observed if the overlap is predominantly from the planar oxygen atom. The observed temperature dependence of the positron parameters below Tc in all the high-Tc superconductors is understood in terms of an electron density transfer from the planar oxygen atoms to the apical oxygen atoms. These results are discussed in the light of charge-transfer models of superconductivity in the cuprate superconductors

  13. Probability 1/e

    Science.gov (United States)

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
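
One classic example of this phenomenon is the matching (derangement) problem: the probability that a random permutation of n items has no fixed point tends to 1/e as n grows. A quick Monte Carlo check (n and the trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Matching problem: n people draw hats at random; the probability that nobody
# receives their own hat (a derangement) tends to 1/e as n grows.
n, trials = 20, 100000
perms = np.argsort(rng.random((trials, n)), axis=1)  # uniform random permutations
no_fixed_point = np.all(perms != np.arange(n), axis=1)
estimate = no_fixed_point.mean()
print(estimate)  # close to 1/e ≈ 0.3679
```

Already at n = 20 the exact derangement probability agrees with 1/e to many decimal places, so the Monte Carlo estimate is limited only by sampling noise.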

  14. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    2013-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  15. Modeling momentum distributions of positron annihilation radiation in solids

    OpenAIRE

    Makkonen, Ilja

    2007-01-01

    Positron annihilation spectroscopy is a materials characterization method especially applicable for studying vacancy defects in solids. In typical crystal lattices positrons get trapped at vacancy-type defects. By measuring positron lifetimes and momentum distributions of positron annihilation radiation one obtains information about the open volumes and the chemical environments of the defects. Computational tools can be used in the analysis of positron annihilation experiments. Calculate...

  16. On e+e- annihilation to five photons

    International Nuclear Information System (INIS)

    The cross section for electron-positron annihilation into 5 photons at high energies is calculated using the method of helicity amplitudes. The kinematics considered corresponds to events in which, in the center-of-mass frame of the beams, the angles between the photon momenta and the beam axis are not small. Estimates of the total cross section for multiphoton annihilation of a pair at high energies are presented. The annihilation channels of orthopositronium into 3 and 5 photons are considered. Exact expressions for the helicity amplitudes are presented

  17. Symmetry and QED tests in rare annihilation modes of positronium

    International Nuclear Information System (INIS)

    Recent experiments on positronium annihilation have confirmed QED calculations at high orders of alpha and tested discrete fundamental symmetries. These measurements search for rare modes of annihilation which are distinguished from backgrounds by their specific decay signatures. New developments in beyond Standard Model theory provide motivation for new measurements of such decays. A brief history of searches for rare annihilation modes of Ps is given. Recent experimental and theoretical developments are reviewed. Experiments currently being planned are discussed

  18. Entanglement-annihilating and entanglement-breaking channels

    OpenAIRE

    Moravčíková, Lenka; Ziman, Mario

    2010-01-01

    We introduce and investigate a family of entanglement-annihilating channels. These channels are capable of destroying any quantum entanglement within the system they act on. We show that they are not necessarily entanglement breaking. In order to achieve this result we analyze the subset of locally entanglement-annihilating channels. In this case, the same local noise applied to each subsystem individually is less entanglement-annihilating (with respect to multi-partite entanglement) than the n...

  19. Probability Representation of Quantum Mechanics: Comments and Bibliography

    OpenAIRE

    Man'ko, V. I.; Pilyavets, O. V.; Zborovskii, V. G.

    2006-01-01

    The probability representation of states in standard quantum mechanics, in which quantum states are associated with fair probability distributions (instead of a wave function or density matrix), is briefly reviewed, and a bibliography related to the probability representation is given.

  20. Branching-rate expansion around annihilating random walks.

    Science.gov (United States)

    Benitez, Federico; Wschebor, Nicolás

    2012-07-01

    We present some exact results for branching and annihilating random walks. We compute the nonuniversal threshold value of the annihilation rate for having a phase transition in the simplest reaction-diffusion system belonging to the directed percolation universality class. Also, we show that the accepted scenario for the appearance of a phase transition in the parity conserving universality class must be improved. In order to obtain these results we perform an expansion in the branching rate around pure annihilation, a theory without branching. This expansion is possible because we manage to solve pure annihilation exactly in any dimension. PMID:23005353
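
Pure annihilation A + A → 0 is indeed exactly solvable, and a well-known benchmark is that in one dimension the particle density decays as t^(-1/2). A minimal lattice simulation of that benchmark (ring size, initial density, and measurement times are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Annihilating random walks A + A -> 0 on a 1D ring: the density is known to
# decay as t^(-1/2) in one dimension.
L = 100000
pos = np.flatnonzero(rng.random(L) < 0.2)  # initial particle positions

def step(pos):
    # Each walker hops one site left or right; walkers meeting on a site
    # annihilate pairwise (an odd count leaves one survivor).
    pos = (pos + rng.choice((-1, 1), size=pos.size)) % L
    sites, counts = np.unique(pos, return_counts=True)
    return np.repeat(sites, counts % 2)

d100 = None
for t in range(1, 401):
    pos = step(pos)
    if t == 100:
        d100 = pos.size / L
d400 = pos.size / L

# For t^(-1/2) decay, quadrupling the time should roughly halve the density.
print(d100, d400 / d100)
```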

  1. Estimating Small Probabilities for Langevin Dynamics

    OpenAIRE

    Aristoff, David

    2012-01-01

    The problem of estimating small transition probabilities for overdamped Langevin dynamics is considered. A simplification of Girsanov's formula is obtained in which the relationship between the infinitesimal generator of the underlying diffusion and the change of probability measure corresponding to a change in the potential energy is made explicit. From this formula an asymptotic expression for transition probability densities is derived. Separately the problem of estimating the probability ...
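
The difficulty this paper addresses shows up already in the naive Monte Carlo baseline: estimating a small transition probability for Euler-Maruyama-discretized overdamped Langevin dynamics by brute-force sampling, whose relative error grows as the probability shrinks. The double-well potential and all parameters below are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Overdamped Langevin dynamics dX = -V'(X) dt + sqrt(2/beta) dW in the
# double-well potential V(x) = (x^2 - 1)^2, discretized with Euler-Maruyama.
# Naive estimate of the (small) probability that a path started in the left
# well at x = -1 ends with x > 0 after time T.
beta, dt, T, n_paths = 4.0, 1e-3, 1.0, 20000

def grad_v(x):
    return 4.0 * x * (x**2 - 1.0)

x = np.full(n_paths, -1.0)        # all paths start in the left well
noise_amp = np.sqrt(2.0 * dt / beta)
for _ in range(int(T / dt)):
    x += -grad_v(x) * dt + noise_amp * rng.standard_normal(n_paths)

p_hat = float(np.mean(x > 0.0))   # estimated transition probability
std_err = np.sqrt(p_hat * (1.0 - p_hat) / n_paths)
print(p_hat, std_err)  # relative error ~ 1/sqrt(p * n_paths) blows up as p shrinks
```

Girsanov-type reweighting, as in the paper, replaces this brute-force sampling with biased dynamics plus an explicit change-of-measure correction, taming the variance for rare transitions.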

  2. Hypernormal Densities

    OpenAIRE

    Giacomini, Raffaella; Gottschling, Andreas; Haefke, Christian; White, Halbert

    2002-01-01

    We derive a new family of probability densities that have the property of closed-form integrability. This flexible family finds a variety of applications, of which we illustrate density forecasting from models of the AR-ARCH class for U.S. inflation. We find that the hypernormal distribution for the model's disturbances leads to better density forecasts than the ones produced under the assumption that the disturbances are Normal or Student's t.

  3. Particle-antiparticle asymmetries from annihilations

    CERN Document Server

    Baldes, Iason; Petraki, Kalliopi; Volkas, Raymond R

    2014-01-01

    An extensively studied mechanism for creating particle-antiparticle asymmetries is the out-of-equilibrium, CP-violating decay of a heavy particle. Here we instead examine how asymmetries can arise purely from 2 → 2 annihilations rather than from the usual 1 → 2 decays and inverse decays. We review the general conditions on the reaction rates that arise from S-matrix unitarity and CPT invariance, and show how these are implemented in the context of a simple toy model. We formulate the Boltzmann equations for this model, and present an example solution.

  4. Apparatus for photon activation positron annihilation analysis

    Science.gov (United States)

    Akers, Douglas W.

    2007-06-12

    Non-destructive testing apparatus according to one embodiment of the invention comprises a photon source. The photon source produces photons having predetermined energies and directs the photons toward a specimen being tested. The photons from the photon source result in the creation of positrons within the specimen being tested. A detector positioned adjacent the specimen being tested detects gamma rays produced by annihilation of positrons with electrons. A data processing system operatively associated with the detector produces output data indicative of a lattice characteristic of the specimen being tested.

  5. Deuteron production in e⁺e⁻ annihilation

    International Nuclear Information System (INIS)

    We argue that in e⁺e⁻ annihilation (including Υ decay) deuteron production should be given by the overlap of the deuteron wave function with the wave function of a pn pair. The production rate depends sensitively upon the size of the production region. Taking into account the strong correlation between protons and neutrons, experimental results for Υ decay are consistent with the size expected in the Lund string-fragmentation model. A prediction is given for deuteron production in Z decay

  6. Non-Archimedean Probability

    OpenAIRE

    Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

    2011-01-01

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and zero- and unit-probability events pose no particular epistemological problems. We use a non-Archimedean field as the range of the probability function. As a result, the property of countable additivity in Kolmogorov's axiomatization o...

  7. Probability and paternity testing.

    OpenAIRE

    Elston, R C

    1986-01-01

    A probability can be viewed as an estimate of a variable that is sometimes 1 and sometimes 0. To have validity, the probability must equal the expected value of that variable. To have utility, the average squared deviation of the probability from the value of that variable should be small. It is shown that probabilities of paternity calculated by the use of Bayes' theorem under appropriate assumptions are valid, but they can vary in utility. In particular, a recently proposed probability of p...
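
    Elston's validity/utility distinction can be illustrated numerically: two probability assessments can both be valid (equal, on average, to the expected value of the 0/1 variable) while differing sharply in utility (the mean squared deviation from that variable). A hedged sketch, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each case i has a true probability p_i; the observed outcome x_i is 1 or 0.
# A probability is *valid* if it equals the expected value of x, and has more
# *utility* the smaller its mean squared deviation from x.
n = 100_000
p_true = rng.uniform(0.05, 0.95, n)
x = rng.binomial(1, p_true)

informative = p_true              # valid and case-specific
vague = np.full(n, x.mean())      # valid overall, but the same for every case

def mean_squared_deviation(p):
    return np.mean((p - x) ** 2)

msd_informative = mean_squared_deviation(informative)
msd_vague = mean_squared_deviation(vague)
```

    Both forecasts average to the observed frequency (validity), but the case-specific probabilities achieve the smaller squared deviation (utility).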

  8. Logical Probability Preferences

    OpenAIRE

    Saad, Emad

    2013-01-01

    We present a unified logical framework for representing and reasoning about both quantitative and qualitative probability preferences in probability answer set programming, called probability answer set optimization programs. The proposed framework is vital to allow defining quantitative probability preferences over the possible outcomes of qualitative preferences. We show the application of probability answer set optimization programs to a variant of the well-known nurse rostering problem, c...

  9. Alternans amplification following a two-stimulations protocol in a one-dimensional cardiac ionic model of reentry: from annihilation to double-wave quasiperiodic reentry

    CERN Document Server

    Comtois, P; Comtois, Philippe; Vinet, Alain

    2003-01-01

    Electrical pacing is a common procedure that is used in both experimental and clinical settings for studying and/or annihilating anatomical reentry. In a recent study [Comtois and Vinet, Chaos 12, 903 (2002)], new ways to terminate one-dimensional reentry were discovered using a simple protocol consisting of only two stimulations. Annihilation of the reentrant activity is much more probable via these new scenarios than via the usual local unidirectional block. This paper extends the previous study by examining how sensitive the new annihilation scenarios are to the pathway length. It follows that reentry can be stopped over a limited interval of pathway lengths, and that increasing the length beyond the upper limit of this interval leads to a transition to sustained double-wave reentry. A similar dynamical mechanism, labeled alternans amplification, is found to be responsible for both behaviors.

  10. Seismic Analysis of Large-scale Aqueduct Structures Based on the Probability Density Evolution Method

    Institute of Scientific and Technical Information of China (English)

    曾波; 邢彦富; 刘章军

    2014-01-01

    Using the orthogonal expansion method of random processes, the non-stationary seismic acceleration process is represented as a linear combination of standard orthogonal basis functions and standard orthogonal random variables. Then, using a random function, these standard orthogonal random variables in the orthogonal expansion are expressed as an orthogonal function of a basic random variable, so that the method can express the original earthquake ground-motion process through a single basic random variable. The orthogonal expansion-random function approach was used to generate 126 representative earthquake samples, and each representative sample was assigned a given probability. The 126 representative earthquake samples were combined with the probability density evolution method of stochastic dynamical systems, and the random seismic responses of large-scale aqueduct structures were investigated. Four cases were considered: aqueduct without water, aqueduct with water in the central trough, aqueduct with water in two side troughs, and aqueduct with water in three troughs; probability information on the seismic responses was obtained for each case. Moreover, using the proposed method, the seismic reliability of the aqueduct structures was efficiently calculated. This method provides a new and effective means for precise seismic analysis of large-scale aqueduct structures.
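
    The workflow this abstract describes (representative samples with assigned probabilities, each propagated deterministically, responses then weighted) can be sketched in miniature for a single-degree-of-freedom system. Everything here — the excitation model, the oscillator parameters, the failure threshold — is a hypothetical stand-in for the aqueduct model:

```python
import numpy as np

# 126 representative values of a single basic random variable Theta, each
# carrying an assigned probability (Gaussian-shaped weights, for illustration).
theta = np.linspace(-3.0, 3.0, 126)
w = np.exp(-theta ** 2 / 2)
w /= w.sum()

def peak_response(th, wn=5.0, zeta=0.05, dt=0.01, n=2000):
    """Peak displacement of a damped linear oscillator (semi-implicit Euler)
    under a toy ground acceleration scaled by the sample value th."""
    t = np.arange(n) * dt
    a_g = th * np.sin(2 * np.pi * 1.5 * t)   # hypothetical excitation sample
    u = v = peak = 0.0
    for ag in a_g:
        acc = -ag - 2 * zeta * wn * v - wn ** 2 * u
        v += acc * dt
        u += v * dt
        peak = max(peak, abs(u))
    return peak

# Propagate every representative sample deterministically, then weight.
peaks = np.array([peak_response(th) for th in theta])
mean_peak = float(np.dot(w, peaks))          # weighted response statistic
p_fail = float(w[peaks > 0.05].sum())        # e.g. an exceedance probability
```

    The full probability density evolution method additionally tracks the joint density of response and random variable through a generalized density evolution equation; this sketch only shows the sampling-and-weighting front end.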

  11. Search for dark matter annihilation in the Galactic Center with IceCube-79

    International Nuclear Information System (INIS)

    The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-) neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the self-annihilation cross-section, ⟨σA⟩, for WIMP masses ranging from 30 GeV up to 10 TeV, assuming cuspy (NFW) and flat-cored (Burkert) dark matter halo profiles, reaching down to ≅ 4 × 10⁻²⁴ cm³s⁻¹ and ≅ 2.6 × 10⁻²³ cm³s⁻¹ for the νν̄ channel, respectively. (orig.)

  12. Search for dark matter annihilation in the Galactic Center with IceCube-79

    Energy Technology Data Exchange (ETDEWEB)

    Aartsen, M.G.; Hill, G.C.; Robertson, S.; Whelan, B.J. [University of Adelaide, School of Chemistry and Physics, Adelaide, SA (Australia); Abraham, K.; Bernhard, A.; Coenders, S.; Gross, A.; Holzapfel, K.; Huber, M.; Jurkovic, M.; Krings, K.; Resconi, E.; Veenkamp, J. [Technische Universitaet Muenchen, Garching (Germany); Ackermann, M.; Berghaus, P.; Bernardini, E.; Bretz, H.P.; Cruz Silva, A.H.; Gluesenkamp, T.; Gora, D.; Jacobi, E.; Kaminsky, B.; Karg, T.; Middell, E.; Mohrmann, L.; Nahnhauer, R.; Schoenwald, A.; Shanidze, R.; Spiering, C.; Stasik, A.; Stoessl, A.; Strotjohann, N.L.; Terliuk, A.; Usner, M.; Yanez, J.P. [DESY, Zeuthen (Germany); Adams, J.; Brown, A.M. [University of Canterbury, Department of Physics and Astronomy, Private Bag 4800, Christchurch (New Zealand); Aguilar, J.A.; Heereman, D.; Meagher, K.; Meures, T.; O' Murchadha, A.; Pinat, E. [Universite Libre de Bruxelles, Science Faculty CP230, Brussels (Belgium); Ahlers, M.; Arguelles, C.; Beiser, E.; BenZvi, S.; Braun, J.; Chirkin, D.; Day, M.; Desiati, P.; Diaz-Velez, J.C.; Fadiran, O.; Fahey, S.; Feintzeig, J.; Ghorbani, K.; Gladstone, L.; Halzen, F.; Hanson, K.; Hoshina, K.; Jero, K.; Karle, A.; Kelley, J.L.; Kheirandish, A.; McNally, F.; Merino, G.; Middlemas, E.; Morse, R.; Richter, S.; Sabbatini, L.; Tobin, M.N.; Tosi, D.; Vandenbroucke, J.; Van Santen, J.; Wandkowsky, N.; Weaver, C.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wille, L. [Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Department of Physics, Madison, WI (United States); Ahrens, M.; Bohm, C.; Dumm, J.P.; Finley, C.; Flis, S.; Hulth, P.O.; Hultqvist, K.; Walck, C.; Wolf, M.; Zoll, M. [Oskar Klein Centre, Stockholm University, Department of Physics, Stockholm (Sweden); Altmann, D.; Classen, L.; Kappes, A.; Tselengidou, M. 
[Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen Centre for Astroparticle Physics, Erlangen (Germany); Anderson, T.; Arlen, T.C.; Dunkman, M.; Eagan, R.; Groh, J.C.; Huang, F.; Keivani, A.; Lanfranchi, J.L.; Quinnan, M.; Smith, M.W.E.; Stanisha, N.A.; Tesic, G. [Pennsylvania State University, Department of Physics, University Park, PA (United States); Archinger, M.; Baum, V.; Boeser, S.; Eberhardt, B.; Ehrhardt, T.; Koepke, L.; Kroll, G.; Luenemann, J.; Sander, H.G.; Schatto, K.; Wiebe, K. [University of Mainz, Institute of Physics, Mainz (Germany); Auffenberg, J.; Bissok, M.; Blumenthal, J.; Glagla, M.; Gier, D.; Gretskov, P.; Haack, C.; Hansmann, B.; Hellwig, D.; Kemp, J.; Konietz, R.; Koob, A.; Leuermann, M.; Leuner, J.; Paul, L.; Puetz, J.; Raedel, L.; Reimann, R.; Rongen, M.; Schimp, M.; Schoenen, S.; Schukraft, A.; Stahlberg, M.; Vehring, M.; Wallraff, M.; Wichary, C.; Wiebusch, C.H. [RWTH Aachen University, III. Physikalisches Institut, Aachen (Germany); Bai, X. [South Dakota School of Mines and Technology, Physics Department, Rapid City, SD (United States); Barwick, S.W.; Yodh, G. [University of California, Department of Physics and Astronomy, Irvine, CA (United States); Bay, R.; Filimonov, K.; Price, P.B.; Woschnagg, K. [University of California, Department of Physics, Berkeley, CA (United States); Beatty, J.J. [Ohio State University, Department of Physics and Center for Cosmology and Astro-Particle Physics, Columbus, OH (United States); Ohio State University, Department of Astronomy, Columbus, OH (United States); Becker Tjus, J.; Bos, F.; Eichmann, B.; Fedynitch, A.; Kroll, M.; Saba, S.M.; Schoeneberg, S. [Ruhr-Universitaet Bochum, Fakultaet fuer Physik and Astronomie, Bochum (Germany); Becker, K.H.; Bindig, D.; Fischer-Wasels, T.; Helbing, K.; Hickford, S.; Hoffmann, R.; Klaes, J.; Kopper, S.; Naumann, U.; Obertacke, A.; Omairat, A.; Posselt, J.; Soldin, D. 
[University of Wuppertal, Department of Physics, Wuppertal (Germany); Berley, D.; Blaufuss, E.; Cheung, E.; Christy, B.; Felde, J.; Hellauer, R.; Hoffman, K.D.; Huelsnitz, W.; Maunu, R.; Olivas, A.; Redl, P.; Schmidt, T.; Sullivan, G.W.; Wissing, H. [University of Maryland, Department of Physics, College Park, MD (United States); Besson, D.Z. [University of Kansas, Department of Physics and Astronomy, Lawrence, KS (United States); Binder, G.; Gerhardt, L.; Ha, C.; Klein, S.R.; Miarecki, S. [University of California, Department of Physics, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Boersma, D.J.; Botner, O.; Euler, S.; Hallgren, A.; Collaboration: IceCube Collaboration; and others

    2015-10-15

    The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-) neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the self-annihilation cross-section, ⟨σA⟩, for WIMP masses ranging from 30 GeV up to 10 TeV, assuming cuspy (NFW) and flat-cored (Burkert) dark matter halo profiles, reaching down to ≅ 4 × 10⁻²⁴ cm³s⁻¹ and ≅ 2.6 × 10⁻²³ cm³s⁻¹ for the νν̄ channel, respectively. (orig.)

  13. Search for Dark Matter Annihilation in the Galactic Center with IceCube-79

    CERN Document Server

    Aartsen, M G; Ackermann, M; Adams, J; Aguilar, J A; Ahlers, M; Ahrens, M; Altmann, D; Anderson, T; Archinger, M; Arguelles, C; Arlen, T C; Auffenberg, J; Bai, X; Barwick, S W; Baum, V; Bay, R; Beatty, J J; Tjus, J Becker; Becker, K -H; Beiser, E; BenZvi, S; Berghaus, P; Berley, D; Bernardini, E; Bernhard, A; Besson, D Z; Binder, G; Bindig, D; Bissok, M; Blaufuss, E; Blumenthal, J; Boersma, D J; Bohm, C; Börner, M; Bos, F; Bose, D; Böser, S; Botner, O; Braun, J; Brayeur, L; Bretz, H -P; Brown, A M; Buzinsky, N; Casey, J; Casier, M; Cheung, E; Chirkin, D; Christov, A; Christy, B; Clark, K; Classen, L; Coenders, S; Cowen, D F; Silva, A H Cruz; Daughhetee, J; Davis, J C; Day, M; de André, J P A M; De Clercq, C; Dembinski, H; De Ridder, S; Desiati, P; de Vries, K D; de Wasseige, G; de With, M; DeYoung, T; Díaz-Vélez, J C; Dumm, J P; Dunkman, M; Eagan, R; Eberhardt, B; Ehrhardt, T; Eichmann, B; Euler, S; Evenson, P A; Fadiran, O; Fahey, S; Fazely, A R; Fedynitch, A; Feintzeig, J; Felde, J; Filimonov, K; Finley, C; Fischer-Wasels, T; Flis, S; Fuchs, T; Glagla, M; Gaisser, T K; Gaior, R; Gallagher, J; Gerhardt, L; Ghorbani, K; Gier, D; Gladstone, L; Glüsenkamp, T; Goldschmidt, A; Golup, G; Gonzalez, J G; Góra, D; Grant, D; Gretskov, P; Groh, J C; Groß, A; Ha, C; Haack, C; Ismail, A Haj; Hallgren, A; Halzen, F; Hansmann, B; Hanson, K; Hebecker, D; Heereman, D; Helbing, K; Hellauer, R; Hellwig, D; Hickford, S; Hignight, J; Hill, G C; Hoffman, K D; Hoffmann, R; Holzapfe, K; Homeier, A; Hoshina, K; Huang, F; Huber, M; Huelsnitz, W; Hulth, P O; Hultqvist, K; In, S; Ishihara, A; Jacobi, E; Japaridze, G S; Jero, K; Jurkovic, M; Kaminsky, B; Kappes, A; Karg, T; Karle, A; Kauer, M; Keivani, A; Kelley, J L; Kemp, J; Kheirandish, A; Kiryluk, J; Kläs, J; Klein, S R; Kohnen, G; Koirala, R; Kolanoski, H; Konietz, R; Koob, A; Köpke, L; Kopper, C; Kopper, S; Koskinen, D J; Kowalski, M; Krings, K; Kroll, G; Kroll, M; Kunnen, J; Kurahashi, N; Kuwabara, T; Labare, M; Lanfranchi, J L; 
Larson, M J; Lesiak-Bzdak, M; Leuermann, M; Leuner, J; Lünemann, J; Madsen, J; Maggi, G; Mahn, K B M; Maruyama, R; Mase, K; Matis, H S; Maunu, R; McNally, F; Meagher, K; Medici, M; Meli, A; Menne, T; Merino, G; Meures, T; Miarecki, S; Middell, E; Middlemas, E; Miller, J; Mohrmann, L; Montaruli, T; Morse, R; Nahnhauer, R; Naumann, U; Niederhausen, H; Nowicki, S C; Nygren, D R; Obertacke, A; Olivas, A; Omairat, A; O'Murchadha, A; Palczewski, T; Pandya, H; Paul, L; Pepper, J A; Heros, C Pérez de los; Pfendner, C; Pieloth, D; Pinat, E; Posselt, J; Price, P B; Przybylski, G T; Pütz, J; Quinnan, M; Rädel, L; Rameez, M; Rawlins, K; Redl, P; Reimann, R; Relich, M; Resconi, E; Rhode, W; Richman, M; Richter, S; Riedel, B; Robertson, S; Rongen, M; Rott, C; Ruhe, T; Ruzybayev, B; Ryckbosch, D; Saba, S M; Sabbatini, L; Sander, H -G; Sandrock, A; Sandroos, J; Sarkar, S; Schatto, K; Scheriau, F; Schimp, M; Schmidt, T; Schmitz, M; Schoenen, S; Schöneberg, S; Schönwald, A; Schukraft, A; Schulte, L; Seckel, D; Seunarine, S; Shanidze, R; Smith, M W E; Soldin, D; Spiczak, G M; Spiering, C; Stahlberg, M; Stamatikos, M; Stanev, T; Stanisha, N A; Stasik, A; Stezelberger, T; Stokstad, R G; Stößl, A; Strahler, E A; Ström, R; Strotjohann, N L; Sullivan, G W; Sutherland, M; Taavola, H; Taboada, I; Ter-Antonyan, S; Terliuk, A; Tešić, G; Tilav, S; Toale, P A; Tobin, M N; Tosi, D; Tselengidou, M; Unger, E; Usner, M; Vallecorsa, S; van Eijndhoven, N; Vandenbroucke, J; van Santen, J; Vanheule, S; Veenkamp, J; Vehring, M; Voge, M; Vraeghe, M; Walck, C; Wallraff, M; Wandkowsky, N; Weaver, Ch; Wendt, C; Westerhoff, S; Whelan, B J; Whitehorn, N; Wichary, C; Wiebe, K; Wiebusch, C H; Wille, L; Williams, D R; Wissing, H; Wolf, M; Wood, T R; Woschnagg, K; Xu, D L; Xu, X W; Xu, Y; Yanez, J P; Yodh, G; Yoshida, S; Zarzhitsky, P; Zoll, M

    2015-01-01

    The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-) neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the self-annihilation cross-section, ⟨σA⟩, for WIMP ma...

  14. Dark Matter Annihilation and Decay Searches with the High Altitude Water Cherenkov (HAWC) Observatory

    CERN Document Server

    Harding, J Patrick

    2015-01-01

    In order to observe annihilation and decay of dark matter, several types of potential sources should be considered. Some sources, such as dwarf galaxies, are expected to have very low astrophysical backgrounds but fairly small dark matter densities. Other sources, like the Galactic center, are expected to have larger densities of dark matter but also more complicated backgrounds from other astrophysical sources. The large field of view of the HAWC detector, covering 2 sr at a time, is especially suited to searches for sources of dark matter annihilation and decay that are extended over several degrees on the sky. With sensitivity over 2/3 of the sky, HAWC has the ability to probe a large fraction of the sky for the signals of TeV-mass dark matter. In particular, HAWC should be the most sensitive experiment to signals coming from dark matter with masses greater than 10-100 TeV. We present the HAWC sensitivity to annihilating and decaying dark matter signals for sev...

  15. Dark Matter with multi-annihilation channels and AMS-02 positron excess and antiproton

    CERN Document Server

    Chen, Yu-Heng; Tseng, Po-Yan

    2015-01-01

    AMS-02 provided unprecedented statistics in the measurement of the positron fraction from cosmic rays. This may offer a unique opportunity to distinguish the positron spectrum coming from various dark matter (DM) annihilation channels, if DM is the source of this positron excess. Therefore, we consider the scenario that the DM can annihilate into leptonic, quark, and massive gauge boson channels simultaneously with floating branching ratios to test this hypothesis. We also study the impacts from MAX, MED, and MIN diffusion models as well as from isothermal, NFW, and Einasto DM density profiles on our results. We found that under this DM annihilation scenario it is difficult to fit the AMS-02 $e^+/(e^+ + e^-)$ data while evading the PAMELA p̄/p constraint, except for the combination of the MED diffusion model with the Einasto density profile, where a DM mass between 450 GeV and 1.2 TeV can satisfy both data sets at 95% CL. Finally, we compare to the newest AMS-02 antiproton data.

  16. Gaussian mixture particle probability hypothesis density based passive bearings-only multi-target tracking

    Institute of Scientific and Technical Information of China (English)

    张俊根; 姬红兵

    2011-01-01

    When the number of targets is unknown or varies with time, the multi-target state and the measurements are represented as random sets, and the multi-target tracking problem is addressed by recursively calculating the probability hypothesis density (PHD) of the joint distribution. However, the PHD admits no closed-form solution for the nonlinear problem that arises in passive bearings-only multi-target tracking. A new Gaussian mixture particle PHD (GMPPHD) filter is presented in this paper. The PHD is approximated by a mixture of Gaussians, which avoids clustering to determine target states, and a quasi-Monte Carlo (QMC) integration method is used to approximate the prediction and update distributions of the target states. Simulation results show the effectiveness of the proposed algorithm.
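
    For readers unfamiliar with the PHD recursion this record builds on, the linear-Gaussian special case admits a closed-form Gaussian-mixture prediction/update (the bearings-only problem in the paper is nonlinear and needs the quasi-Monte Carlo machinery instead). A minimal single-scan sketch with assumed model parameters; the integral of the PHD, i.e. the sum of mixture weights, estimates the expected number of targets:

```python
import numpy as np

# Assumed linear-Gaussian model: constant-velocity motion, position-only
# measurement, and illustrative survival/detection/clutter parameters.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
p_s, p_d, clutter = 0.99, 0.9, 1e-3

def predict(comps):
    """GM-PHD prediction: thin weights by survival, push moments through F."""
    return [(p_s * w, F @ m, F @ P @ F.T + Q) for w, m, P in comps]

def update(comps, zs):
    """GM-PHD update: missed-detection terms plus one Kalman-corrected
    component per (measurement, prior component) pair."""
    out = [((1 - p_d) * w, m, P) for w, m, P in comps]
    for z in zs:
        detected = []
        for w, m, P in comps:
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            r = z - H @ m                            # innovation
            lik = float(np.exp(-0.5 * r @ np.linalg.inv(S) @ r)
                        / np.sqrt((2 * np.pi) ** len(z) * np.linalg.det(S)))
            detected.append((p_d * w * lik, m + K @ r, (np.eye(2) - K @ H) @ P))
        norm = clutter + sum(w for w, _, _ in detected)
        out += [(w / norm, m, P) for w, m, P in detected]
    return out

comps = [(1.0, np.array([0.0, 1.0]), np.eye(2))]     # one expected target
comps = update(predict(comps), [np.array([1.1])])
n_hat = sum(w for w, _, _ in comps)                  # expected target count
```

    The GMPPHD filter of the record replaces these closed-form moment updates with particle approximations evaluated by QMC integration.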

  17. Local electron-electron interaction strength in ferromagnetic nickel determined by spin-polarized positron annihilation

    Science.gov (United States)

    Ceeh, Hubert; Weber, Josef Andreass; Böni, Peter; Leitner, Michael; Benea, Diana; Chioncel, Liviu; Ebert, Hubert; Minár, Jan; Vollhardt, Dieter; Hugenschmidt, Christoph

    2016-02-01

    We employ a positron annihilation technique, the spin-polarized two-dimensional angular correlation of annihilation radiation (2D-ACAR), to measure the spin-difference spectra of ferromagnetic nickel. The experimental data are compared with the theoretical results obtained within a combination of the local spin density approximation (LSDA) and the many-body dynamical mean-field theory (DMFT). We find that the self-energy defining the electronic correlations in Ni leads to anisotropic contributions to the momentum distribution. By direct comparison of the theoretical and experimental results we determine the strength of the local electronic interaction U in ferromagnetic Ni as 2.0 ± 0.1 eV.

  18. The study of defects in metallic alloys by positron annihilation spectroscopy

    International Nuclear Information System (INIS)

    Positron annihilation spectroscopy (PAS) has become a very useful non-destructive technique for the study of condensed matter, especially in the last two decades, with the advent of solid-state detectors and high-resolution time spectrometers. The basic information obtained with PAS in solid-state physics concerns the electronic structure of defect-free materials. However, positron annihilation techniques (lifetime, angular correlation and Doppler broadening) have been successfully applied to study crystal-lattice defects with lower-than-average electron density, such as vacancies, small vacancy clusters, etc. In this way, information has been obtained about vacancy formation and migration energies, dislocations, grain boundaries, solid-solid phase transformations and radiation damage. In this work, the positron lifetime technique is applied to study thermal effects on a fine-grained superplastic Al-Ca-Zn alloy and quenched-in defects in monocrystals of β Cu-Zn-Al alloy for several quenching temperatures. (Author)

  19. Agreeing Probability Measures for Comparative Probability Structures

    OpenAIRE

    Wakker, Peter

    1981-01-01

    It is proved that fine and tight comparative probability structures (where the set of events is assumed to be an algebra, not necessarily a $\sigma$-algebra) have agreeing probability measures. Although this was often claimed in the literature, all proofs the author encountered are not valid for the general case, but only for $\sigma$-algebras. Here the proof of Niiniluoto (1972) is supplemented. Furthermore an example is presented that reveals many misunderstandings in the literature. At the...

  20. Nucleon-antinucleon annihilation in chiral soliton model

    International Nuclear Information System (INIS)

    We investigate the annihilation process of nucleons in the chiral soliton model by the path integral method. A soliton-antisoliton pair is shown to decay into pions at a range of the order of 1 fm, defined by the S S̄ potential. The contribution of the annihilation channel to elastic scattering is discussed. (author). 14 refs, 1 fig

  1. CMB constraint on dark matter annihilation after Planck 2015

    Directory of Open Access Journals (Sweden)

    Masahiro Kawasaki

    2016-05-01

    We update the constraint on the dark matter annihilation cross section by using the recent measurements of the CMB anisotropy by the Planck satellite. We fully calculate the cascade of dark matter annihilation products and their effects on ionization, heating and excitation of the hydrogen, hence do not rely on any assumption on the energy fractions that cause these effects.

  2. CMB constraint on dark matter annihilation after Planck 2015

    OpenAIRE

    Masahiro Kawasaki; Kazunori Nakayama; Toyokazu Sekiguchi

    2016-01-01

    We update the constraint on the dark matter annihilation cross section by using the recent measurements of the CMB anisotropy by the Planck satellite. We fully calculate the cascade of dark matter annihilation products and their effects on ionization, heating and excitation of the hydrogen, hence do not rely on any assumption on the energy fractions that cause these effects.

  3. Studies of defects and defect agglomerates by positron annihilation spectroscopy

    DEFF Research Database (Denmark)

    Eldrup, Morten Mostgaard; Singh, B.N.

    1997-01-01

    A brief introduction to positron annihilation spectroscopy (PAS), and in particular to its use for defect studies in metals, is given. Positrons injected into a metal may become trapped in defects such as vacancies, vacancy clusters, voids, bubbles and dislocations and subsequently annihilate from... advantages of the use of PAS are pointed out. (C) 1997 Elsevier Science B.V.

  4. CMB Constraint on Dark Matter Annihilation after Planck 2015

    CERN Document Server

    Kawasaki, Masahiro; Sekiguchi, Toyokazu

    2015-01-01

    We update the constraint on the dark matter annihilation cross section by using the recent measurements of the CMB anisotropy by the Planck satellite. We fully calculate the cascade of dark matter annihilation products and their effects on ionization, heating and excitation of the hydrogen, hence do not rely on any assumption on the energy fractions that cause these effects.

  5. Cohomology of projective schemes: From annihilators to vanishing

    OpenAIRE

    Chardin, Marc

    2002-01-01

    We provide bounds on the Castelnuovo-Mumford regularity in terms of "defining equations" by using elements that annihilate some cohomology modules, inspired by works of Miyazaki, Nagel, Schenzel and Vogel. The elements in these annihilators are provided either by liaison or by tight closure theories. Our results hold in any characteristic.

  6. Nucleon-antinucleon annihilation in chiral soliton model

    International Nuclear Information System (INIS)

    We investigate the annihilation process of nucleons in the chiral soliton model by the path integral method. A soliton-antisoliton pair is shown to decay into mesons at a range of about 1 fm, defined by the S S̄ potential. The contribution of the annihilation channel to the elastic scattering is discussed

  7. Direct evidence for positron annihilation from shallow traps

    DEFF Research Database (Denmark)

    Linderoth, Søren; Hidalgo, C.

    1987-01-01

    For deformed Ag the temperature dependence of the positron lifetime parameters is followed between 12 and 300 K. Clear direct evidence for positron trapping and annihilation at shallow traps, with a positron binding energy of 9±2 meV and annihilation characteristics very similar to those in the...

  8. Impact of dark matter decays and annihilations on structure formation

    NARCIS (Netherlands)

    Mapelli, M.; Ripamonti, E.

    2007-01-01

    We derived the evolution of the energy deposition in the intergalactic medium (IGM) by different decaying (or annihilating) dark matter (DM) candidates. Heavy annihilating DM particles (with mass larger than a few GeV) have no influence on reionization and heating, even if we assume that a

  9. Neutrino signals from electroweak bremsstrahlung in solar WIMP annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Nicole F.; Brennan, Amelia J.; Jacques, Thomas D., E-mail: n.bell@unimelb.edu.au, E-mail: a.brennan@pgrad.unimelb.edu.au, E-mail: thomas.jacques@asu.edu [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Melbourne, Victoria 3010 (Australia)

    2012-10-01

    Bremsstrahlung of W and Z gauge bosons, or photons, can be an important dark matter annihilation channel. In many popular models in which the annihilation to a pair of light fermions is helicity suppressed, these bremsstrahlung processes can lift the suppression and thus become the dominant annihilation channels. The resulting dark matter annihilation products contain a large, energetic, neutrino component. We consider solar WIMP annihilation in the case where electroweak bremsstrahlung dominates, and calculate the resulting neutrino spectra. The flux consists of primary neutrinos produced in processes such as χχ → ν̄νZ and χχ → ν̄ℓW, and secondary neutrinos produced via the decays of gauge bosons and charged leptons. After dealing with the neutrino propagation and flavour evolution in the Sun, we consider the prospects for detection in neutrino experiments on Earth. We compare our signal with that for annihilation to W⁺W⁻, and show that, for a given annihilation rate, the bremsstrahlung annihilation channel produces a larger signal by a factor of a few.

  10. CMB constraint on dark matter annihilation after Planck 2015

    Science.gov (United States)

    Kawasaki, Masahiro; Nakayama, Kazunori; Sekiguchi, Toyokazu

    2016-05-01

    We update the constraint on the dark matter annihilation cross section by using the recent measurements of the CMB anisotropy by the Planck satellite. We fully calculate the cascade of dark matter annihilation products and their effects on ionization, heating and excitation of the hydrogen, hence do not rely on any assumption on the energy fractions that cause these effects.

  11. Polymeric coating degradation in accelerated weathering investigated by using positron annihilation spectroscopy

    International Nuclear Information System (INIS)

    Photo-degradation of a polyurethane-based topcoat induced by accelerated Xe-lamp irradiation is studied using Doppler broadened energy spectra (DBES) and positron annihilation lifetime (PAL) spectroscopy coupled with slow positron technique. Significant and similar variations of S-parameter and ortho-positronium intensity (I3) on the coating are observed as functions of depth and of exposure time. Cross-link densities have been measured by the solvent-swelling method. A correlation between the increase of crosslink density and a reduction of free-volume and hole fraction during the degradation is observed. (orig.)

  12. Constraints on dark matter annihilations from diffuse gamma-ray emission in the Galaxy

    International Nuclear Information System (INIS)

    Recent advances in γ-ray, cosmic-ray, infrared and radio astronomy have allowed us to develop a significantly better understanding of the galactic medium properties in the last few years. In this work, using the DRAGON code, which numerically solves the CR propagation equation, and calculating γ-ray emissivities in a 2-dimensional grid enclosing the Galaxy, we study in a self-consistent manner models for the galactic diffuse γ-ray emission. Our models are cross-checked against both the available CR and γ-ray data. We address the extent to which dark matter annihilations in the Galaxy can contribute to the diffuse γ-ray flux towards different directions on the sky. Moreover, we discuss the impact that astrophysical uncertainties of non-DM nature have on the derived γ-ray limits. Such uncertainties are related to the diffusion properties of the Galaxy, the interstellar gas and the interstellar radiation field energy densities. Light ∼ 10 GeV dark matter annihilating dominantly to hadrons is most strongly constrained by γ-ray observations towards the inner parts of the Galaxy and is influenced the most by assumptions on the gas distribution, while TeV-scale DM annihilating dominantly to leptons has its tightest constraints from observations towards the galactic center avoiding the galactic disk plane, with the main astrophysical uncertainty being the radiation field energy density. In addition, we present a method of deriving constraints on the dark matter distribution profile from the diffuse γ-ray spectra. These results critically depend on the assumed mass of the dark matter particles and the type of their final annihilation products

  13. Non-Archimedean Probability

    CERN Document Server

    Benci, Vieri; Wenmackers, Sylvia

    2011-01-01

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and zero- and unit-probability events pose no particular epistemological problems. We use a non-Archimedean field as the range of the probability function. As a result, the property of countable additivity in Kolmogorov's axiomatization of probability is replaced by a different type of infinite additivity.
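
    The flavor of NAP can be conveyed by the fair lottery on the natural numbers, the standard motivating example of this framework (sketched here from general knowledge of numerosity-based probability, not quoted from the paper): every ticket receives the same infinitesimal probability, and the probabilities still sum to one in the non-Archimedean field.

```latex
% Fair lottery on N: each singleton gets infinitesimal probability 1/alpha,
% where alpha is an infinite number playing the role of the numerosity of N.
P(\{n\}) = \frac{1}{\alpha}, \qquad
P(\mathbb{N}) = \sum_{n \in \mathbb{N}} P(\{n\}) = \alpha \cdot \frac{1}{\alpha} = 1,
\qquad 0 < P(\{n\}) < \varepsilon \quad \text{for every real } \varepsilon > 0.
```

    In classical Kolmogorov probability no such assignment exists: countable additivity forces either P({n}) = 0 for every n or a divergent total.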

  14. Elements of probability theory

    CERN Document Server

    Rumshiskii, L Z

    1965-01-01

    Elements of Probability Theory presents the methods of the theory of probability. This book is divided into seven chapters that discuss the general rule for the multiplication of probabilities, the fundamental properties of the subject matter, and the classical definition of probability. The introductory chapters deal with the functions of random variables; continuous random variables; numerical characteristics of probability distributions; the center of the probability distribution of a random variable; the definition of the law of large numbers; and the stability of the sample mean and the method of moments.
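
    The stability of the sample mean listed among the chapter topics above can be seen in a few lines (a toy die-rolling sketch, not taken from the book):

```python
import random

# Rolling a fair die: the sample mean stabilizes near the expectation 3.5
# as the number of rolls grows, illustrating the law of large numbers.
random.seed(0)

def sample_mean(n):
    return sum(random.randint(1, 6) for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```

    The deviation from 3.5 shrinks roughly like 1/sqrt(n), which is the quantitative content of the "stability" claim.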

  15. Evaluating probability forecasts

    CERN Document Server

    Lai, Tze Leung; Shen, David Bo; 10.1214/11-AOS902

    2012-01-01

    Probability forecasts of events are routinely used in climate predictions, in forecasting default probabilities on bank loans or in estimating the probability of a patient's positive response to treatment. Scoring rules have long been used to assess the efficacy of the forecast probabilities after observing the occurrence, or nonoccurrence, of the predicted events. We develop herein a statistical theory for scoring rules and propose an alternative approach to the evaluation of probability forecasts. This approach uses loss functions relating the predicted to the actual probabilities of the events and applies martingale theory to exploit the temporal structure between the forecast and the subsequent occurrence or nonoccurrence of the event.
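
    As an illustration of the scoring rules discussed above, the following sketch (a hypothetical forecaster scored with the Brier score, not code from the cited paper) shows the defining property of a proper scoring rule: the honest forecast of the true event probability scores better than a miscalibrated one.

```python
import random

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

random.seed(0)
true_p = 0.3  # hypothetical true probability of the event
outcomes = [1 if random.random() < true_p else 0 for _ in range(100_000)]

# A proper scoring rule gives a better (lower) expected score to the honest
# constant forecast p = 0.3 than to a biased constant forecast p = 0.5.
honest = brier_score([true_p] * len(outcomes), outcomes)
biased = brier_score([0.5] * len(outcomes), outcomes)
print(honest < biased)
```

    The martingale-based evaluation proposed in the paper goes further, comparing predicted to actual probabilities over time rather than averaging a single score.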

  16. Joint probability distributions for projection probabilities of random orthonormal states

    International Nuclear Information System (INIS)

    The quantum chaos conjecture applied to a finite dimensional quantum system implies that such a system has eigenstates that show similar statistical properties as the column vectors of random orthogonal or unitary matrices. Here, we consider the different probabilities for obtaining a specific outcome in a projective measurement, provided the system is in one of its eigenstates. We then give analytic expressions for the joint probability density for these probabilities, with respect to the ensemble of random matrices. In the case of the unitary group, our results can be applied, also, to the phenomenon of universal conductance fluctuations, where the same mathematical quantities describe partial conductances in a two-terminal mesoscopic scattering problem with a finite number of modes in each terminal. (paper)

  17. Joint probability distributions for projection probabilities of random orthonormal states

    Science.gov (United States)

    Alonso, L.; Gorin, T.

    2016-04-01

    The quantum chaos conjecture applied to a finite dimensional quantum system implies that such a system has eigenstates that show similar statistical properties as the column vectors of random orthogonal or unitary matrices. Here, we consider the different probabilities for obtaining a specific outcome in a projective measurement, provided the system is in one of its eigenstates. We then give analytic expressions for the joint probability density for these probabilities, with respect to the ensemble of random matrices. In the case of the unitary group, our results can be applied, also, to the phenomenon of universal conductance fluctuations, where the same mathematical quantities describe partial conductances in a two-terminal mesoscopic scattering problem with a finite number of modes in each terminal.
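
    The statistical claim above can be probed numerically. A minimal sketch, assuming only the standard construction of a Haar-random unitary via QR decomposition of a complex Gaussian matrix (an illustration, not code from the cited paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 8  # dimension of the toy Hilbert space

# Haar-random unitary via QR decomposition of a complex Gaussian matrix,
# with the usual phase correction taken from the diagonal of R.
z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
q, r = np.linalg.qr(z)
q = q * (np.diag(r) / np.abs(np.diag(r)))

state = q[:, 0]             # one column = one random orthonormal state
probs = np.abs(state) ** 2  # projection probabilities onto the measurement basis
print(probs.sum())          # unitarity forces the probabilities to sum to 1
```

    The joint density of these n projection probabilities, averaged over the unitary ensemble, is the quantity computed analytically in the record above.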

  18. Cumulants in Noncommutative Probability Theory III. Creation- and annihilation operators on Fock spaces

    OpenAIRE

    Lehner, Franz

    2002-01-01

    Fock space constructions give rise to natural exchangeable families and are thus well suited for cumulant calculations. In this paper we develop some general formulas and compute cumulants for generalized Toeplitz operators, notably for q-Fock spaces, previously considered by M. Anshelevich and A. Nica, and Fock spaces for characters of the infinite symmetric group, which were constructed by Bozejko and Guta. An expression for cumulants in terms of the cycle-cover polynomials of certain dire...

  19. Positronium formation and annihilation in BHDC/water/benzene microemulsions

    International Nuclear Information System (INIS)

    Positron lifetime measurements have been made on BHDC/water/benzene microemulsions at different water-to-surfactant molar ratios (w0) and different surfactant concentrations (CBHDC). Recently, a model has been proposed to explain positronium formation and annihilation in AOT/water/isooctane microemulsions, which states that the majority of positronium formation occurs in the aqueous cores of the reverse micelles, and that the o-Ps formed there diffuses out of the aqueous cores into the bulk organic solvent. The BHDC-based microemulsion system has been chosen in order to study the influence of the cationic head group and of the chloride counterion present in the aqueous pseudophase on the Ps formation probability and on the o-Ps diffusion out of the aqueous cores. The results of this study are satisfactorily explained by the referred model. The radii of the aqueous cores, calculated from the o-Ps lifetime measurements for different w0 values, are in good agreement with the values reported in the literature. The measured o-Ps intensities in both the aqueous pseudophase and the organic phase were found to be in agreement with those calculated by taking into account the partial inhibition due to the chloride counterions present in the aqueous cores. (orig.)

  20. State-selective high-energy excitation of nuclei by resonant positron annihilation

    Directory of Open Access Journals (Sweden)

    Nikolay A. Belov

    2015-02-01

    In the annihilation of a positron with a bound atomic electron, the virtual γ photon created may excite the atomic nucleus. We put forward this effect as a spectroscopic tool for an energy-selective excitation of nuclear transitions. This scheme can efficiently populate nuclear levels of arbitrary multipolarities in the MeV regime, including giant resonances and monopole transitions. In certain cases, it may have higher cross sections than the conventionally used Coulomb excitation and it can even occur with high probability when the latter is energetically forbidden.

  1. State-selective high-energy excitation of nuclei by resonant positron annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Belov, Nikolay A., E-mail: belov@mpi-hd.mpg.de; Harman, Zoltán

    2015-02-04

    In the annihilation of a positron with a bound atomic electron, the virtual γ photon created may excite the atomic nucleus. We put forward this effect as a spectroscopic tool for an energy-selective excitation of nuclear transitions. This scheme can efficiently populate nuclear levels of arbitrary multipolarities in the MeV regime, including giant resonances and monopole transitions. In certain cases, it may have higher cross sections than the conventionally used Coulomb excitation and it can even occur with high probability when the latter is energetically forbidden.

  2. A reanalysis of B0-anti B0 mixing in e+e- annihilation at 29 GeV

    International Nuclear Information System (INIS)

    Data taken by the Mark II detector at the PEP storage ring was used to measure the rate of dilepton production in multihadronic events produced by e+e- annihilation at √s=29 GeV. We determine the probability that a hadron initially containing a b (anti b) quark decays to a positive (negative) lepton to be 0.17-0.08+0.15, with 90% confidence level limits of 0.06 and 0.38. (orig.)

  3. Estimating extreme flood probabilities

    International Nuclear Information System (INIS)

    Estimates of the exceedance probabilities of extreme floods are needed for the assessment of flood hazard at Department of Energy facilities. A new approach using a joint probability distribution of extreme rainfalls and antecedent soil moisture conditions, along with a rainfall runoff model, provides estimates of probabilities for floods approaching the probable maximum flood. This approach is illustrated for a 570 km2 catchment in Wisconsin and a 260 km2 catchment in Tennessee
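
    The joint-probability approach described above can be caricatured with a Monte Carlo sketch. All distributions, the runoff model, and the threshold below are toy assumptions for illustration, not values from the study:

```python
import random

random.seed(1)

# Sample rainfall depth and antecedent soil moisture jointly, map them to a
# peak flow with a crude runoff model, and estimate the exceedance
# probability as the fraction of simulated floods above a threshold.
def sample_rainfall():
    return random.expovariate(1 / 50.0)      # rainfall depth, mean 50 mm (toy)

def sample_soil_moisture():
    return random.betavariate(2.0, 5.0)      # mostly dry antecedent soils (toy)

def peak_flow(rain_mm, moisture):
    runoff_coeff = 0.2 + 0.7 * moisture      # wetter soil -> more runoff
    return runoff_coeff * rain_mm            # toy rainfall-runoff model

n = 200_000
threshold = 120.0                            # "extreme flood" level (toy)
exceed = sum(peak_flow(sample_rainfall(), sample_soil_moisture()) > threshold
             for _ in range(n))
print(f"estimated exceedance probability: {exceed / n:.2e}")
```

    Sampling moisture jointly with rainfall is the point of the method: the same rainfall produces a much larger flood on wet soil, so the tail probability cannot be obtained from the rainfall distribution alone.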

  4. Introduction to probability

    CERN Document Server

    Roussas, George G

    2006-01-01

    Roussas's Introduction to Probability features exceptionally clear explanations of the mathematics of probability theory and explores its diverse applications through numerous interesting and motivational examples. It provides a thorough introduction to the subject for professionals and advanced students taking their first course in probability. The content is based on the introductory chapters of Roussas's book, An Introduction to Probability and Statistical Inference, with additional chapters and revisions. Written by a well-respected author known for great exposition an

  5. Probability on real Lie algebras

    CERN Document Server

    Franz, Uwe

    2016-01-01

    This monograph is a progressive introduction to non-commutativity in probability theory, summarizing and synthesizing recent results about classical and quantum stochastic processes on Lie algebras. In the early chapters, focus is placed on concrete examples of the links between algebraic relations and the moments of probability distributions. The subsequent chapters are more advanced and deal with Wigner densities for non-commutative couples of random variables, non-commutative stochastic processes with independent increments (quantum Lévy processes), and the quantum Malliavin calculus. This book will appeal to advanced undergraduate and graduate students interested in the relations between algebra, probability, and quantum theory. It also addresses a more advanced audience by covering other topics related to non-commutativity in stochastic calculus, Lévy processes, and the Malliavin calculus.

  6. Dependent Probability Spaces

    Science.gov (United States)

    Edwards, William F.; Shiflett, Ray C.; Shultz, Harris

    2008-01-01

    The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…

  7. Interpretations of probability

    CERN Document Server

    Khrennikov, Andrei

    2009-01-01

    This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.

  8. Raman Cooling of Solids through Photonic Density of States Engineering

    CERN Document Server

    Chen, Yin-Chung

    2015-01-01

    The laser cooling of vibrational states of solids has been achieved through photoluminescence in rare-earth elements, optical forces in optomechanics, and the Brillouin scattering light-sound interaction. The net cooling of solids through spontaneous Raman scattering, and laser refrigeration of indirect band gap semiconductors, both remain unsolved challenges. Here, we analytically show that photonic density of states (DoS) engineering can address the two fundamental requirements for achieving spontaneous Raman cooling: suppressing the dominance of Stokes (heating) transitions, and the enhancement of anti-Stokes (cooling) efficiency beyond the natural optical absorption of the material. We develop a general model for the DoS modification to spontaneous Raman scattering probabilities, and elucidate the necessary and minimum condition required for achieving net Raman cooling. With a suitably engineered DoS, we establish the enticing possibility of refrigeration of intrinsic silicon by annihilating phonons from ...

  9. Laboratory-Tutorial activities for teaching probability

    CERN Document Server

    Wittmann, M C; Morgan, J T; Feeley, Roger E.; Morgan, Jeffrey T.; Wittmann, Michael C.

    2006-01-01

    We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called Intuitive Quantum Physics. Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We describe a set of activities used to teach concepts of probability and probability density. Rudimentary knowledge of mechanics is needed for one activity, but otherwise the material requires no additional preparation. Extensions of the activities include relating probability density to potential energy graphs for certain "touchstone" examples. Students have difficulties learning the target concepts, such as comparing the ratio of time in a region to total time in all regions. Instead, they often focus on edge effects, pattern match to previously studied situations, reason about necessary but incomplete macroscopic elements of the system, use the gambler's fallacy, and use expectati...
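
    The "ratio of time in a region to total time" idea that students struggle with can be demonstrated directly. A sketch for a classical harmonic oscillator (the amplitude and regions are illustrative, not from the course materials):

```python
import math
import random

# A mass on a spring spends more time near the turning points, where it moves
# slowly. Sampling the position x(t) = A*sin(t) at uniformly random times gives
# the classical probability density: the fraction of samples in a region equals
# the ratio of time spent there to the total time.
random.seed(0)
A = 1.0
samples = [A * math.sin(random.uniform(0, 2 * math.pi)) for _ in range(200_000)]

def time_fraction(lo, hi):
    return sum(lo <= x < hi for x in samples) / len(samples)

edge = time_fraction(0.8, 1.0)      # near a turning point
middle = time_fraction(-0.1, 0.1)   # near the center, where the mass is fastest
print(edge > middle)  # the oscillator is more likely to be found near the edges
```

    The same time-fraction reasoning carries over to the potential-energy "touchstone" examples mentioned above: slow regions of the motion are high-probability regions.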

  10. Philosophy and probability

    CERN Document Server

    Childers, Timothy

    2013-01-01

    Probability is increasingly important for our understanding of the world. What is probability? How do we model it, and how do we use it? Timothy Childers presents a lively introduction to the foundations of probability and to philosophical issues it raises. He keeps technicalities to a minimum, and assumes no prior knowledge of the subject. He explains the main interpretations of probability-frequentist, propensity, classical, Bayesian, and objective Bayesian-and uses stimulatingexamples to bring the subject to life. All students of philosophy will benefit from an understanding of probability,

  11. Introduction to probability models

    CERN Document Server

    Ross, Sheldon M

    2006-01-01

    Introduction to Probability Models, Tenth Edition, provides an introduction to elementary probability theory and stochastic processes. There are two approaches to the study of probability theory. One is heuristic and nonrigorous, and attempts to develop in students an intuitive feel for the subject that enables him or her to think probabilistically. The other approach attempts a rigorous development of probability by using the tools of measure theory. The first approach is employed in this text. The book begins by introducing basic concepts of probability theory, such as the random v

  12. Probability and Quantum Paradigms: the Interplay

    International Nuclear Information System (INIS)

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology

  13. Probability and Quantum Paradigms: the Interplay

    Science.gov (United States)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  14. Current-induced spin polarization on a Pt surface: A new approach using spin-polarized positron annihilation spectroscopy

    International Nuclear Information System (INIS)

    Transversely spin-polarized positrons were injected near Pt and Au surfaces under an applied electric current. The three-photon annihilation of spin-triplet positronium, which was emitted from the surfaces into vacuum, was observed. When the positron spin polarization was perpendicular to the current direction, the maximum asymmetry of the three-photon annihilation intensity was observed upon current reversal for the Pt surfaces, whereas it was significantly reduced for the Au surface. The experimental results suggest that electrons near the Pt surfaces were in-plane and transversely spin-polarized with respect to the direction of the electric current. The maximum electron spin polarization was estimated to be more than 0.01 (1%). - Highlights: • Annihilation probability of positronium emitted from the Pt surface into the vacuum under direct current exhibited asymmetry upon current reversal. • The maximum asymmetry appeared when positron spin polarization and the direct current were perpendicular to each other. • Electrons near the Pt surfaces were in-plane and transversely spin-polarized with respect to the direction of the electric current. • Spin-polarized positronium annihilation provides a unique tool for investigating spin polarization on metal surfaces

  15. Simulation of the annihilation emission of galactic positrons

    International Nuclear Information System (INIS)

    Positrons annihilate in the central region of our Galaxy. This has been known since the detection of a strong emission line centered on an energy of 511 keV in the direction of the Galactic center. This gamma-ray line is emitted during the annihilation of positrons with electrons from the interstellar medium. The spectrometer SPI, onboard the INTEGRAL observatory, performed spatial and spectral analyses of the positron annihilation emission. This thesis presents a study of the Galactic positron annihilation emission based on models of the different interactions undergone by positrons in the interstellar medium. The models rely on our present knowledge of the properties of the interstellar medium in the Galactic bulge, where most of the positrons annihilate, and of the physics of positrons (production, propagation and annihilation processes). In order to obtain constraints on the positron sources and physical characteristics of the annihilation medium, we compared the results of the models to measurements provided by the SPI spectrometer. (author)

  16. On the Sunyaev-Zel'dovich effect from dark matter annihilation or decay in galaxy clusters

    CERN Document Server

    Lavalle, Julien; Barthes, Julien

    2009-01-01

    We revisit the prospects for detecting the Sunyaev Zel'dovich (SZ) effect induced by dark matter (DM) annihilation or decay. We show that with standard (or even extreme) assumptions for properties of the DM particles and the DM halo profile, the optical depth associated with the relativistic electrons injected from DM annihilation or decay is much smaller than that associated with the thermal electrons, when averaged over the angular resolution of current and future experiments. For example we find: $\tau_{\rm DM} \sim 10^{-7}-10^{-6}$ for $m_\chi = 1$ GeV and a density profile $\rho\propto r^{-1}$ for a template cluster located at 50 Mpc and observed within an angular resolution of $10''$, compared to $\tau_{\rm th}\sim 10^{-3}-10^{-2}$. This, together with a full spectral analysis, enables us to demonstrate that, for a template cluster with properties close to those of the nearby ones, the SZ effect due to DM annihilation or decay is far below the sensitivity of the Planck satellite. This is at variance with ...

  17. A new scalar mediated WIMPs with pairs of on-shell mediators in annihilations

    CERN Document Server

    Jia, Lian-Bao

    2016-01-01

    In this article, we focus on a new scalar $\phi$ mediating scalar/vectorial WIMPs (weakly interacting massive particles). To explain the Galactic center 1 - 3 GeV gamma-ray excess, here we consider the case that a WIMP pair predominantly annihilates into an on-shell $\phi \phi$ pair which mainly decays to $\tau \bar{\tau}$, with masses of WIMPs in a range of about 14 - 22 GeV. For the mass of $\phi$ slightly below the WIMP mass, the annihilations of WIMPs are phase-space suppressed today, and the required thermally averaged annihilation cross section of WIMPs can be derived to meet the GeV gamma-ray excess. A small scalar mediator-Higgs field mixing is introduced, which is available in interpreting the GeV gamma-ray excess. With the constraints from the dark matter relic density, the indirect detection results, the collider experiments, the thermal equilibrium of the early universe and the dark matter direct detection experiments taken into account, we find there are parameter spaces left. The WIMPs may be detectable at th...

  18. Multi-photon creation and single-photon annihilation of electron-positron pairs

    International Nuclear Information System (INIS)

    In this thesis we study multi-photon e+e- pair production in a trident process, and single-photon e+e- pair annihilation in a triple interaction. The pair production is considered in the collision of a relativistic electron with a strong laser beam, and calculated within the theory of laser-dressed quantum electrodynamics. A regularization method is developed systematically for the resonance problem arising in the multi-photon process. Total production rates, positron spectra, and relative contributions of different reaction channels are obtained in various interaction regimes. Our calculation shows good agreement with existing experimental data from SLAC, and adds further insights into the experimental findings. Besides, we study the process in a manifestly nonperturbative domain, whose accessibility to future all-optical experiments based on laser acceleration is shown. In the single-photon e+e- pair annihilation, the recoil momentum is absorbed by a spectator particle. Various kinematic configurations of the three incoming particles are examined. Under certain conditions, the emitted photon exhibits distinct angular and polarization distributions which could facilitate the detection of the process. Considering an equilibrium relativistic e+e- plasma, it is found that the single-photon process becomes the dominant annihilation channel for plasma temperatures above 3 MeV. Multi-particle correlation effects are therefore essential for the e+e- dynamics at very high density. (orig.)

  19. Multi-photon creation and single-photon annihilation of electron-positron pairs

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Huayu

    2011-04-27

    In this thesis we study multi-photon e+e- pair production in a trident process, and single-photon e+e- pair annihilation in a triple interaction. The pair production is considered in the collision of a relativistic electron with a strong laser beam, and calculated within the theory of laser-dressed quantum electrodynamics. A regularization method is developed systematically for the resonance problem arising in the multi-photon process. Total production rates, positron spectra, and relative contributions of different reaction channels are obtained in various interaction regimes. Our calculation shows good agreement with existing experimental data from SLAC, and adds further insights into the experimental findings. Besides, we study the process in a manifestly nonperturbative domain, whose accessibility to future all-optical experiments based on laser acceleration is shown. In the single-photon e+e- pair annihilation, the recoil momentum is absorbed by a spectator particle. Various kinematic configurations of the three incoming particles are examined. Under certain conditions, the emitted photon exhibits distinct angular and polarization distributions which could facilitate the detection of the process. Considering an equilibrium relativistic e+e- plasma, it is found that the single-photon process becomes the dominant annihilation channel for plasma temperatures above 3 MeV. Multi-particle correlation effects are therefore essential for the e+e- dynamics at very high density. (orig.)

  20. Optical and microstructural characterization of porous silicon using photoluminescence, SEM and positron annihilation spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, C K [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Nahid, F [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Cheng, C C [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Beling, C D [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Fung, S [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Ling, C C [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Djurisic, A B [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Pramanik, C [Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032 (India); Saha, H [Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032 (India); Sarkar, C K [Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032 (India)

    2007-12-05

    We have studied the dependence of porous silicon morphology and porosity on fabrication conditions. N-type (100) silicon wafers with resistivity of 2-5 Ω cm were electrochemically etched at various current densities and anodization times. Surface morphology and the thickness of the samples were examined by scanning electron microscopy (SEM). Detailed information on the porous silicon layer morphology with variation of preparation conditions was obtained by positron annihilation spectroscopy (PAS): the depth-defect profile and open-pore interconnectivity at the sample surface have been studied using a slow positron beam. Coincidence Doppler broadening spectroscopy (CDBS) was used to study the chemical environment of the samples. The presence of silicon micropores with diameters varying from 1.37 to 1.51 nm was determined by positron annihilation lifetime spectroscopy (PALS). Visible luminescence from the samples was observed, which is considered to be a combined effect of quantum confinement and of Si=O double bond formation near the SiO2/Si interface, according to the results from photoluminescence (PL) and positron annihilation spectroscopy measurements. The work shows that the study of the positronium formed when a positron is implanted into the porous surface provides valuable information on the pore distribution and open-pore interconnectivity, which suggests that positron annihilation spectroscopy is a useful tool for the characterization of porous silicon micropores.

  1. Lagrangian Probability Distributions of Turbulent Flows

    OpenAIRE

    Friedrich, R.

    2002-01-01

    We outline a statistical theory of turbulence based on the Lagrangian formulation of fluid motion. We derive a hierarchy of evolution equations for Lagrangian N-point probability distributions as well as a functional equation for a suitably defined probability functional which is the analog of Hopf's functional equation. Furthermore, we address the derivation of a generalized Fokker-Planck equation for the joint velocity - position probability density of N fluid particles.

  2. Probability of Failure in Random Vibration

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    1988-01-01

    Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out-crossing ... as well as for bimodal processes with two dominating frequencies in the structural response.
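
    A brute-force Monte Carlo estimate gives a feel for the first-passage probability discussed above. The barrier level, time horizon, and Ornstein-Uhlenbeck response model below are illustrative assumptions, not the integral-equation method of the paper:

```python
import math
import random

# Monte Carlo estimate of a first-passage probability: the chance that a
# randomly vibrating response crosses a barrier b at least once within time T.
# The response is modeled as a discretized Ornstein-Uhlenbeck process with
# unit stationary standard deviation; all parameters are illustrative.
random.seed(7)
b, T, dt = 3.0, 10.0, 0.01
theta, sigma = 1.0, math.sqrt(2.0)  # stationary std = sigma / sqrt(2*theta) = 1
steps = int(T / dt)

def crosses_barrier():
    x = 0.0
    for _ in range(steps):
        x += -theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        if x >= b:
            return True
    return False

trials = 2_000
p_fail = sum(crosses_barrier() for _ in range(trials)) / trials
print(f"estimated first-passage probability within T={T}: {p_fail:.3f}")
```

    Integral equation methods replace this expensive simulation with an analytic approximation built from the barrier out-crossing statistics.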

  3. Recent Developments in Applied Probability and Statistics

    CERN Document Server

    Devroye, Luc; Kohler, Michael; Korn, Ralf

    2010-01-01

    This book presents surveys on recent developments in applied probability and statistics. The contributions include topics such as nonparametric regression and density estimation, option pricing, probabilistic methods for multivariate interpolation, robust graphical modelling and stochastic differential equations. Due to its broad coverage of different topics the book offers an excellent overview of recent developments in applied probability and statistics.

  4. Constraints on dark matter annihilation in clusters of galaxies with the Fermi large area telescope

    International Nuclear Information System (INIS)

    Nearby clusters and groups of galaxies are potentially bright sources of high-energy gamma-ray emission resulting from the pair-annihilation of dark matter particles. However, no significant gamma-ray emission has been detected so far from clusters in the first 11 months of observations with the Fermi Large Area Telescope. We interpret this non-detection in terms of constraints on dark matter particle properties. In particular for leptonic annihilation final states and particle masses greater than ∼ 200 GeV, gamma-ray emission from inverse Compton scattering of CMB photons is expected to dominate the dark matter annihilation signal from clusters, and our gamma-ray limits exclude large regions of the parameter space that would give a good fit to the recent anomalous Pamela and Fermi-LAT electron-positron measurements. We also present constraints on the annihilation of more standard dark matter candidates, such as the lightest neutralino of supersymmetric models. The constraints are particularly strong when including the fact that clusters are known to contain substructure at least on galaxy scales, increasing the expected gamma-ray flux by a factor of ∼ 5 over a smooth-halo assumption. We also explore the effect of uncertainties in cluster dark matter density profiles, finding a systematic uncertainty in the constraints of roughly a factor of two, but similar overall conclusions. In this work, we focus on deriving limits on dark matter models; a more general consideration of the Fermi-LAT data on clusters and clusters as gamma-ray sources is forthcoming

  5. Minimal semi-annihilating ZN scalar dark matter

    International Nuclear Information System (INIS)

    We study the dark matter from an inert doublet and a complex scalar singlet stabilized by ZN symmetries. This field content is the minimal one that allows dimensionless semi-annihilation couplings for N > 2. We consider explicitly the Z3 and Z4 cases and take into account constraints from perturbativity, unitarity, vacuum stability, necessity for the electroweak ZN preserving vacuum to be the global minimum, electroweak precision tests, upper limits from direct detection and properties of the Higgs boson. Co-annihilation and semi-annihilation of dark sector particles as well as dark matter conversion significantly modify the cosmic abundance and direct detection phenomenology

  6. Minimal semi-annihilating ℤN scalar dark matter

    International Nuclear Information System (INIS)

    We study the dark matter from an inert doublet and a complex scalar singlet stabilized by ℤN symmetries. This field content is the minimal one that allows dimensionless semi-annihilation couplings for N>2. We consider explicitly the ℤ3 and ℤ4 cases and take into account constraints from perturbativity, unitarity, vacuum stability, necessity for the electroweak ℤN preserving vacuum to be the global minimum, electroweak precision tests, upper limits from direct detection and properties of the Higgs boson. Co-annihilation and semi-annihilation of dark sector particles as well as dark matter conversion significantly modify the cosmic abundance and direct detection phenomenology

  7. Determination of the 3γ fraction from positron annihilation in mesoporous materials for symmetry violation experiment with J-PET scanner

    CERN Document Server

    Jasińska, B; Wiertel, M; Zaleski, R; Alfs, D; Bednarski, T; Białas, P; Czerwiński, E; Dulski, K; Gajos, A; Głowacz, B; Kamińska, D; Kapłon, Ł; Korcyl, G; Kowalski, P; Kozik, T; Krzemień, W; Kubicz, E; Mohammed, M; Niedźwiecki, Sz; Pałka, M; Raczyński, L; Rudy, Z; Rundel, O; Sharma, N G; Silarski, M; Słomski, A; Strzelecki, A; Wieczorek, A; Wiślicki, W; Zgardzińska, B; Zieliński, M; Moskal, P

    2016-01-01

    Various mesoporous materials were investigated to choose the best material for experiments requiring a high yield of long-lived positronium. We found that the fraction of 3γ annihilation, determined using γ-ray energy spectra and positron annihilation lifetime (PAL) spectra, varied from 20% to 25%. The 3γ fraction and the o-Ps formation probability were found to be largest in the polymer XAD-4. Elemental analysis performed using a scanning electron microscope (SEM) equipped with energy-dispersive X-ray spectroscopy (EDS) showed the high purity of the investigated materials.

  8. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    International Nuclear Information System (INIS)

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment was applied. Furthermore, the Fermi–Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories which have been developed and which give better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of these new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi–Teller Z-law, even though experimental data conclude that the Z-law inadequately describes annihilation on compounds. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds
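
The Fermi–Teller Z-law referenced above takes the capture probability of a stopped negative projectile on each constituent of a compound to be proportional to the number of atoms times their nuclear charge Z. A minimal sketch of that weighting (the function and the water example are illustrative, not taken from the paper):

```python
# Fermi-Teller Z-law: capture probability on each constituent of a
# compound is proportional to (number of atoms) * Z.
def z_law_capture_fractions(composition):
    """composition: dict element -> (count, Z). Returns capture fractions."""
    weights = {el: n * z for el, (n, z) in composition.items()}
    total = sum(weights.values())
    return {el: w / total for el, w in weights.items()}

# Water, H2O: hydrogen (2 atoms, Z=1), oxygen (1 atom, Z=8)
fractions = z_law_capture_fractions({"H": (2, 1), "O": (1, 8)})
print(fractions)  # {'H': 0.2, 'O': 0.8}
```

As the abstract notes, measured capture on light elements such as hydrogen in compounds deviates from this linear-in-Z estimate, which is what motivates the alternative capture theories tested there.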

  9. Did something decay, evaporate, or annihilate during big bang nucleosynthesis?

    International Nuclear Information System (INIS)

    Results of a detailed examination of the cascade nucleosynthesis resulting from the putative hadronic decay, evaporation, or annihilation of a primordial relic during the big bang nucleosynthesis (BBN) era are presented. It is found that injection of energetic nucleons around cosmic time 10³ s may lead to an observationally favored reduction of the primordial ⁷Li/H yield by a factor of 2-3. Moreover, such sources also generically predict the production of the ⁶Li isotope with magnitude close to the as yet unexplained high ⁶Li abundances in low-metallicity stars. The simplest of these models operates at a fractional contribution to the baryon density Ωb h² ≳ 0.025, slightly larger than that inferred from standard BBN. Though further study is required, such sources, as, for example, the decay of the next-to-lightest supersymmetric particle into GeV gravitinos or the decay of an unstable TeV-range gravitino with abundance ΩG h² ∼ 5×10⁻⁴, show promise to explain both the ⁶Li and ⁷Li abundances in low-metallicity stars

  10. Positron annihilation and magnetic properties studies of copper substituted nickel ferrite nanoparticles

    Science.gov (United States)

    Kargar, Z.; Asgarian, S. M.; Mozaffari, M.

    2016-05-01

    Single-phase copper-substituted nickel ferrite Ni1-xCuxFe2O4 (x = 0.0, 0.1, 0.3 and 0.5) nanoparticles were synthesized by the sol-gel method. TEM images of the samples confirm the formation of nano-sized particles. Rietveld refinement of the X-ray diffraction patterns showed that the lattice constant increases with copper content, from 8.331 Å for x = 0.0 to 8.355 Å for x = 0.5. The cation distribution of the samples was determined from the occupancy factors obtained in the Rietveld refinement. The positron lifetime spectra of the samples were deconvoluted into three lifetime components. The shortest lifetime is due to positrons that are not trapped by vacancy defects. The second lifetime is ascribed to annihilation of positrons in the tetrahedral (A) and octahedral (B) sites of the spinel structure. For the x = 0.1 and 0.3 samples positrons are trapped within vacancies in A sites, whereas for x = 0.0 and 0.5 they are trapped and annihilated within occupied B sites. The longest lifetime component is attributed to annihilation of positrons in the free volume between nanoparticles. The results obtained from coincidence Doppler broadening spectroscopy (CDBS) confirmed those of positron annihilation lifetime spectroscopy (PALS) and also showed that the vacancy-cluster concentration is higher for x = 0.3 than in the other samples. The average defect density in the samples, determined from the mean lifetime of annihilated positrons, shows that the vacancy concentration is maximal for x = 0.3. The magnetic measurements showed that the saturation magnetization is maximal for x = 0.3, which can be explained by Néel's theory. The coercivity of the nanoparticles increased with copper content; this increase is ascribed to the change in the anisotropy constant caused by the higher average defect density upon substitution of Cu2+ cations, and to the magnetocrystalline anisotropy of Cu2+ cations. The Curie temperature of the samples decreases with increasing copper content

  11. Choice Probability Generating Functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications.
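
The defining property of a CPGF — its gradient gives the choice probabilities — can be checked numerically in the simplest ARUM, the multinomial logit, whose CPGF is the log-sum-exp function. A sketch of that standard textbook case (not code from the paper):

```python
import math

def cpgf_logit(u):
    # CPGF of the multinomial logit model: G(u) = log(sum_i exp(u_i))
    return math.log(sum(math.exp(x) for x in u))

def choice_probabilities(u, eps=1e-6):
    # Numerical gradient of G; for the logit CPGF this reproduces softmax
    g = cpgf_logit(u)
    probs = []
    for i in range(len(u)):
        v = list(u)
        v[i] += eps
        probs.append((cpgf_logit(v) - g) / eps)
    return probs

u = [1.0, 2.0, 0.5]
grad = choice_probabilities(u)
softmax = [math.exp(x) / sum(math.exp(y) for y in u) for x in u]
# grad[i] ≈ softmax[i]: the gradient of the CPGF is the choice probability,
# and the probabilities sum to one as required.
```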

  12. Handbook of probability

    CERN Document Server

    Florescu, Ionut

    2013-01-01

    THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introduction

  13. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2013-01-01

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to...

  14. Real analysis and probability

    CERN Document Server

    Ash, Robert B; Lukacs, E

    1972-01-01

    Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var

  15. Qubit persistence probability

    International Nuclear Information System (INIS)

    In this work, I formulate the persistence probability for a qubit device as the probability of measuring its computational degrees of freedom in the unperturbed state without the decoherence arising from environmental interactions. A decoherence time can be obtained from the persistence probability. Drawing on recent work of Garg, and also Palma, Suominen, and Ekert, I apply the persistence probability formalism to a generic single-qubit device coupled to a thermal environment, and also apply it to a trapped-ion quantum register coupled to the ion vibrational modes. (author)

  16. The effect of the nuclear Coulomb field on atomic ionization at positron-electron annihilation in β+ decay

    Directory of Open Access Journals (Sweden)

    Fedotkin Sergey

    2015-01-01

    We consider the process of the annihilation of a positron emitted in β+ decay with a K-electron of the daughter atom. Part of the energy released in this process is transferred to the other K-electron, which leaves the atom. The influence of the Coulomb field on the positron and the ejected electron is considered. The probability of this process is calculated for an atom with arbitrary Z. For the nucleus Ti the effect of the Coulomb field substantially increases the probability of the considered process.

  17. The effect of the nuclear Coulomb field on atomic ionization at positron-electron annihilation in β+ decay

    OpenAIRE

    Fedotkin Sergey

    2015-01-01

    We consider the process of the annihilation of a positron emitted in β+ decay with a K-electron of the daughter atom. Part of the energy released in this process is transferred to the other K-electron, which leaves the atom. The influence of the Coulomb field on the positron and the ejected electron is considered. The probability of this process is calculated for an atom with arbitrary Z. For the nucleus Ti the effect of the Coulomb field substantially increases the probability of the cons...

  18. Gamma-ray constraints on dark-matter annihilation to electroweak gauge and Higgs bosons

    International Nuclear Information System (INIS)

    Dark-matter annihilation into electroweak gauge and Higgs bosons results in γ-ray emission. We use observational upper limits on the fluxes of both line and continuum γ-rays from the Milky Way Galactic Center and from Milky Way dwarf companion galaxies to set exclusion limits on allowed dark-matter masses. (Generally, Galactic Center γ-ray line search limits from the Fermi-LAT and the H.E.S.S. experiments are most restrictive.) Our limits apply under the following assumptions: a) the dark matter species is a cold thermal relic with present mass density equal to the measured dark-matter density of the universe; b) dark-matter annihilation to standard-model particles is described in the non-relativistic limit by a single effective operator O∝JDM⋅JSM, where JDM is a standard-model singlet current consisting of dark-matter fields (Dirac fermions or complex scalars), and JSM is a standard-model singlet current consisting of electroweak gauge and Higgs bosons; and c) the dark-matter mass is in the range 5 GeV to 20 TeV. We consider, in turn, the 34 possible operators with mass dimension 8 or lower with non-zero s-wave annihilation channels satisfying the above assumptions. Our limits are presented in a large number of figures, one for each of the 34 possible operators; these limits can be grouped into 13 classes determined by the field content and structure of the operators. We also identify three classes of operators (coupling to the Higgs and SU(2)L gauge bosons) that can supply a 130 GeV line with the desired strength to fit the putative line signal in the Fermi-LAT data, while saturating the relic density and satisfying all other indirect constraints we consider

  19. Positron-molecule interactions: resonant attachment, annihilation, and bound states

    CERN Document Server

    Gribakin, G F; Surko, C M; 10.1103/RevModPhys.82.2557

    2010-01-01

    This article presents an overview of current understanding of the interaction of low-energy positrons with molecules with emphasis on resonances, positron attachment and annihilation. Annihilation rates measured as a function of positron energy reveal the presence of vibrational Feshbach resonances (VFR) for many polyatomic molecules. These resonances lead to strong enhancement of the annihilation rates. They also provide evidence that positrons bind to many molecular species. A quantitative theory of VFR-mediated attachment to small molecules is presented. It is tested successfully for selected molecules (e.g., methyl halides and methanol) where all modes couple to the positron continuum. Combination and overtone resonances are observed and their role is elucidated. In larger molecules, annihilation rates from VFR far exceed those explicable on the basis of single-mode resonances. These enhancements increase rapidly with the number of vibrational degrees of freedom. While the details are as yet unclear, intr...

  20. Baryon production in $e^{+}e^{-}$-annihilation at PETRA

    CERN Document Server

    Bartel, Wulfrin; Dittmann, P; Eichler, R; Felst, R; Haidt, Dieter; Krehbiel, H; Meier, K; Naroska, Beate; O'Neill, L H; Steffen, P; Wenninger, Horst; Zhang, Y; Elsen, E E; Helm, M; Petersen, A; Warming, P; Weber, G; Bethke, Siegfried; Drumm, H; Heintze, J; Heinzelmann, G; Hellenbrand, K H; Heuer, R D; Von Krogh, J; Lennert, P; Kawabata, S; Matsumura, H; Nozaki, T; Olsson, J; Rieseberg, H; Wagner, A; Bell, A; Foster, F; Hughes, G; Wriedt, H; Allison, J; Ball, A H; Bamford, G; Barlow, R; Bowdery, C K; Duerdoth, I P; Hassard, J F; King, B T; Loebinger, F K; MacBeth, A A; McCann, H; Mills, H E; Murphy, P G; Prosper, H B; Stephens, K; Clarke, D; Goddard, M C; Marshall, R; Pearce, G F; Kobayashi, T; Komamiya, S; Koshiba, M; Minowa, M; Nozaki, M; Orito, S; Sato, A; Suda, T; Takeda, H; Totsuka, Y; Watanabe, Y; Yamada, S; Yanagisawa, C

    1981-01-01

    Data on anti-p and anti-Λ production in e+e- annihilation at CM energies between 30 and 36 GeV are presented. An indication of an angular anticorrelation in events with baryon-antibaryon pairs is seen.

  1. Coincidence Doppler Broadening of Positron Annihilation Radiation in Fe

    Science.gov (United States)

    do Nascimento, E.; Vanin, V. R.; Maidana, N. L.; Helene, O.

    2013-06-01

    We measured the Doppler broadening annihilation radiation spectrum in Fe, using 22NaCl as a positron source, and two Ge detectors in coincidence arrangement. The two-dimensional coincidence energy spectrum was fitted using a model function that included positron annihilation with the conduction band and 3d electrons, 3s and 3p electrons, and in-flight positron annihilation. Detectors response functions included backscattering and a combination of Compton and pulse pileup, ballistic deficit and shaping effects. The core electrons annihilation intensity was measured as 16.4(3) %, with almost all the remainder assigned to the less bound electrons. The obtained results are in agreement with published theoretical values.

  2. Ultrahigh energy cosmic rays from dark matter annihilation

    OpenAIRE

    Dick, R.; P. Blasi(INAF Arcetri); Kolb, E. W.

    2002-01-01

    Annihilation of clumped superheavy dark matter provides an interesting explanation for the origin of ultrahigh energy cosmic rays. The predicted anisotropy signal provides a unique signature for this scenario.

  3. Baryon production in e+e- annihilation at PETRA

    International Nuclear Information System (INIS)

    Data on anti-p and anti-Λ production in e+e- annihilation at CM energies between 30 and 36 GeV are presented. An indication of an angular anticorrelation in events with baryon-antibaryon pairs is seen. (orig.)

  4. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  5. On Quantum Conditional Probability

    Directory of Open Access Journals (Sweden)

    Isabel Guerra Bobo

    2013-02-01

    We argue that quantum theory does not allow for a generalization of the notion of classical conditional probability by showing that the probability defined by the Lüders rule, standardly interpreted in the literature as the quantum-mechanical conditionalization rule, cannot be interpreted as such.

  6. Topics in probability

    CERN Document Server

    Prabhu, Narahari

    2011-01-01

    Recent research in probability has been concerned with applications such as data mining and finance models. Some aspects of the foundations of probability theory have receded into the background. Yet, these aspects are very important and have to be brought back into prominence.

  7. Probability, Nondeterminism and Concurrency

    DEFF Research Database (Denmark)

    Varacca, Daniele

    Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particula...

  8. Polymerization of epoxy resins studied by positron annihilation

    International Nuclear Information System (INIS)

    The polymerization process of epoxy resins (bisphenol-A dicyanate) was studied using positron-annihilation spectroscopy. The progress of the polymerization reaction from monomer to polymer was followed by positron-annihilation lifetime spectroscopy measurements. Resins kept at curing temperatures (120, 150 and 200 °C) changed from a powder to a solid through a liquid phase. The size of the intermolecular spaces of the solid samples increased as the polymerization progressed. (author)
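
The connection between a measured o-Ps lifetime and the size of intermolecular free-volume holes is conventionally made with the Tao–Eldrup model, which maps a spherical hole radius to a pick-off lifetime. A sketch using that standard relation (the radii below are illustrative, not values from the study):

```python
import math

DELTA_R = 0.1656  # nm, empirical electron-layer thickness in the Tao-Eldrup model

def ops_lifetime_ns(radius_nm):
    """o-Ps pick-off lifetime (ns) for a spherical hole of radius R (nm):
    tau = 0.5 / (1 - R/(R+dR) + sin(2*pi*R/(R+dR)) / (2*pi))."""
    x = radius_nm / (radius_nm + DELTA_R)
    return 0.5 / (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))

# Larger holes -> longer o-Ps lifetime, which is how growth of the
# intermolecular spaces during curing shows up in PAL spectra.
for r in (0.2, 0.3, 0.4):
    print(f"R = {r} nm -> tau = {ops_lifetime_ns(r):.2f} ns")
```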

  9. Effects of Bound States on Dark Matter Annihilation

    OpenAIRE

    An, Haipeng; Wise, Mark B.; Zhang, Yue

    2016-01-01

    We study the impact of bound state formation on dark matter annihilation rates in models where dark matter interacts via a light mediator, the dark photon. We derive the general cross section for radiative capture into all possible bound states, and point out its non-trivial dependence on the dark matter velocity and the dark photon mass. For indirect detection, our result shows that dark matter annihilation inside bound states can play an important role in enhancing signal rates over the rat...

  10. Effects of Bound States on Dark Matter Annihilation

    OpenAIRE

    An, Haipeng; Wise, Mark B.; Zhang, Yue

    2016-01-01

    We study the impact of bound state formation on dark matter annihilation rates in models where dark matter interacts via a light mediator, the dark photon. We derive the general cross section for radiative capture into all possible bound states, and point out its non-trivial dependence on the dark matter velocity and the dark photon mass. For indirect detection, our result shows that dark matter annihilation inside bound states can play an important role in enhancing signal ...

  11. Rescattering effects in antiproton deuteron annihilation at intermediate energies

    International Nuclear Information System (INIS)

    In this paper higher-order corrections to the single-scattering term for antiproton-deuteron annihilation are evaluated. As dominant corrections, the initial-state interaction of the antiprotons and the rescattering of pions are considered. For low spectator momenta, these corrections cause a strong modulation of the distribution in the invariant mass of the annihilation pions, which could modify the parameters of the resonant baryonium states

  12. Breit-Wigner Enhancement of Dark Matter Annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Ibe, Masahiro; /SLAC; Murayama, Hitoshi; /Tokyo U., IPMU /UC, Berkeley /LBL, Berkeley; Yanagida, T.T.; /Tokyo U. /Tokyo U., IPMU

    2009-06-19

    We point out that annihilation of dark matter in the galactic halo can be enhanced relative to that in the early universe due to a Breit-Wigner tail, if the dark matter annihilates through a pole just below the threshold. This provides a new explanation to the 'boost factor' which is suggested by the recent data of the PAMELA, ATIC and PPB-BETS cosmic-ray experiments.
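
The mechanism can be sketched with the Breit-Wigner form the abstract alludes to: writing the pole as M² = 4m²(1 − δ − iγ) with δ, γ ≪ 1, the annihilation cross section scales as σv ∝ 1/((δ + v²/4)² + γ²), so it saturates at the small velocities of the galactic halo while remaining suppressed at the larger velocities of freeze-out. A toy numeric illustration (the parameter values are mine, not the paper's):

```python
def sigma_v(v, delta, gamma):
    """Relative annihilation cross section near a below-threshold pole.
    Breit-Wigner form: sigma*v ~ 1 / ((delta + v**2/4)**2 + gamma**2)."""
    return 1.0 / ((delta + v**2 / 4.0) ** 2 + gamma**2)

delta, gamma = 1e-5, 1e-5   # illustrative pole parameters
v_freeze = 0.3              # typical relative velocity at freeze-out
v_halo = 1e-3               # typical relative velocity in the halo today

boost = sigma_v(v_halo, delta, gamma) / sigma_v(v_freeze, delta, gamma)
# The halo annihilation rate is enhanced by orders of magnitude,
# providing the "boost factor" suggested by the cosmic-ray data.
print(f"boost factor ~ {boost:.2e}")
```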

  13. Proton-antiproton reactions via double annihilation of quarks

    International Nuclear Information System (INIS)

    Exclusive baryon production in low energy panti p reactions is analysed within an internal double annihilation model using SU(6) wave functions. The annihilation is parametrised by intermediate gluon or meson states. We are able to predict several total cross sections for the reactions panti p→Banti B' which are of relevance for future experiments at LEAR. By examining the already existing data we show that the exchanged particle must be of vector type. (orig.)

  14. The Distribution and Annihilation of Dark Matter Around Black Holes

    OpenAIRE

    Schnittman, Jeremy D.

    2015-01-01

    We use a Monte Carlo code to calculate the geodesic orbits of test particles around Kerr black holes, generating a distribution function of both bound and unbound populations of dark matter particles. From this distribution function, we calculate annihilation rates and observable gamma-ray spectra for a few simple dark matter models. The features of these spectra are sensitive to the black hole spin, observer inclination, and detailed properties of the dark matter annihilation cross section a...

  15. Dark Matter Annihilation and the PAMELA, FERMI and ATIC Anomalies

    OpenAIRE

    El-Zant, A. A.; Khalil, S.; Okada, H.

    2009-01-01

    If dark matter (DM) annihilation accounts for the tantalizing excess of cosmic ray electron/positrons, as reported by the PAMELA, ATIC, HESS and FERMI observatories, then the implied annihilation cross section must be relatively large. This results, in the context of standard cosmological models, in very small relic DM abundances that are incompatible with astrophysical observations. We explore possible resolutions to this apparent conflict in terms of non-standard cosmological scenarios; pla...

  16. Search for Dark Matter Annihilation Signals from the Fornax Galaxy Cluster with H.E.S.S

    CERN Document Server

    Abramowski, A; Aharonian, F; Akhperjanian, A G; Anton, G; Balzer, A; Barnacka, A; de Almeida, U Barres; Becherini, Y; Becker, J; Behera, B; Bernlöhr, K; Birsin, E; Biteau, J; Bochow, A; Boisson, C; Bolmont, J; Bordas, P; Brucker, J; Brun, F; Brun, P; Bulik, T; Büsching, I; Carrigan, S; Casanova, S; Cerruti, M; Chadwick, P M; Charbonnier, A; Chaves, R C G; Cheesebrough, A; Clapson, A C; Coignet, G; Cologna, G; Conrad, J; Dalton, M; Daniel, M K; Davids, I D; Degrange, B; Deil, C; Dickinson, H J; Djannati-Ataï, A; Domainko, W; Drury, L O'C; Dubus, G; Dutson, K; Dyks, J; Dyrda, M; Egberts, K; Eger, P; Espigat, P; Fallon, L; Farnier, C; Fegan, S; Feinstein, F; Fernandes, M V; Fiasson, A; Fontaine, G; Förster, A; Füßling, M; Gallant, Y A; Gast, H; Gérard, L; Gerbig, D; Giebels, B; Glicenstein, J F; Glück, B; Goret, P; Göring, D; Häffner, S; Hague, J D; Hampf, D; Hauser, M; Heinz, S; Heinzelmann, G; Henri, G; Hermann, G; Hinton, J A; Hoffmann, A; Hofmann, W; Hofverberg, P; Holler, M; Horns, D; Jacholkowska, A; de Jager, O C; Jahn, C; Jamrozy, M; Jung, I; Kastendieck, M A; Katarzyński, K; Katz, U; Kaufmann, S; Keogh, D; Khangulyan, D; Khélifi, B; Klochkov, D; Kluźniak, W; Kneiske, T; Komin, Nu; Kosack, K; Kossakowski, R; Laffon, H; Lamanna, G; Lennarz, D; Lohse, T; Lopatin, A; Lu, C -C; Marandon, V; Marcowith, A; Masbou, J; Maurin, D; Maxted, N; Mayer, M; McComb, T J L; Medina, M C; Méhault, J; Moderski, R; Moulin, E; Naumann, C L; Naumann-Godo, M; de Naurois, M; Nedbal, D; Nekrassov, D; Nguyen, N; Nicholas, B; Niemiec, J; Nolan, S J; Ohm, S; Wilhelmi, E de Oña; Opitz, B; Ostrowski, M; Oya, I; Panter, M; Arribas, M Paz; Pedaletti1, G; Pelletier, G; Petrucci, P -O; Pita, S; Pühlhofer, G; Punch, M; Quirrenbach, A; Raue, M; Rayner, S M; Reimer, A; Reimer, O; Renaud, M; Reyes, R de los; Rieger, F; Ripken, J; Rob, L; Rosier-Lees, S; Rowell, G; Rudak, B; Rulten, C B; Ruppel, J; Sahakian, V; Sanchez, D A; Santangelo, A; Schlickeiser, R; Schöck, F M; Schulz, A; Schwanke, U; 
Schwarzburg, S; Schwemmer, S; Sheidaei, F; Skilton, J L; Sol, H; Spengler, G; Stawarz, Ł; Steenkamp, R; Stegmann, C; Stinzing, F; Stycz, K; Sushch, I; Szostek, A; Tavernet, J -P; Terrier, R; Tluczykont, M; Valerius, K; van Eldik, C; Vasileiadis, G; Venter, C; Vialle, J P; Viana, A; Vincent, P; Völk, H J; Volpe, F; Vorobiov, S; Vorster, M; Wagner, S J; Ward, M; White, R; Wierzcholska, A; Zacharias, M; Zajczyk, A; Zdziarski, A A; Zech, A; Zechlin, H -S

    2012-01-01

    The Fornax galaxy cluster was observed with the High Energy Stereoscopic System (H.E.S.S.) for a total live time of 14.5 hours, searching for very-high-energy (VHE, E>100 GeV) gamma-rays from dark matter (DM) annihilation. No significant signal was found in searches for point-like and extended emissions. Using several models of the DM density distribution, upper limits on the DM velocity-weighted annihilation cross-section as a function of the DM particle mass are derived. Constraints are derived for different DM particle models, such as those arising from Kaluza-Klein and supersymmetric models. Various annihilation final states are considered. Possible enhancements of the DM annihilation gamma-ray flux, due to DM substructures of the DM host halo, or from the Sommerfeld effect, are studied. Additional gamma-ray contributions from internal bremsstrahlung and inverse Compton radiation are also discussed. For a DM particle mass of 1 TeV, the exclusion limits at 95% of confidence level reach values of ~ 10^-23...

  17. The Characterization of the Gamma-Ray Signal from the Central Milky Way: A Compelling Case for Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Daylan, Tansu [Harvard Univ., Cambridge, MA (United States); Finkbeiner, Douglas P. [Harvard-Smithsonian Center, Cambridge, MA (United States); Hooper, Dan [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Linden, Tim [Univ. of Illinois at Chicago, Chicago, IL (United States); Portillo, Stephen K. N. [Harvard-Smithsonian Center, Cambridge, MA (United States); Rodd, Nicholas L. [Massachusetts Institute of Technology, Boston, MA (United States); Slatyer, Tracy R. [Institute for Advanced Study, Princeton, NJ (United States)

    2014-02-26

    Past studies have identified a spatially extended excess of ~1-3 GeV gamma rays from the region surrounding the Galactic Center, consistent with the emission expected from annihilating dark matter. We revisit and scrutinize this signal with the intention of further constraining its characteristics and origin. By applying cuts to the Fermi event parameter CTBCORE, we suppress the tails of the point spread function and generate high resolution gamma-ray maps, enabling us to more easily separate the various gamma-ray components. Within these maps, we find the GeV excess to be robust and highly statistically significant, with a spectrum, angular distribution, and overall normalization that is in good agreement with that predicted by simple annihilating dark matter models. For example, the signal is very well fit by a 31-40 GeV dark matter particle annihilating to b quarks with an annihilation cross section of sigma v = (1.4-2.0) x 10^-26 cm^3/s (normalized to a local dark matter density of 0.3 GeV/cm^3). Furthermore, we confirm that the angular distribution of the excess is approximately spherically symmetric and centered around the dynamical center of the Milky Way (within ~0.05 degrees of Sgr A*), showing no sign of elongation along or perpendicular to the Galactic Plane. The signal is observed to extend to at least 10 degrees from the Galactic Center, disfavoring the possibility that this emission originates from millisecond pulsars.
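
For context, the quoted best-fit range can be compared against the canonical thermal-relic annihilation cross section σv ≈ 3×10⁻²⁶ cm³/s; the comparison below is a standard back-of-the-envelope check, not a computation from the paper:

```python
THERMAL_SV = 3e-26            # cm^3/s, canonical thermal-relic cross section
fit_low, fit_high = 1.4e-26, 2.0e-26  # cm^3/s, range quoted in the abstract

# The fitted value sits within a factor of ~2 of the thermal-relic
# expectation, which is part of why the signal is considered compelling.
print(f"fit/thermal: {fit_low / THERMAL_SV:.2f} - {fit_high / THERMAL_SV:.2f}")
# prints: fit/thermal: 0.47 - 0.67
```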

  18. Time and probability in quantum cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Greensite, J. (San Francisco State Univ., CA (USA). Dept. of Physics and Astronomy)

    1990-10-01

    A time function, an exactly conserved probability measure, and a time-evolution equation (related to the Wheeler-DeWitt equation) are proposed for quantum cosmology. The time-integral of the probability measure is the measure proposed by Hawking and Page. The evolution equation reduces to the Schroedinger equation, and the probability measure to the Born measure, in the WKB approximation. The existence of this 'Schroedinger limit', which involves a cancellation of time-dependencies in the probability density between the WKB prefactor and the integration measure, is a consequence of Laplacian factor ordering in the Wheeler-DeWitt equation. (orig.)

  19. Economy, probability and risk

    Directory of Open Access Journals (Sweden)

    Elena Druica

    2007-05-01

    Full Text Available The science of probabilities has earned a special place because it tried through its concepts to build a bridge between theory and experiment. As a formal notion which by definition does not lead to polemic, probability nevertheless meets a series of difficulties of interpretation whenever it must be applied to particular situations. Usually, the economic literature brings into discussion two interpretations of the concept of probability: the objective interpretation, often found under the name of frequency or statistical interpretation, and the subjective or personal interpretation. Surprisingly, a third approach is excluded: the logical interpretation. The purpose of the present paper is to study some aspects of the subjective and logical interpretations of probability, as well as their implications in economics.

  20. Heavy ion collisions, the quark-gluon plasma and antinucleon annihilation

    International Nuclear Information System (INIS)

    Studies in high energy physics have indicated that nucleons and mesons are composed of quarks confined in bags by strong colour forces mediated by gluons. It is reasonably expected that at suitably high baryon density and temperature of the nucleus, these bags of nucleons and mesons fuse into a big bag of quarks and gluons, i.e. hadronic matter undergoes a transition to a quark-gluon phase. Two techniques to achieve this transition in a laboratory are: (1) collision of two heavy nuclei, and (2) annihilation of antinucleons and antinuclei in nuclear matter. Theoretical as well as experimental studies associated with the transition to the quark-gluon phase are reviewed. (author)

  1. Exact Probability Distribution versus Entropy

    Directory of Open Access Journals (Sweden)

    Kerstin Andersson

    2014-10-01

    Full Text Available The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
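
    The guessing strategy described in this abstract (guessing words in decreasing order of probability) can be illustrated with a brute-force sketch for a toy alphabet; the helper below is hypothetical and sidesteps the paper's approximations by enumerating all words directly, which is only feasible for small alphabets and short words.

```python
from itertools import product

def average_guesses(letter_probs, length):
    """Expected number of guesses when words of a given length are
    guessed in decreasing order of probability (i.i.d. letters)."""
    word_probs = []
    for combo in product(letter_probs, repeat=length):
        p = 1.0
        for letter in combo:
            p *= letter_probs[letter]
        word_probs.append(p)
    word_probs.sort(reverse=True)  # most probable words are guessed first
    return sum(rank * p for rank, p in enumerate(word_probs, start=1))

# Uniform two-letter alphabet, word length 2: 4 equiprobable words, so 2.5 guesses.
print(average_guesses({"a": 0.5, "b": 0.5}, 2))  # 2.5
# A skewed distribution needs far fewer guesses than the uniform average.
print(average_guesses({"a": 0.7, "b": 0.2, "c": 0.1}, 3))
```

    For uniform probabilities over N words the average is (N+1)/2; skewed distributions fall well below that, which is the effect the paper's asymptotic expansion quantifies.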

  2. Positron annihilation study of spin transition in metal complex: [Fe(Phen)2(NCS)2] (Paper No. HF-04)

    International Nuclear Information System (INIS)

    In this paper, the use of Doppler-broadened annihilation lineshape as a technique to probe spin transition in a classical spin transition complex Fe(Phen)2(NCS)2 in solid state is demonstrated. This technique is simple in application compared to other established techniques such as Moessbauer, susceptibility measurement, ESR etc. In addition it provides correlated information on local electron density and momentum distribution. (author)

  3. Effect of Black Holes in Local Dwarf Spheroidal Galaxies on Gamma-Ray Constraints on Dark Matter Annihilation

    OpenAIRE

    Gonzalez-Morales, Alma X.; Profumo, Stefano; Queiroz, Farinaldo S.(Department of Physics, Santa Cruz Institute for Particle Physics, University of California, 95064, Santa Cruz, CA, USA)

    2014-01-01

    Recent discoveries of optical signatures of black holes in dwarf galaxies indicate that low-mass galaxies can indeed host intermediate-mass black holes. This motivates assessing the resulting effect on the host dark matter density profile, and the consequences for constraints on the plane of the dark matter annihilation cross section versus mass, stemming from the non-observation of gamma rays from local dwarf spheroidals with the Fermi Large Area Telescope. We compute the den...

  4. On the Annihilation Rate of WIMPs

    CERN Document Server

    Baumgart, Matthew; Vaidya, Varun

    2014-01-01

    We develop a formalism that allows one to systematically calculate the WIMP annihilation rate into gamma rays whose energy far exceeds the weak scale. A factorization theorem is presented which separates the radiative corrections stemming from initial state potential interactions from loops involving the final state. This separation allows us to go beyond the fixed-order calculation, which is polluted by large infrared logarithms. For the case of Majorana WIMPs transforming in the adjoint representation of SU(2), we present the result for the resummed rate at leading double-log accuracy in terms of two initial-state partial wave matrix elements and one hard matching coefficient. For a given model, one may calculate the cross section by computing the tree-level matching coefficient and determining the value of a local four-fermion operator. We find that the effects of resummation can be as large as 100% for a 20 TeV WIMP. The generalization of the formalism to other types of WIMPs is discussed.

  5. Antimatter annihilation detection with AEgIS

    CERN Document Server

    Gligorova, Angela

    2015-01-01

    AEgIS (Antimatter Experiment: Gravity, Interferometry, Spectroscopy) is an antimatter experiment based at CERN, whose primary goal is to carry out the first direct measurement of the Earth's gravitational acceleration on antimatter. A precise measurement of antimatter gravity would be the first precision test of the Weak Equivalence Principle for antimatter. The principle of the experiment is based on the formation of antihydrogen through a charge exchange reaction between laser-excited (Rydberg) positronium and ultra-cold antiprotons. The antihydrogen atoms will be accelerated by an inhomogeneous electric field (Stark acceleration) to form a pulsed cold beam. The free fall of the antihydrogen due to Earth's gravity will be measured using a moiré deflectometer and a hybrid position detector. This detector is foreseen to consist of an active silicon part, where the annihilation of antihydrogen takes place, followed by an emulsion part coupled to a fiber time-of-flight detector. This overview prese...

  6. Positron annihilation in medical substances of insulin

    International Nuclear Information System (INIS)

    Positron lifetimes were measured in medical substances of insulin (human and animal), differing as far as the degree of purity and time of their activity in the organism are concerned. In all of the cases the spectrum of positron lifetimes was distributed into three components, with the long-life component ranging from 1.8 to 2.08 ns and the intensity taking on values from 18% to 24%. Making use of the Tao-Eldrup model, the average radius of the free volume in which o-Ps annihilated, and the degree of filling of the volume, were determined. It was found that the value of the long-life component for human insulin is higher than that of animal insulin. Moreover, the value of this component clearly depends on the manner of purification of the insulin. It was also noticed that there occurs a correlation between the value of this component and the time after which it begins to be active in the organism, as well as the total time of its activity. (author)
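
    The Tao-Eldrup model invoked in this abstract relates the o-Ps pick-off lifetime to the radius of a spherical free volume. A minimal numerical inversion is sketched below, assuming the standard form of the relation with an electron-layer thickness ΔR ≈ 0.166 nm; the radii it prints are illustrative, not the values fitted in the study.

```python
import math

DELTA_R = 0.166  # nm; empirical electron-layer thickness in the Tao-Eldrup model

def o_ps_lifetime(radius_nm):
    """o-Ps pick-off lifetime (ns) for a spherical free volume of radius radius_nm."""
    x = radius_nm / (radius_nm + DELTA_R)
    return 0.5 / (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))

def free_volume_radius(lifetime_ns, lo=1e-3, hi=1.0, tol=1e-9):
    """Invert the lifetime-radius relation by bisection (lifetime grows with radius)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if o_ps_lifetime(mid) < lifetime_ns:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The long-lifetime range quoted above, 1.8-2.08 ns, maps to radii near 0.27-0.29 nm.
for tau in (1.8, 2.08):
    print(tau, round(free_volume_radius(tau), 3))
```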

  7. Annihilation of antiproton on deuteron at rest

    International Nuclear Information System (INIS)

    The system of Faddeev equations for amplitudes of anti pD interaction at rest accounting for higher partial anti NN waves is derived. From its solution the total and elastic anti pD cross sections are calculated. Predictions for the missing-mass spectrum in the anti pD annihilation are made. The P-wave anti NN states give a small contribution to the anti pD cross section at rest, the theoretical value of the latter being less than the experimental cross section extrapolated to the threshold. Let us emphasize that the total anti pD cross section, depending weakly on the radii of anti NN interactions, is sensitive to the values of the anti NN scattering lengths. Experimental data for anti pD cross sections at rest can be obtained only by an extrapolation procedure. Hence it is very important to investigate anti pD interactions at low but non-zero momenta, where direct comparison to experiment is possible

  8. Positron Annihilation in Medical Substances of Insulin

    Science.gov (United States)

    Pietrzak, R.; Szatanik, R.

    2005-05-01

    Positrons lifetimes were measured in medical substances of insulin (human and animal), differing as far as the degree of purity and time of their activity in the organism are concerned. In all of the cases the spectrum of positron lifetime was distributed into three components, with the long-life component ranging from 1.8 to 2.08 ns and the intensity taking on values from 18 to 24%. Making use of Tao-Eldrup model, the average radius of the free volume, in which o-Ps annihilated, and the degree of filling in the volume were determined. It was found that the value of the long-life component for human insulin is higher than that of animal insulin. Moreover, the value of this component clearly depends on the manner of purification of the insulin. It was also noticed that there occurs a correlation between the value of this component and the time after which it begins to be active in the organism, as well as the total time of its activity.

  9. Charm production in antiproton-proton annihilation

    CERN Document Server

    Haidenbauer, J

    2010-01-01

    We study the production of charmed mesons (D) and baryons (Lambda_c) in antiproton-proton (app) annihilation close to their respective production thresholds. The elementary charm production process is described by either baryon/meson exchange or by quark/gluon dynamics. Effects of the interactions in the initial and final states are taken into account rigorously. The calculations are performed in close analogy to our earlier study on app -> antiLambda-Lambda and app -> antiK-K by connecting the processes via SU(4) flavor symmetry. Our predictions for the antiLambda_c-Lambda_c production cross section are on the order of 1 to 7 mb, i.e. a factor of around 10-70 smaller than the corresponding cross sections for antiLambda-Lambda. However, they are 100 to 1000 times larger than predictions of other model calculations in the literature. On the other hand, the resulting cross sections for antiD-D production are found to be on the order of 10^{-2} -- 10^{-1} microbarn and they turned out to be comparable to those ob...

  10. The concept of probability

    International Nuclear Information System (INIS)

    The concept of probability is now, and always has been, central to the debate on the interpretation of quantum mechanics. Furthermore, probability permeates all of science, as well as our everyday life. The papers included in this volume, written by leading proponents of the ideas expressed, embrace a broad spectrum of thought and results: mathematical, physical, epistemological, and experimental, both specific and general. The contributions are arranged in parts under the following headings: Following Schroedinger's thoughts; Probability and quantum mechanics; Aspects of the arguments on nonlocality; Bell's theorem and EPR correlations; Real or Gedanken experiments and their interpretation; Questions about irreversibility and stochasticity; and Epistemology, interpretation and culture. (author). refs.; figs.; tabs

  11. Probability and Measure

    CERN Document Server

    Billingsley, Patrick

    2012-01-01

    Praise for the Third Edition "It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this

  12. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are included especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure-theoretical considerations, contributions to theoretical statistics an...

  13. Probabilities in physics

    CERN Document Server

    Hartmann, Stephan

    2011-01-01

    Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...

  14. Stochastic Programming with Probability

    CERN Document Server

    Andrieu, Laetitia; Vázquez-Abad, Felisa

    2007-01-01

    In this work we study optimization problems subject to a failure constraint. This constraint is expressed in terms of a condition that causes failure, representing a physical or technical breakdown. We formulate the problem in terms of a probability constraint, where the level of "confidence" is a modelling parameter and has the interpretation that the probability of failure should not exceed that level. Application of the stochastic Arrow-Hurwicz algorithm poses two difficulties: one is structural and arises from the lack of convexity of the probability constraint, and the other is the estimation of the gradient of the probability constraint. We develop two gradient estimators with decreasing bias via a convolution method and a finite difference technique, respectively, and we provide a full analysis of convergence of the algorithms. Convergence results are used to tune the parameters of the numerical algorithms in order to achieve best convergence rates, and numerical results are included via an example of ...

  15. Probability an introduction

    CERN Document Server

    Grimmett, Geoffrey

    2014-01-01

    Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...

  16. Probability in physics

    CERN Document Server

    Hemmo, Meir

    2012-01-01

    What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton's laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world's foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive.

  17. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.;

    2014-01-01

    either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments that...

  19. Probability and Statistical Inference

    OpenAIRE

    Prosper, Harrison B.

    2006-01-01

    These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

  20. Probability with Roulette

    Science.gov (United States)

    Marshall, Jennings B.

    2007-01-01

    This article describes how roulette can be used to teach basic concepts of probability. Various bets are used to illustrate the computation of expected value. A betting system shows variations in patterns that often appear in random events.
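
    The expected-value computations mentioned in this abstract reduce to simple arithmetic; here is a sketch for an American wheel (38 pockets), using exact fractions. The helper name is ours, not the article's.

```python
from fractions import Fraction

def expected_value(pockets, winning, payout):
    """Expected profit per unit stake: a win pays `payout`, a loss forfeits the stake."""
    p_win = Fraction(winning, pockets)
    return p_win * payout - (1 - p_win)

# American wheel: 38 pockets (1-36 plus 0 and 00).
straight = expected_value(38, 1, 35)   # single number, pays 35:1
red = expected_value(38, 18, 1)        # red, pays 1:1
print(straight, red)  # both are -1/19, about -5.26 cents per dollar staked
```

    That every standard bet has the same expectation (-1/19 on an American wheel) is exactly the point such classroom examples drive home: betting systems change the pattern of outcomes, not the expected value.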

  1. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  2. Bayesian default probability models

    OpenAIRE

    Andrlíková, Petra

    2014-01-01

    This paper proposes a methodology for default probability estimation for low default portfolios, where the statistical inference may become troublesome. The author suggests using logistic regression models with the Bayesian estimation of parameters. The piecewise logistic regression model and Box-Cox transformation of credit risk score is used to derive the estimates of probability of default, which extends the work by Neagu et al. (2009). The paper shows that the Bayesian models are more acc...

  3. Pressure dependence of positron annihilation in Si

    International Nuclear Information System (INIS)

    The pressure dependence of the electron-positron and the electron-electron momentum densities in silicon are studied. The observation that the electron-positron momentum density increases more rapidly with pressure than the electron-electron momentum density is explained in terms of increased positron penetration into the ion cores. The computational technique used here is based on the independent-particle model (IPM) coupled with the use of the electron pseudo-wave functions. (orig.)

  4. Experimental Probability in Elementary School

    Science.gov (United States)

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.
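
    The experiment-first approach described here is easy to support with a short simulation that compares an empirical estimate with the theoretical value; the function below is a generic sketch, not taken from the article.

```python
import random

def empirical_probability(trials, event, experiment):
    """Estimate P(event) by repeating an experiment and counting how often it occurs."""
    hits = sum(1 for _ in range(trials) if event(experiment()))
    return hits / trials

random.seed(0)  # fixed seed for a reproducible classroom run

def roll_die():
    return random.randint(1, 6)

estimate = empirical_probability(100_000, lambda x: x == 6, roll_die)
print(round(estimate, 3), round(1 / 6, 3))  # empirical vs theoretical probability
```

    With 100,000 rolls the empirical frequency typically lands within a few thousandths of 1/6, which makes the convergence of experiment toward theory concrete for students.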

  5. The pleasures of probability

    CERN Document Server

    Isaac, Richard

    1995-01-01

    The ideas of probability are all around us. Lotteries, casino gambling, the almost non-stop polling which seems to mold public policy more and more: these are a few of the areas where principles of probability impinge in a direct way on the lives and fortunes of the general public. At a more removed level there is modern science, which uses probability and its offshoots like statistics and the theory of random processes to build mathematical descriptions of the real world. In fact, twentieth-century physics, in embracing quantum mechanics, has a world view that is at its core probabilistic in nature, contrary to the deterministic one of classical physics. In addition to all this muscular evidence of the importance of probability ideas it should also be said that probability can be lots of fun. It is a subject where you can start thinking about amusing, interesting, and often difficult problems with very little mathematical background. In this book, I wanted to introduce a reader with at least a fairl...

  6. Improving Ranking Using Quantum Probability

    OpenAIRE

    Melucci, Massimo

    2011-01-01

    The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability provided a given probability of ...

  7. Enhanced neutrino signals from dark matter annihilation in the Sun via metastable mediators

    International Nuclear Information System (INIS)

    We calculate the neutrino signal resulting from annihilation of secluded dark matter in the Sun. In this class of models, dark matter annihilates first into metastable mediators, which subsequently decay into Standard Model particles. If the mediators are long lived, they will propagate out from the dense solar core before decaying. High energy neutrinos undergo absorption in the Sun. In the standard scenario in which neutrinos are produced directly in the centre of the Sun, absorption is relevant for E ≳ 100 GeV, resulting in a significant suppression of the neutrino spectrum beyond E ∼ 1 TeV. In the secluded dark matter scenario, the neutrino signal is greatly enhanced because neutrinos are injected away from the core, at lower density. Since the solar density falls exponentially with radius, metastable mediators have a significant effect on the neutrino flux, even for decay lengths which are small compared to the solar radius. Moreover, since neutrino detection cross sections grow with energy, this enhancement of the high energy region of the neutrino spectrum would have a large effect on overall event rates
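
    The geometric effect described in this abstract can be caricatured with a toy radial model: an exponentially falling absorber density and a neutrino injected at some radius, escaping radially. All numbers below (scale height, opacity normalization) are assumptions for illustration, not the paper's solar model.

```python
import math

R_SUN = 1.0      # work in units of the solar radius
R_SCALE = 0.1    # toy exponential scale height (assumed, not the real solar profile)

def optical_depth(r_inject, kappa):
    """Absorption optical depth for production at r_inject, escaping radially
    through a density n(r) proportional to exp(-r / R_SCALE)."""
    integral = R_SCALE * (math.exp(-r_inject / R_SCALE) - math.exp(-R_SUN / R_SCALE))
    return kappa * integral

def survival(r_inject, kappa):
    """Probability that the neutrino escapes without being absorbed."""
    return math.exp(-optical_depth(r_inject, kappa))

kappa = 50.0  # toy opacity normalization giving strong core absorption
print(survival(0.0, kappa))   # produced at the centre: heavily absorbed
print(survival(0.05, kappa))  # injected only half a scale height out: far less absorbed
```

    Even a decay length small compared to the solar radius moves the injection point off the densest material, so the escaping flux rises sharply, which is the enhancement the abstract describes.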

  8. Photon from the annihilation process with CGC in the $pA$ collision

    CERN Document Server

    Benic, Sanjin

    2016-01-01

    We discuss the photon production in the $pA$ collision in a framework of the color glass condensate (CGC). We work in a regime where the color density $\rho_A$ of the nucleus is large enough to justify the CGC treatment, while soft gluons in the proton are dominant over quarks but do not yet belong to the CGC regime. In this semi-CGC regime for the proton, we can still perform a systematic expansion in powers of the color density $\rho_p$ of the proton. The leading-order contributions to the photon production appear from the Bremsstrahlung and the annihilation processes involving quarks from a gluon sourced by $\rho_p$. We analytically derive an expression for the annihilation contribution to the photon production rate and numerically find that a thermal exponential form gives the best fit with an effective temperature $\sim 0.5 Q_s$ where $Q_s$ is the saturation momentum of the nucleus.

  9. CMB Constraints On The Thermal WIMP Annihilation Cross Section

    CERN Document Server

    Steigman, Gary

    2015-01-01

    A thermal relic, often referred to as a weakly interacting massive particle (WIMP), is a particle produced during the early evolution of the Universe whose relic abundance (e.g., at present) depends only on its mass and its thermally averaged annihilation cross section (annihilation rate factor) sigma*v_ann. Late-time WIMP annihilation has the potential to affect the cosmic microwave background (CMB) power spectrum. Current observational constraints on the absence of such effects provide bounds on the mass and the annihilation cross section of relic particles that may, but need not, be dark matter candidates. For a WIMP that is a dark matter candidate, the CMB constraint sets an upper bound to the annihilation cross section, leading to a lower bound to their mass that depends on whether or not the WIMP is its own antiparticle. For a self-conjugate WIMP, m_min = 50f GeV, where f is an electromagnetic energy efficiency factor. For a non-self-conjugate WIMP, the minimum mass is a factor of two larger. For a WIMP t...

  10. The Isotropic Radio Background and Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Hooper, Dan [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Belikov, Alexander V. [Institut d' Astrophysique (France); Jeltema, Tesla E. [Univ. of California, Santa Cruz, CA (United States); Linden, Tim [Univ. of California, Santa Cruz, CA (United States); Profumo, Stefano [Univ. of California, Santa Cruz, CA (United States); Slatyer, Tracy R. [Princeton Univ., Princeton, NJ (United States)

    2012-11-01

    Observations by ARCADE-2 and other telescopes sensitive to low frequency radiation have revealed the presence of an isotropic radio background with a hard spectral index. The intensity of this observed background is found to exceed the flux predicted from astrophysical sources by a factor of approximately 5-6. In this article, we consider the possibility that annihilating dark matter particles provide the primary contribution to the observed isotropic radio background through the emission of synchrotron radiation from electron and positron annihilation products. For reasonable estimates of the magnetic fields present in clusters and galaxies, we find that dark matter could potentially account for the observed radio excess, but only if it annihilates mostly to electrons and/or muons, and only if it possesses a mass in the range of approximately 5-50 GeV. For such models, the annihilation cross section required to normalize the synchrotron signal to the observed excess is sigma v ~ (0.4-30) x 10^-26 cm^3/s, similar to the value predicted for a simple thermal relic (sigma v ~ 3 x 10^-26 cm^3/s). We find that in any scenario in which dark matter annihilations are responsible for the observed excess radio emission, a significant fraction of the isotropic gamma ray background observed by Fermi must result from dark matter as well.

  11. Dipole Moment Bounds on Scalar Dark Matter Annihilation

    CERN Document Server

    Fukushima, Keita

    2013-01-01

    We consider scalar dark matter annihilation to light leptons mediated by charged exotic fermions. The interaction of this model also adds a correction to the dipole moments of light leptons. In the simplified model, these processes depend upon the same coupling constants. The tight experimental bounds on the dipole moments of light leptons constrain the coupling constants, and consequently this bound limits the annihilation rate. We derive these dipole-moment bounds on the annihilation cross section. From this analysis, we report that the bound on annihilation to electrons is $4.0\times 10^{-7}$ pb (g-2) + $8.8\times 10^{-15}$ pb (EDM) and to muons is $5.6\times 10^{-4}$ pb (g-2) + $180$ pb (EDM), in the limit where the mediator is much heavier than the dark matter. The parentheses indicate the dipole moment used to obtain the values. We note that only the annihilation to muons through a CP-violating (EDM) coupling is not excluded by indirect detection experiments.

  12. Probability of causation

    International Nuclear Information System (INIS)

    New Zealand population and cancer statistics have been used to derive the probability that an existing cancer in an individual was the result of a known exposure to radiation. Hypothetical case histories illustrate how sex, race, age at exposure, age at presentation with disease, and the type of cancer affect this probability. The method can be used now to identify claims in which a link between exposure and disease is very strong or very weak, and the types of cancer and population sub-groups for which radiation is most likely to be the causative agent. Advantages and difficulties in using a probability of causation approach in legal or compensation hearings are outlined. The approach is feasible for any carcinogen for which reasonable risk estimates can be made
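
    The probability referred to in this abstract is commonly computed with the assigned-share formula PC = ERR / (1 + ERR), where ERR is the excess relative risk for the given cancer, dose, sex, and ages. The sketch below uses that standard formula with made-up ERR values; it is not necessarily the exact method of the New Zealand study.

```python
def probability_of_causation(excess_relative_risk):
    """Assigned share: probability that an observed cancer was caused by the
    exposure, given that the exposure raised relative risk by excess_relative_risk."""
    err = excess_relative_risk
    if err < 0:
        raise ValueError("excess relative risk must be non-negative")
    return err / (1.0 + err)

# Illustrative ERR values only; real ones depend on dose, cancer type, sex and age.
for err in (0.1, 1.0, 3.0):
    print(err, probability_of_causation(err))  # 0.1 -> ~0.091, 1.0 -> 0.5, 3.0 -> 0.75
```

    A doubling of risk (ERR = 1) corresponds to PC = 50%, the threshold often discussed in compensation settings, which is why claims far above or below it are easy to classify.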

  13. Minimum Probability Flow Learning

    CERN Document Server

    Sohl-Dickstein, Jascha; DeWeese, Michael R

    2009-01-01

    Learning in probabilistic models is often severely hampered by the general intractability of the normalization factor and its derivatives. Here we propose a new learning technique that obviates the need to compute an intractable normalization factor or sample from the equilibrium distribution of the model. This is achieved by establishing dynamics that would transform the observed data distribution into the model distribution, and then setting as the objective the minimization of the initial flow of probability away from the data distribution. Score matching, minimum velocity learning, and certain forms of contrastive divergence are shown to be special cases of this learning technique. We demonstrate the application of minimum probability flow learning to parameter estimation in Ising models, deep belief networks, multivariate Gaussian distributions and a continuous model with a highly general energy function defined as a power series. In the Ising model case, minimum probability flow learning outperforms cur...

  14. The Pauli equation for probability distributions

    International Nuclear Information System (INIS)

    The tomographic-probability distribution for a measurable coordinate and spin projection is introduced to describe quantum states as an alternative to the density matrix. An analogue of the Pauli equation for the spin-1/2 particle is obtained for such a probability distribution instead of the usual equation for the wavefunction. Examples of the tomographic description of Landau levels and coherent states of a charged particle moving in a constant magnetic field are presented. (author)
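For reference, the tomographic-probability representation used in this line of work is conventionally built from the symplectic tomogram; a sketch of the standard definitions (notation assumed, and possibly differing from the paper's):

```latex
% Symplectic tomogram of a state \hat\rho (standard definition):
w(X,\mu,\nu) = \operatorname{Tr}\!\left[\hat\rho\,
    \delta\!\left(X\hat{1} - \mu\hat{q} - \nu\hat{p}\right)\right],
\qquad \int w(X,\mu,\nu)\,\mathrm{d}X = 1 .

% Spin tomogram: probability of spin projection m along the axis
% defined by the rotation u \in SU(2):
w(m,u) = \langle m |\, u\,\hat\rho\, u^{\dagger} | m \rangle .
```

The Pauli-type evolution equation of the abstract is then written directly for w rather than for a wavefunction or density matrix.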

  15. The Pauli equation for probability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Mancini, S. [INFM, Dipartimento di Fisica, Universita di Milano, Milan (Italy). E-mail: Stefano.Mancini@mi.infn.it; Man'ko, O.V. [P.N. Lebedev Physical Institute, Moscow (Russian Federation). E-mail: Olga.Manko@sci.lebedev.ru; Man'ko, V.I. [INFM, Dipartimento di Matematica e Fisica, Universita di Camerino, Camerino (Italy). E-mail: Vladimir.Manko@sci.lebedev.ru; Tombesi, P. [INFM, Dipartimento di Matematica e Fisica, Universita di Camerino, Camerino (Italy). E-mail: Paolo.Tombesi@campus.unicam.it

    2001-04-27

    The tomographic-probability distribution for a measurable coordinate and spin projection is introduced to describe quantum states as an alternative to the density matrix. An analogue of the Pauli equation for the spin-1/2 particle is obtained for such a probability distribution instead of the usual equation for the wavefunction. Examples of the tomographic description of Landau levels and coherent states of a charged particle moving in a constant magnetic field are presented. (author)

  16. Introduction to imprecise probabilities

    CERN Document Server

    Augustin, Thomas; de Cooman, Gert; Troffaes, Matthias C M

    2014-01-01

    In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, includin

  17. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2010-01-01

    This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications...
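The simplest instance of the CPGF construction is the multinomial logit case: with G(v) = Σ_i exp(v_i), the choice probabilities are the gradient of log G, i.e. the softmax of the utilities. A minimal sketch (the utility values are illustrative):

```python
import math

# CPGF sketch for the logit case: P_i = d log G / d v_i with
# G(v) = sum_i exp(v_i), i.e. the softmax of the utilities.

def choice_probabilities(v):
    """Gradient of log G for the logit generating function."""
    g = sum(math.exp(vi) for vi in v)
    return [math.exp(vi) / g for vi in v]

probs = choice_probabilities([1.0, 2.0, 0.5])
print([round(p, 3) for p in probs])   # probabilities sum to one
```

Other generating functions with the paper's stated properties yield other RUM families (e.g. nested logit) by the same gradient construction.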

  18. Negative Probabilities and Contextuality

    CERN Document Server

    de Barros, J Acacio; Oas, Gary

    2015-01-01

    There has been a growing interest, both in physics and psychology, in understanding contextuality in experimentally observed quantities. Different approaches have been proposed to deal with contextual systems, and a promising one is contextuality-by-default, put forth by Dzhafarov and Kujala. The goal of this paper is to present a tutorial on a different approach: negative probabilities. We do so by presenting the overall theory of negative probabilities in a way that is consistent with contextuality-by-default and by examining with this theory some simple examples where contextuality appears, both in physics and psychology.

  19. Classic Problems of Probability

    CERN Document Server

    Gorroochurn, Prakash

    2012-01-01

    "A great book, one that I will certainly add to my personal library."—Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexin

  20. Branching ratios for pbarp annihilation at rest into two-body final states

    Energy Technology Data Exchange (ETDEWEB)

    Abele, A.; Adomeit, J.; Amsler, C.; Baker, C.A.; Barnett, B.M.; Batty, C.J.; Benayoun, M.; Bischoff, S.; Bluem, P.; Braune, K.; Bugg, D.V.; Case, T.; Crowe, K.M.; Degener, T.; Doser, M.; Duennweber, W.; Engelhardt, D.; Faessler, M.A.; Giarritta, P.; Haddock, R.P.; Heinsius, F.H.; Heinzelmann, M.; Herbstrith, A.; Herz, M.; Hessey, N.P.; Hidas, P.; Hodd, C.; Holtzhaussen, C.; Jamnik, D.; Kalinowsky, H.; Kammel, P.; Kisiel, J.; Klempt, E.; Koch, H.; Kunze, M.; Kurilla, U.; Lakata, M.; Landua, R.; Matthaey, H.; McCrady, R.; Meier, J.; Meyer, C.A.; Montanet, L.; Ouared, R.; Peters, K.; Pick, B.; Ratajczak, M.; Regenfus, C.; Roethel, W.; Spanier, S.; Stoeck, H.; Strassburger, C.; Strohbusch, U.; Suffert, M.; Suh, J.S.; Thoma, U.; Tischhaeuser, M.; Uman, I.; Voelcker, C.; Wallis-Plachner, S.; Walther, D.; Wiedner, U. E-mail: ulrich.wiedner@tsl.nu.se; Wittmack, K.; Zou, B.S

    2001-01-01

    Measurements of two-body branching ratios for pbarp annihilations at rest in liquid and gaseous (12ρ_STP) hydrogen are reported. Channels studied are pbarp → π⁰π⁰, π⁰η, K⁰_S K⁰_L, K⁺K⁻. The branching ratio for the π⁰π⁰ channel in liquid H₂ is measured to be (6.14±0.40)×10⁻⁴. The results are compared with those from other experiments. The fraction of P-state annihilation for a range of target densities from 0.002ρ_STP to liquid H₂ is determined. Values obtained include 0.11±0.02 in liquid H₂ and 0.48±0.04 in 12ρ_STP H₂ gas.

  1. Positron annihilation spectroscopy - a non-destructive method for lifetime prediction in the field of dynamical material testing

    International Nuclear Information System (INIS)

    The fatigue behavior of iron-based materials has been investigated by rotating bending testing, employing positron annihilation spectroscopy to probe defects at the atomic level. Positron annihilation spectra have been recorded at various stages of material fatigue. The defect density has been determined by analysing the line shape of the Doppler broadening of the annihilation radiation in the photo peak. The line shape parameter (S parameter), a measure of the defect density, showed a linear relation to the logarithm of the number of loadings; thus, from only a small number of loadings, it is possible to determine the remaining useful life of the sample. Furthermore, spatially resolved line-scans were taken along the longitudinal sample axis using the Bonn Positron Microprobe. Due to the special sample geometry, the stress gradient makes it possible to obtain the S parameter for different values of the applied load using the very same sample. This opens a way to determine a complete Woehler diagram non-destructively using just one sample. (orig.)
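The reported linearity, S = a + b·log₁₀(N), can be inverted to estimate remaining life from a single S measurement. A sketch with illustrative calibration coefficients and thresholds (not the paper's values):

```python
# Sketch of the S-parameter lifetime estimate: the abstract reports a
# linear relation S = a + b*log10(N). Given a calibration line and a
# measured S, the corresponding number of loadings N can be inverted.

def loadings_from_s(s, a, b):
    """Invert S = a + b*log10(N) for the number of loadings N."""
    return 10 ** ((s - a) / b)

a, b = 0.50, 0.01        # illustrative calibration coefficients
s_fail = 0.56            # illustrative S value at failure
s_now = 0.54             # measured S parameter of the sample

n_now = loadings_from_s(s_now, a, b)
n_fail = loadings_from_s(s_fail, a, b)
print(f"remaining loadings ≈ {n_fail - n_now:.0f}")
```

The exponential sensitivity of N to S is why a small number of loadings suffices: each S increment corresponds to a decade of cycles.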

  2. Dark matter annihilation with s-channel internal Higgsstrahlung

    Science.gov (United States)

    Kumar, Jason; Liao, Jiajun; Marfatia, Danny

    2016-08-01

    We study the scenario of fermionic dark matter that annihilates to standard model fermions through an s-channel axial vector mediator. We point out that the well-known chirality suppression of the annihilation cross section can be alleviated by s-channel internal Higgsstrahlung. The shapes of the cosmic ray spectra are identical to that of t-channel internal Higgsstrahlung in the limit of a heavy mediating particle. Unlike the general case of t-channel bremsstrahlung, s-channel Higgsstrahlung can be the dominant annihilation process even for Dirac dark matter. Since the s-channel mediator can be a standard model singlet, collider searches for the mediator are easily circumvented.

  3. Positron annihilation spectroscopy applied to silicon-based materials

    CERN Document Server

    Taylor, J W

    2000-01-01

    Positron annihilation spectroscopy, in conjunction with a variable-energy positron beam, has been employed to probe non-destructively the surface and near-surface regions of a selection of technologically important silicon-based samples. By measuring the Doppler broadening of the 511 keV annihilation lineshape, information on the positrons' microenvironment prior to annihilation may be obtained. ... deposition on silicon substrates has been examined. The systematic correlations observed between the nitrogen content of the films and both the fitted Doppler parameters and the positron diffusion lengths are discussed in detail. Profiling measurements of silicon nitride films deposited on silicon substrates and subsequently implanted with silicon ions at a range of fluences were also performed. For higher implantation doses, damage was seen to extend beyond the film layers and into the silicon substrates. Subsequent annealing of two of the samples was seen to have a significant influence on the nature of the films.

  4. A Critical Reevaluation of Radio Constraints on Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Cholis, Ilias [Fermilab]; Hooper, Dan [Fermilab]; Linden, Tim [Chicago U., KICP]

    2015-04-03

    A number of groups have employed radio observations of the Galactic center to derive stringent constraints on the annihilation cross section of weakly interacting dark matter. In this paper, we show that electron energy losses in this region are likely to be dominated by inverse Compton scattering on the interstellar radiation field, rather than by synchrotron, considerably relaxing the constraints on the dark matter annihilation cross section compared to previous works. Strong convective winds, which are well motivated by recent observations, may also significantly weaken synchrotron constraints. After taking these factors into account, we find that radio constraints on annihilating dark matter are orders of magnitude less stringent than previously reported, and are generally weaker than those derived from current gamma-ray observations.
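The relaxation argument above can be made quantitative with a back-of-the-envelope comparison: for relativistic electrons, synchrotron and inverse Compton losses each scale with the energy density the electron scatters on, so their ratio is simply U_rad/U_B with U_B = B²/(2μ₀). A sketch with illustrative (not the paper's) numbers:

```python
# Comparing inverse Compton and synchrotron energy losses for
# relativistic electrons: b_IC / b_sync = U_rad / U_B, where
# U_B = B^2 / (2 mu_0). Field and radiation density are illustrative.

MU0 = 4e-7 * 3.141592653589793   # vacuum permeability, T m / A

def magnetic_energy_density(b_tesla):
    """U_B = B^2 / (2 mu_0), in J/m^3."""
    return b_tesla ** 2 / (2 * MU0)

b = 5e-10            # 5 microgauss in tesla (illustrative field)
u_rad = 1.6e-13      # ~1 eV/cm^3 radiation energy density (illustrative)
u_b = magnetic_energy_density(b)
print(f"IC/synchrotron loss ratio ≈ {u_rad / u_b:.1f}")
```

When this ratio exceeds one, most electron energy emerges as inverse Compton emission rather than radio synchrotron, which is the mechanism weakening the radio constraints.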

  5. Solvable Aggregation-Migration-Annihilation Processes of a Multispecies System

    Institute of Scientific and Technical Information of China (English)

    KE Jian-Hong; LIN Zhen-Quan; CHEN Xiao-Shuang

    2006-01-01

    An aggregation-migration-annihilation model is proposed for a two-species-group system. In the system, aggregation reactions occur between any two aggregates of the same species, migration reactions between two different species in the same group, and joint annihilation reactions between two species from different groups. The kinetics of the system is then investigated in the framework of mean-field theory. It is found that the scaling solutions of the aggregate size distributions depend crucially on the ratios of the equivalent aggregation rates of the species groups to the annihilation rates. Each species always scales according to a conventional or modified scaling form; moreover, the governing scaling exponents are nonuniversal and depend on the reaction details in most cases.
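A drastically simplified version of this competition can be integrated directly: keeping only total concentrations (ignoring cluster sizes and migration), within-species aggregation A+A→A competes with between-group annihilation A+B→0. The rates, initial values, and Euler scheme below are illustrative choices, not the paper's model:

```python
# Simplified mean-field sketch: aggregation (A+A -> A, B+B -> B)
# versus cross-group annihilation (A+B -> 0), tracking only total
# concentrations with forward-Euler integration.

def evolve(a, b, k_agg, k_ann, dt, steps):
    """Integrate the two coupled rate equations."""
    for _ in range(steps):
        da = -k_agg * a * a - k_ann * a * b
        db = -k_agg * b * b - k_ann * a * b
        a, b = a + dt * da, b + dt * db
    return a, b

a, b = evolve(a=1.0, b=0.5, k_agg=1.0, k_ann=2.0, dt=1e-3, steps=5000)
print(a, b)   # both concentrations decay monotonically
```

The ratio k_agg/k_ann plays the role of the abstract's aggregation-to-annihilation rate ratio, which there controls the scaling form of the full size distribution.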

  6. Dark matter annihilation with s-channel internal Higgsstrahlung

    CERN Document Server

    Kumar, Jason; Marfatia, Danny

    2016-01-01

    We study the scenario of fermionic dark matter that annihilates to standard model fermions through an s-channel axial vector mediator. We point out that the well-known chirality suppression of the annihilation cross section can be alleviated by s-channel internal Higgsstrahlung. The shapes of the cosmic ray spectra are identical to that of t-channel internal Higgsstrahlung in the limit of a heavy mediating particle. Unlike the general case of t-channel bremsstrahlung, s-channel Higgsstrahlung can be the dominant annihilation process even for Dirac dark matter. Since the s-channel mediator can be a standard model singlet, collider searches for the mediator are easily circumvented.

  7. Extensions of the Cube Attack Based on Low Degree Annihilators

    Science.gov (United States)

    Zhang, Aileen; Lim, Chu-Wee; Khoo, Khoongming; Wei, Lei; Pieprzyk, Josef

    At Crypto 2008, Shamir introduced a new algebraic attack called the cube attack, which allows us to solve black-box polynomials if we are able to tweak the inputs by varying an initialization vector. In a stream cipher setting where the filter function is known, we can extend it to the cube attack with annihilators: By applying the cube attack to Boolean functions for which we can find low-degree multiples (equivalently annihilators), the attack complexity can be improved. When the size of the filter function is smaller than the LFSR, we can improve the attack complexity further by considering a sliding window version of the cube attack with annihilators. Finally, we extend the cube attack to vectorial Boolean functions by finding implicit relations with low-degree polynomials.
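The core mechanism can be shown on a toy polynomial: XOR-summing a black-box Boolean function over all assignments of a chosen "cube" of IV bits leaves, over GF(2), the superpoly of that cube, which here is linear in the key bits. The polynomial below is made up for illustration and is not from any real cipher:

```python
import itertools

# Toy cube attack sketch: summing f over the cube {v0, v1} isolates
# the superpoly k0 + k1 of the maxterm v0*v1.

def f(key, iv):
    """Black-box polynomial over GF(2): k0*v0*v1 + k1*v0*v1 + v0 + k0*k1."""
    k0, k1 = key
    v0, v1 = iv
    return (k0 & v0 & v1) ^ (k1 & v0 & v1) ^ v0 ^ (k0 & k1)

def cube_sum(key, cube_bits=2):
    """XOR of f over all 2^cube_bits assignments of the cube IV bits."""
    acc = 0
    for iv in itertools.product((0, 1), repeat=cube_bits):
        acc ^= f(key, iv)
    return acc

for key in itertools.product((0, 1), repeat=2):
    print(key, cube_sum(key))   # equals k0 ^ k1 for every key
```

Terms of degree lower than the cube (here v0 and k0·k1) cancel in the sum, which is why recovering low-degree multiples or annihilators of the filter function, as the abstract describes, lowers the attack complexity.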

  8. Simulation of structure and annihilation of screw dislocation dipoles

    DEFF Research Database (Denmark)

    Rasmussen, Torben; Vegge, Tejs; Leffers, Torben; Pedersen, O. B.; Jacobsen, Karsten Wedel

    2000-01-01

    Large scale atomistic simulations are used to investigate the properties of screw dislocation dipoles in copper. Spontaneous annihilation is observed for dipole heights less than 1 nm. Equilibrated dipoles of heights larger than 1 nm adopt a skew configuration due to the elastic anisotropy of Cu. The equilibrium splitting width of the screw dislocations decreases with decreasing dipole height, as expected from elasticity theory. The energy barriers, and corresponding transition states, for annihilation of stable dipoles are determined for straight and for flexible dislocations for dipole heights up to 5.2 nm. In both cases the annihilation is initiated by cross-slip of one of the dislocations. For straight dislocations the activation energy shows a linear dependence on the inverse dipole height, and for flexible dislocations the dependence is roughly linear for the dipoles investigated.

  9. Applications of positron annihilation to dermatology and skin cancer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Guang; Chen, Hongmin; Chakka, Lakshmi [Department of Chemistry, University of Missouri-Kansas City, Kansas City, MO 64110 (United States); Gadzia, Joseph E. [Dermatology, Department of Internal Medicine, University of Kansas Medical Center, Kansas City, KS 66103 and Kansas Medical Clinic, Topeka, KS 66614 (United States); Jean, Y.C. [Department of Chemistry, University of Missouri-Kansas City, Kansas City, MO 64110 (United States); R and D Center for Membrane Technology, Chung Yuan Christian University, Chung-Li (China)

    2007-07-01

    Positronium annihilation lifetime experiments have been performed to investigate the interaction between skin cancer and positronium in human skin samples. The positronium annihilation lifetime is found to be shorter, and its intensity smaller, for samples with basal cell carcinoma and squamous cell carcinoma than for normal skin samples. These results indicate a reduction of free volume at the molecular level in skin with cancer with respect to skin without cancer. Positron annihilation spectroscopy may potentially be developed as a new noninvasive, external method for dermatology clinics, early detection of cancer, and nano-PET technology in the future. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  10. Probably Almost Bayes Decisions

    DEFF Research Database (Denmark)

    Anoulova, S.; Fischer, Paul; Poelt, S.; Simon, H.- U.

    1996-01-01

    In this paper, we investigate the problem of classifying objects which are given by feature vectors with Boolean entries. Our aim is to "(efficiently) learn probably almost optimal classifications" from examples. A classical approach in pattern recognition uses empirical estimations of the Bayesian...

  11. Epistemology and Probability

    CERN Document Server

    Plotnitsky, Arkady

    2010-01-01

    Offers an exploration of the relationships between epistemology and probability in the work of Niels Bohr, Werner Heisenberg, and Erwin Schrodinger; in quantum mechanics; and in modern physics. This book considers the implications of these relationships and of quantum theory for our understanding of the nature of thinking and knowledge in general

  12. Logic, Truth and Probability

    OpenAIRE

    Quznetsov, Gunn

    1998-01-01

    The propositional logic is generalized on the real numbers field. The logical analog of the Bernoulli independent tests scheme is constructed. The variant of the nonstandard analysis is adopted for the definition of the logical function, which has all properties of the classical probability function. The logical analog of the Large Number Law is deduced from properties of this function.

  13. Logic and probability

    OpenAIRE

    Quznetsov, G. A.

    2003-01-01

    The propositional logic is generalized on the real numbers field. The logical analog of the Bernoulli independent tests scheme is constructed. The variant of the nonstandard analysis is adopted for the definition of the logical function, which has all properties of the classical probability function. The logical analog of the Large Number Law is deduced from properties of this function.

  14. Transition probabilities for atoms

    International Nuclear Information System (INIS)

    Current status of advanced theoretical methods for transition probabilities for atoms and ions is discussed. An experiment on the f values of the resonance transitions of the Kr and Xe isoelectronic sequences is suggested as a test for the theoretical methods

  15. Counterexamples in probability

    CERN Document Server

    Stoyanov, Jordan M

    2013-01-01

    While most mathematical examples illustrate the truth of a statement, counterexamples demonstrate a statement's falsity. Enjoyable topics of study, counterexamples are valuable tools for teaching and learning. The definitive book on the subject in regards to probability, this third edition features the author's revisions and corrections plus a substantial new appendix.

  16. Positron annihilation study of vacancy-type defects in Al single crystal foils with the tweed structures across the surface

    Energy Technology Data Exchange (ETDEWEB)

    Kuznetsov, Pavel, E-mail: kpv@ispms.tsc.ru [National Research Tomsk Polytechnic University, Tomsk, 634050 (Russian Federation); Institute of Strength Physics and Materials Science SB RAS, Tomsk, 634055 (Russian Federation); Cizek, Jacub, E-mail: jcizek@mbox.troja.mff.cuni.cz; Hruska, Petr [Charles University in Prague, Praha, CZ-18000 Czech Republic (Czech Republic); Anwad, Wolfgang [Institut für Strahlenphysik, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, D-01314 Germany (Germany); Bordulev, Yuri; Lider, Andrei; Laptev, Roman [National Research Tomsk Polytechnic University, Tomsk, 634050 (Russian Federation); Mironov, Yuri [Institute of Strength Physics and Materials Science SB RAS, Tomsk, 634055 (Russian Federation)

    2015-10-27

    The vacancy-type defects in aluminum single crystal foils after a series of cyclic tensions were studied using positron annihilation. Two components were identified in the positron lifetime spectra, associated with the annihilation of free positrons and of positrons trapped at dislocations. With an increasing number of cycles the dislocation density first increases and reaches a maximum at N = 10 000 cycles, but then it gradually decreases and at N = 70 000 cycles falls to the level typical of the virgin samples. Direct evidence of the formation of a two-phase system “defective near-surface layer/base Al crystal” in aluminum foils under cyclic tension was obtained using a positron beam with variable energy.
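Extracting a defect signal from a two-component lifetime spectrum is usually done with the standard one-trap positron trapping model, κ = (I₂/I₁)(1/τ_bulk − 1/τ₂). A sketch with illustrative numbers (τ_bulk ≈ 160 ps is typical for Al; the intensities and trap lifetime are assumed, not the paper's fitted values):

```python
# One-trap positron trapping model: converts a two-component lifetime
# spectrum (free + trapped positrons) into a trapping rate,
# kappa = (I2/I1) * (1/tau_bulk - 1/tau_2).

def trapping_rate(i1, i2, tau_bulk_ps, tau2_ps):
    """Trapping rate in 1/ns from intensities and lifetimes (ps)."""
    lam_bulk = 1000.0 / tau_bulk_ps    # annihilation rates in 1/ns
    lam_2 = 1000.0 / tau2_ps
    return (i2 / i1) * (lam_bulk - lam_2)

# Illustrative values: bulk Al ~160 ps, dislocation trap ~230 ps,
# 40% of positrons annihilating from the trapped state.
kappa = trapping_rate(i1=0.6, i2=0.4, tau_bulk_ps=160.0, tau2_ps=230.0)
print(f"kappa ≈ {kappa:.2f} / ns")
```

The dislocation density then follows as κ divided by the specific trapping coefficient of dislocations, which is how the cycle-dependent density curve in the abstract is obtained.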

  17. Positron annihilation study of vacancy-type defects in Al single crystal foils with the tweed structures across the surface

    International Nuclear Information System (INIS)

    The vacancy-type defects in aluminum single crystal foils after a series of cyclic tensions were studied using positron annihilation. Two components were identified in the positron lifetime spectra, associated with the annihilation of free positrons and of positrons trapped at dislocations. With an increasing number of cycles the dislocation density first increases and reaches a maximum at N = 10 000 cycles, but then it gradually decreases and at N = 70 000 cycles falls to the level typical of the virgin samples. Direct evidence of the formation of a two-phase system “defective near-surface layer/base Al crystal” in aluminum foils under cyclic tension was obtained using a positron beam with variable energy

  18. Negative probability in the framework of combined probability

    OpenAIRE

    Burgin, Mark

    2013-01-01

    Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...

  19. A positron annihilation study of damage distribution in C⁶⁺ irradiated nickel

    International Nuclear Information System (INIS)

    Positron annihilation technique (PAT) is able to give good quantitative results for both extended defects and simple defects. The sample thickness must fulfil a special requirement in positron annihilation measurements, so applications of PAT to the study of damage distributions induced by heavy ion irradiation are seriously restricted. An attempt to tackle this problem has been made. A stack of samples of pure nickel was irradiated at ambient temperature under a vacuum of 1.33 × 10⁻⁴ Pa with 42.5 MeV/A C⁶⁺ ions at the Heavy Ion Research Facility of Lanzhou (HIRFL). The total dose was 1.33 × 10¹⁵ cm⁻² at a current density of 2 × 10⁹ cm⁻²s⁻¹. Four adjacent samples as a group, whose combined thickness meets the requirement of PAT, were successively taken out in the order of stacking and measured with a fast-fast coincidence system which had a resolution of 240 ps. By analysing the lifetime spectra with the computer code POSITRONFIT-EXTENDED, the distributions of the reduced trapping rate, k, and the mean lifetime, τ-bar, along the penetration depth of the incident ions were obtained. The theoretical calculation of the damage distribution was carried out with a computer program, HEDEP-1, which takes the nuclear force into account and extends the energy range of EDEP-1 to 100 MeV/A. The distributions of k, τ-bar, and the calculated damage energy show a similar trend, but the experimental damage peak is clearly broadened and its position is slightly shifted towards the incident direction of the ions. These phenomena may be attributed to the averaging effect of the four samples in the positron annihilation measurement and to complicating factors introduced by irradiation with ions of such high energy in defect production

  20. Effect of thermal treatment condition on the Ag precipitates in Al–Ag alloy studied by positron annihilation

    International Nuclear Information System (INIS)

    Formation of Ag precipitates in an Al–1 wt%Ag alloy after aging at different temperatures was studied by positron annihilation spectroscopy. It is found that aggregation of Ag atoms takes place during the natural aging process after the Al–Ag alloy was homogenized at 550 °C and quenched into room-temperature water. The Ag nanoclusters can trap positrons, so positron annihilation measurements give information on the precipitation of Ag atoms. After artificial aging at 120 °C, the Ag signal is enhanced, which indicates further aggregation of Ag atoms. However, after artificial aging of the sample at 200 °C, no Ag nanoclusters are observed; instead, the quenched-in vacancies show gradual recovery during this aging process. This is probably due to the dissolution of Ag clusters into the Al matrix at 200 °C. Furthermore, after the sample was first heat treated at 200 °C and then aged at 120 °C, Ag nanoclusters appear again. This implies that the formation of Ag precipitates during the natural aging process is assisted by the quenched-in vacancies. Temperature dependence of the positron annihilation measurements indicates that Ag nanoclusters are shallow positron traps, which makes it difficult to observe real-time Ag precipitate formation by positrons during artificial aging of the Al–Ag alloy

  1. Determination of molecular weight in polyvinylmethylsiloxane by positron annihilation

    International Nuclear Information System (INIS)

    In this paper a linear relation between the positron annihilation lifetime and the weight-average molecular weight of polyvinylmethylsiloxane is deduced in a simple way according to S.Y. Chuang's hypothesis. Experimental results with polyvinylmethylsiloxane show that there is indeed a linear relation between the positron annihilation lifetime and the weight-average molecular weight. The molecular weight of the polyvinylmethylsiloxane is determined from this linear relation, and the result is compared with results obtained by the viscosimetry method; the difference between them is less than 3.7%. The experimental results are discussed and a method to determine molecular weight is put forward
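The determination described above amounts to calibrating a line τ = c₀ + c₁·Mw and inverting it for a measured lifetime. A sketch with made-up calibration pairs (the real coefficients come from the paper's measurements):

```python
# Sketch of the linear lifetime/molecular-weight relation: fit
# tau = c0 + c1*Mw on calibration samples, then invert for an
# unknown sample's Mw. All numbers below are illustrative.

def fit_line(xs, ys):
    """Least-squares intercept and slope for y = c0 + c1*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    c1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    return my - c1 * mx, c1

mw = [2.0e4, 4.0e4, 6.0e4]      # calibration molecular weights
tau = [1.10, 1.20, 1.30]        # corresponding lifetimes (ns)
c0, c1 = fit_line(mw, tau)

tau_measured = 1.25
mw_estimated = (tau_measured - c0) / c1
print(f"estimated Mw ≈ {mw_estimated:.0f}")
```

The quoted 3.7% agreement with viscosimetry is the kind of cross-check this inversion would be validated against.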

  2. Revisiting Bremsstrahlung emission associated with Light Dark Matter annihilations

    OpenAIRE

    Boehm, C; Uwer, P.

    2006-01-01

    We compute the single bremsstrahlung emission associated with the pair annihilation of spin-0 particles into electrons and positrons, via the t-channel exchange of a heavy fermion. We compare our result with the work of Beacom et al. Unlike what is stated in the literature, we show that the bremsstrahlung cross section is not necessarily given by the tree-level annihilation cross section (for a generalized kinematics) times a factor related to the emission of a soft photon. Such a factoriza...

  3. On the effective operators for Dark Matter annihilations

    International Nuclear Information System (INIS)

    We consider effective operators describing Dark Matter (DM) interactions with Standard Model fermions. In the non-relativistic limit of the DM field, the operators can be organized according to their mass dimension and their velocity behaviour, i.e. whether they describe s- or p-wave annihilations. The analysis is carried out for self-conjugate DM (real scalar or Majorana fermion). In this case, the helicity suppression at work in the annihilation into fermions is lifted by electroweak bremsstrahlung. We construct and study all dimension-8 operators encoding such an effect. These results are of interest in indirect DM searches

  4. Simulation of structure and annihilation of screw dislocation dipoles

    DEFF Research Database (Denmark)

    Rasmussen, Torben; Vegge, Tejs; Leffers, Torben; Pedersen, O. B.; Jacobsen, Karsten Wedel

    2000-01-01

    Large scale atomistic simulations are used to investigate the properties of screw dislocation dipoles in copper. Spontaneous annihilation is observed for dipole heights less than 1 nm. Equilibrated dipoles of heights larger than 1 nm adopt a skew configuration due to the elastic anisotropy of Cu. The equilibrium splitting width of the screw dislocations decreases with decreasing dipole height, as expected from elasticity theory. The energy barriers, and corresponding transition states, for annihilation of stable dipoles are determined for straight and for flexible dislocations for dipole...

  5. Dark matter annihilation with s-channel internal Higgsstrahlung

    OpenAIRE

    Jason Kumar (Univ. of Hawaii); Jiajun Liao; Danny Marfatia

    2016-01-01

    We study the scenario of fermionic dark matter that annihilates to standard model fermions through an s-channel axial vector mediator. We point out that the well-known chirality suppression of the annihilation cross section can be alleviated by s-channel internal Higgsstrahlung. The shapes of the cosmic ray spectra are identical to that of t-channel internal Higgsstrahlung in the limit of a heavy mediating particle. Unlike the general case of t-channel bremsstrahlung, s-channel Higgsstrahlung...

  6. Annihilation physics of exotic galactic dark matter particles

    Science.gov (United States)

    Stecker, F. W.

    1990-01-01

    Various theoretical arguments make exotic heavy neutral weakly interacting fermions, particularly those predicted by supersymmetry theory, attractive candidates for making up the large amount of unseen gravitating mass in galactic halos. Such particles can annihilate with each other, producing secondary particles of cosmic-ray energies, among which are antiprotons, positrons, neutrinos, and gamma-rays. Spectra and fluxes of these annihilation products can be calculated, partly by making use of positron-electron collider data and quantum chromodynamic models of particle production derived therefrom. These spectra may provide detectable signatures of exotic particle remnants of the big bang.

  7. On the Direct Detection of Dark Matter Annihilation

    DEFF Research Database (Denmark)

    Cherry, John F.; Frandsen, Mads T.; Shoemaker, Ian M.

    2015-01-01

    We investigate the direct detection phenomenology of a class of dark matter (DM) models in which DM does not directly interact with nuclei, but rather the products of its annihilation do. When these annihilation products are very light compared to the DM mass, the scattering in direct detection ... cross sections has already been reached in a class of models. Moreover, the compatibility of dark matter direct detection experiments can be compared directly in E_min space without making assumptions about DM astrophysics, mass, or scattering form factors. Lastly, when DM has direct couplings...

  8. Pion production on exotic nuclei by antiproton annihilation

    International Nuclear Information System (INIS)

    Pion production by antiproton annihilation on neutron- or proton-rich nuclei is studied in a fully quantum mechanical description of the reaction process in terms of a distorted wave approach. The elementary elastic N-bar N and the N-bar N → nπ annihilation vertices are obtained from a t-matrix accounting for coupled channels effects. The theoretical amplitudes are used to derive a microscopic N-bar A optical model potential and the meson production vertices. Nuclear structure effects are taken into account microscopically. Results for meson production cross sections are presented.

  9. Positron annihilation studies on SmS and Sm₀.₈₅Y₀.₁₅S

    International Nuclear Information System (INIS)

    Angular distribution of annihilation photons has been measured in SmS and Sm₀.₈₅Y₀.₁₅S. The distribution curves show that SmS is ionic at NTP and Sm₀.₈₅Y₀.₁₅S is metallic, with an intermediate valence of 2.73 for the samarium ion. The large density of states due to the f-level appears around 3 mrad in the momentum density distribution. The results are in agreement with the available Mossbauer and lattice constant data. (author)

  10. X-ray photoelectron spectroscopy and positron annihilation spectroscopy analysis of surfactant affected FePt spintronic films

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Chun, E-mail: fengchun@ustb.edu.cn [Department of Materials Physics and Chemistry, University of Science and Technology Beijing, Beijing 100083 (China); Li, Xujing; Liu, Fen; Wang, Qiang [Department of Materials Physics and Chemistry, University of Science and Technology Beijing, Beijing 100083 (China); Yang, Meiyin [Department of Materials Physics and Chemistry, University of Science and Technology Beijing, Beijing 100083 (China); The Center for Micromagnetics and Information Technologies (MINT) and Department of Electrical and Computer Engineering, University of Minnesota, 200 Union St SE, Minneapolis, MN 55455 (United States); Zhao, Chongjun [Department of Materials Physics and Chemistry, University of Science and Technology Beijing, Beijing 100083 (China); Gong, Kui [Centre for the Physics of Materials and Department of Physics, McGill University, Montreal, Quebec, H3A2T8 Canada (Canada); Zhang, Peng; Wang, Bao-Yi; Cao, Xing-Zhong [Key Laboratory of Nuclear Analysis Techniques, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Yu, Guanghua [Department of Materials Physics and Chemistry, University of Science and Technology Beijing, Beijing 100083 (China)

    2014-07-01

    This paper reports the effects of surfactant Bi atomic diffusion on the microstructure evolution and the resulting property manipulation in FePt spintronic films, based on quantitative X-ray photoelectron spectroscopy and positron annihilation spectroscopy studies. The defect density in the FePt layer, which was tunable by varying the thermal treatment temperature, was found to be remarkably enhanced in correlation with the Bi atomic diffusion behavior. The observed defect density evolution substantially favors Fe(Pt) atomic migration and lowers the energy barrier for the atomic ordering transition, resulting in a great improvement in the hard magnetic properties of the films.

  11. X-ray photoelectron spectroscopy and positron annihilation spectroscopy analysis of surfactant affected FePt spintronic films

    International Nuclear Information System (INIS)

    This paper reports the effects of surfactant Bi atomic diffusion on the microstructure evolution and the resulting property manipulation in FePt spintronic films, based on quantitative X-ray photoelectron spectroscopy and positron annihilation spectroscopy studies. The defect density in the FePt layer, which was tunable by varying the thermal treatment temperature, was found to be remarkably enhanced in correlation with the Bi atomic diffusion behavior. The observed defect density evolution substantially favors Fe(Pt) atomic migration and lowers the energy barrier for the atomic ordering transition, resulting in a great improvement in the hard magnetic properties of the films.

  12. Exact Bures Probabilities of Separability

    CERN Document Server

    Slater, P B

    1999-01-01

    We reexamine the question of what constitutes the conditional Bures or "quantum Jeffreys" prior for a certain four-dimensional convex subset (P) of the eight-dimensional convex set (Q) of 3 x 3 density matrices (rho_{Q}). We find that two competing procedures yield related but not identical priors - the prior previously reported (J. Phys. A 29, L271 [1996]) being normalizable over P, the new prior here, not. Both methods rely upon the same formula of Dittmann for the Bures metric tensor g, but differ in the parameterized form of rho employed. In the earlier approach, the input is a member of P, that is rho_{P}, while here it is rho_{Q}, and only after this computation is the conditioning on P performed. Then, we investigate several one-dimensional subsets of the fifteen-dimensional set of 4 x 4 density matrices, to which we apply, in particular, the first methodology. Doing so, we determine exactly the conditional Bures probabilities of separability into product states of 2 x 2 density matrices. We find that ...
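    The record above takes the Bures metric (via Dittmann's formula for the metric tensor g) as given. For context, the Bures distance is the standard construction from the Uhlmann fidelity between density matrices; the formulas below are textbook definitions, not expressions quoted from the record itself:

```latex
F(\rho,\sigma) \;=\; \left[\operatorname{Tr}\sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}}\right]^{2},
\qquad
d_{B}(\rho,\sigma)^{2} \;=\; 2\left(1-\sqrt{F(\rho,\sigma)}\right).
```

    The Bures metric tensor g is the infinitesimal form of d_B along a parameterized family rho(theta), which is why the choice of parameterization (rho_P vs. rho_Q) matters in the two procedures compared above.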

  13. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    DEFF Research Database (Denmark)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael;

    2015-01-01

    experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment was applied. Furthermore, the Fermi...... cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi–Teller Z-law, even if experimental data conclude that the Z-law is inadequately describing annihilation on compounds....... We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds....

  14. Origin and annihilation physics of positrons in the Galaxy

    International Nuclear Information System (INIS)

    Gamma radiation at 511 keV has been observed toward the Galactic bulge region since the early 1970s. This emission is the signature of a large number of electron-positron annihilations, the positron being the electron's antiparticle. Unfortunately, the origin of the positrons responsible for this emission is still a mystery. Many positron-source candidates have been suggested, but none of them can account for the galactic annihilation emission. The spatial distribution of this emission is indeed very atypical. Since 2002, the SPI spectrometer onboard the INTEGRAL space laboratory has revealed an emission strongly concentrated toward the galactic bulge and a weaker emission from the galactic disk. This morphology is unusual because it does not correspond to any of the known galactic astrophysical-object or interstellar-matter distributions. The assumption that positrons annihilate close to their sources (i.e. that the spatial distribution of the annihilation emission reflects the spatial distribution of the sources) has consequently been called into question. Recent studies suggest that positrons could propagate far away from their sources before annihilating. This physical aspect could be the key point to solving the riddle of the galactic positron origin. This thesis is devoted to modelling the propagation and annihilation of positrons in the Galaxy, in order to compare simulated spatial models of the annihilation emission with recent measurements provided by SPI/INTEGRAL. This method allows constraints to be placed on the origin of galactic positrons. We therefore developed a Monte Carlo code for positron propagation within the Galaxy, in which we implemented all the theoretical and observational knowledge about positron physics (sources, transport modes, energy losses, annihilation modes) and the interstellar medium of our Galaxy (interstellar gas distributions, galactic magnetic fields, structures of the gaseous phases). Due to uncertainties in several physical parameters
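    The thesis summarized above builds a full Monte Carlo propagation code. As a deliberately simplified illustration of the basic loop it describes (inject a positron, random-walk it through the medium, apply energy losses, and annihilate either in flight or once thermalized), here is a 1D toy model; the function name, step size, loss rate, and in-flight probability are all invented for illustration and are not taken from the thesis:

```python
import random

def propagate_positrons(n_positrons, e0_kev=1000.0, loss_per_step_kev=50.0,
                        p_inflight=0.01, step_pc=10.0, seed=1):
    """Toy 1D Monte Carlo: each positron random-walks, losing a fixed
    energy per step; at each step it may annihilate in flight, otherwise
    it annihilates where it thermalizes (energy reaches ~0).
    Returns the distance from the source at annihilation, per positron."""
    rng = random.Random(seed)
    distances = []
    for _ in range(n_positrons):
        energy, x = e0_kev, 0.0
        while energy > 0:
            x += rng.choice((-1.0, 1.0)) * step_pc   # random-walk step
            energy -= loss_per_step_kev              # continuous losses
            if rng.random() < p_inflight:            # in-flight annihilation
                break
        distances.append(abs(x))
    return distances

d = propagate_positrons(2000)
mean_d = sum(d) / len(d)
```

    Comparing the histogram of such annihilation distances against the source distribution is, in miniature, the test the thesis performs when it asks whether the 511 keV morphology can differ from the source morphology.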

  15. Studies of micellar and microemulsion systems by positron annihilation technique and their relevance to emulsion counting of weak beta emitters

    International Nuclear Information System (INIS)

    The positron annihilation technique was applied to the study of structural aspects of micellar and microemulsion systems. The formation of positronium is highly affected by the nature as well as the microproperties of the aggregates present in solution. In three-component (surfactant-co-surfactant-water) reversed micellar solutions and four-component (surfactant-co-surfactant-hydrocarbon-water) water-in-oil microemulsions, the formation of small aggregates, spherical micelles or elongated micelles is sensitively identified by the positron annihilation technique. The correlation between the variation of the positronium formation probability with increasing water content and the nature of the different aggregates which are formed was used to determine the onset of association and the structural rearrangements occurring in many dispersion systems. Factors affecting the formation of microemulsions were investigated. Measurements in mixed normal micelles of a short-chain alcohol and an anionic surfactant demonstrate the potential of the technique to investigate small changes in the properties of such species. The greater sensitivity of the positron annihilation technique compared to conventional methods is also demonstrated. Measurement of both annihilation parameters and the counting efficiency of tritium (a weak beta emitter) in water-containing reversed micellar and microemulsion systems indicates that the fate of the beta particles (electrons), as measured in liquid scintillation counting, is similar to that of energetic positrons, as monitored by positron lifetime measurements. The positronium formation process and the phenomenon responsible for the weak beta counting efficiency are explained on the basis of trapping of energetic positrons and electrons by aggregates of sufficient polarity. Both processes provide a very sensitive probe for the study of structural changes in micellar solutions and microemulsions.

  16. Waste Package Misload Probability

    Energy Technology Data Exchange (ETDEWEB)

    J.K. Knudsen

    2001-11-20

    The objective of this calculation is to calculate the probability of occurrence of fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and of FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the events. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.
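    The procedure described in record 16 (categorize events, count total FA movements, form a frequency) amounts to a simple point estimate of a per-movement probability. The sketch below uses hypothetical counts, not the actual data from the Framatome ANP 2001a report, and also shows the "rule of three" style upper bound commonly quoted when zero events have been observed:

```python
import math

def per_move_probability(n_events, n_moves):
    """Frequentist point estimate: observed events / total FA movements."""
    if n_moves <= 0:
        raise ValueError("total FA movements must be positive")
    return n_events / n_moves

def zero_event_upper_bound(n_moves, confidence=0.95):
    """Upper limit on the per-move probability when no events at all
    were observed in n_moves (exact Poisson form of the rule of three)."""
    return -math.log(1.0 - confidence) / n_moves

# Hypothetical counts, NOT taken from the Framatome ANP 2001a report:
p_misload = per_move_probability(4, 100_000)  # 4 misloads in 1e5 moves
p_bound = zero_event_upper_bound(100_000)     # bound if none observed
```

    Splitting the event counts by category (misload vs. damage) before dividing by the total movement count reproduces the per-category probabilities the calculation aims at.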

  17. Contributions to quantum probability

    International Nuclear Information System (INIS)

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels 'possible to occur' or 'impossible to occur' to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a finite set can occur as the outcome
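    Chapter 2 above invokes the Popescu-Rohrlich box, the standard example of a no-signalling correlation that exceeds every quantum bound. The following self-contained check of its CHSH value is textbook material, not code from the thesis: the PR box reaches S = 4, above both the classical bound of 2 and the quantum (Tsirelson) bound of 2*sqrt(2):

```python
def pr_box(a, b, x, y):
    """Popescu-Rohrlich box: outputs satisfy a XOR b = x AND y;
    each of the two consistent outcome pairs has probability 1/2."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def correlator(box, x, y):
    """E_xy = sum over outcomes of (-1)^(a XOR b) * P(a,b|x,y)."""
    return sum((-1) ** (a ^ b) * box(a, b, x, y)
               for a in (0, 1) for b in (0, 1))

# CHSH combination: S = E00 + E01 + E10 - E11
S = sum(correlator(pr_box, x, y) * (-1 if (x, y) == (1, 1) else 1)
        for x in (0, 1) for y in (0, 1))
```

    In the possibilistic reading used in the thesis, the same box violates the inequalities already at the level of which outcomes are possible, without reference to the 1/2 weights.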

  18. Contributions to quantum probability

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Tobias

    2010-06-25

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels 'possible to occur' or 'impossible to occur' to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a

  19. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
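    The "Monte Carlo principle for the simulation of experiments" that the book describes is, in outline, the sampling approach of GUM Supplement 1: draw the inputs from their assigned distributions, push each draw through the measurement model, and summarize the spread of the output. A minimal sketch with a made-up model (R = V/I) and made-up input distributions, not an example from the book:

```python
import random
import statistics

def monte_carlo_uncertainty(model, draws, n=20_000, seed=0):
    """Propagate input distributions through a measurement model by
    Monte Carlo sampling; returns (mean, standard uncertainty)."""
    rng = random.Random(seed)
    out = [model(*(draw(rng) for draw in draws)) for _ in range(n)]
    return statistics.mean(out), statistics.stdev(out)

# Made-up example: resistance R = V / I with Gaussian V and I.
draw_v = lambda rng: rng.gauss(10.0, 0.05)  # volts
draw_i = lambda rng: rng.gauss(2.0, 0.01)   # amperes
mean_r, u_r = monte_carlo_uncertainty(lambda v, i: v / i, [draw_v, draw_i])
```

    A coverage interval (e.g. the 2.5th to 97.5th percentile of the sampled outputs) then plays the role of the book's "95 percent interval of measurement uncertainty".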

  20. Waste Package Misload Probability

    International Nuclear Information System (INIS)

    The objective of this calculation is to calculate the probability of occurrence of fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and of FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the events. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.