WorldWideScience

Sample records for annihilation probability density

  1. Annihilation probability density and other applications of the Schwinger multichannel method to the positron and electron scattering

    International Nuclear Information System (INIS)

    We have calculated annihilation probability densities (APD) for positron collisions against the He atom and the H2 molecule. It was found that direct annihilation prevails at low energies, while annihilation following virtual positronium (Ps) formation is the dominant mechanism at higher energies. In room-temperature collisions (10^{-2} eV) the APD spread over a considerable extension, being quite similar to the electronic densities of the targets. The capture of the positron in an electronic Feshbach resonance strongly enhanced the annihilation rate in e+-H2 collisions. We also discuss strategies to improve the calculation of the annihilation parameter (Zeff), after debugging the computational codes of the Schwinger Multichannel Method (SMC). Finally, we consider the inclusion of the Ps formation channel in the SMC and show that effective configurations (pseudo eigenstates of the Hamiltonian of the collision) are able to significantly reduce the computational effort in positron scattering calculations. Cross sections for electron scattering by polyatomic molecules were obtained in three different approximations: static-exchange (SE); static-exchange-plus-polarization (SEP); and multichannel coupling. The calculations for polar targets were improved through the rotational resolution of scattering amplitudes in which the SMC was combined with the first Born approximation (FBA). In general, elastic cross sections (SE and SEP approximations) showed good agreement with available experimental data for several targets. Multichannel calculations for e- -H2O scattering, on the other hand, presented spurious structures at the electronic excitation thresholds (author)
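    For reference (standard positron-scattering background, not specific to this thesis), the annihilation parameter and annihilation rate are commonly written as
    \[
    Z_{\mathrm{eff}}(k) \;=\; \sum_{i=1}^{Z} \int \big|\Psi_{\mathbf{k}}(\mathbf{r}_1,\ldots,\mathbf{r}_Z;\mathbf{r}_p)\big|^{2}\,\delta(\mathbf{r}_i-\mathbf{r}_p)\, d\mathbf{r}_1\cdots d\mathbf{r}_Z\, d\mathbf{r}_p ,
    \qquad
    \lambda \;=\; \pi r_0^2\, c\, n\, Z_{\mathrm{eff}} ,
    \]
    where $\Psi_{\mathbf{k}}$ is the positron-target scattering wave function, $\mathbf{r}_p$ the positron coordinate, $r_0$ the classical electron radius, $c$ the speed of light and $n$ the gas number density; the annihilation probability density discussed above is essentially this integrand regarded as a function of the positron coordinate.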

  2. Annihilation probability density and other applications of the Schwinger multichannel method to the positron and electron scattering; Densidade de probabilidade de aniquilacao e outras aplicacoes do metodo multicanal de Schwinger ao espalhamento de positrons e eletrons

    Energy Technology Data Exchange (ETDEWEB)

    Varella, Marcio Teixeira do Nascimento

    2001-12-15

    We have calculated annihilation probability densities (APD) for positron collisions against the He atom and the H{sub 2} molecule. It was found that direct annihilation prevails at low energies, while annihilation following virtual positronium (Ps) formation is the dominant mechanism at higher energies. In room-temperature collisions (10{sup -2} eV) the APD spread over a considerable extension, being quite similar to the electronic densities of the targets. The capture of the positron in an electronic Feshbach resonance strongly enhanced the annihilation rate in e{sup +}-H{sub 2} collisions. We also discuss strategies to improve the calculation of the annihilation parameter (Z{sub eff}), after debugging the computational codes of the Schwinger Multichannel Method (SMC). Finally, we consider the inclusion of the Ps formation channel in the SMC and show that effective configurations (pseudo eigenstates of the Hamiltonian of the collision) are able to significantly reduce the computational effort in positron scattering calculations. Cross sections for electron scattering by polyatomic molecules were obtained in three different approximations: static-exchange (SE); static-exchange-plus-polarization (SEP); and multichannel coupling. The calculations for polar targets were improved through the rotational resolution of scattering amplitudes in which the SMC was combined with the first Born approximation (FBA). In general, elastic cross sections (SE and SEP approximations) showed good agreement with available experimental data for several targets. Multichannel calculations for e{sup -}-H{sub 2}O scattering, on the other hand, presented spurious structures at the electronic excitation thresholds (author)

  3. Trajectory probability hypothesis density filter

    OpenAIRE

    García-Fernández, Ángel F.; Svensson, Lennart

    2016-01-01

    This paper presents the probability hypothesis density (PHD) filter for sets of trajectories. The resulting filter, referred to as the trajectory probability hypothesis density (TPHD) filter, is capable of estimating trajectories in a principled way without requiring the evaluation of all measurement-to-target association hypotheses. Like the PHD filter, the TPHD filter is based on recursively obtaining the best Poisson approximation to the multitrajectory filtering density in the sense of minimising the K...

  4. Probability densities and Lévy densities

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler

    For positive Lévy processes (i.e. subordinators) formulae are derived that express the probability density or the distribution function in terms of power series in time t. The applicability of the results to finance and to turbulence is briefly indicated.

  5. Trajectory versus probability density entropy

    Science.gov (United States)

    Bologna, Mauro; Grigolini, Paolo; Karagiorgis, Markos; Rosa, Angelo

    2001-07-01

    We show that the widely accepted conviction that a connection can be established between the probability density entropy and the Kolmogorov-Sinai (KS) entropy is questionable. We adopt the definition of density entropy as a functional of a distribution density whose time evolution is determined by a transport equation, conceived as the only prescription to use for the calculation. Although the transport equation is built up for the purpose of affording a picture equivalent to that stemming from trajectory dynamics, no direct use of trajectory time evolution is allowed, once the transport equation is defined. With this definition in mind we prove that the detection of a time regime of increase of the density entropy with a rate identical to the KS entropy is possible only in a limited number of cases. The proposals made by some authors to establish a connection between the two entropies in general violate our definition of density entropy and imply the concept of trajectory, which is foreign to that of density entropy.

  6. On Explicit Probability Densities Associated with Fuss-Catalan Numbers

    OpenAIRE

    Liu, Dang-Zheng; Song, Chunwei; Wang, Zheng-Dong

    2010-01-01

    In this note we give explicitly a family of probability densities, the moments of which are Fuss-Catalan numbers. The densities appear naturally in random matrices, free probability and other contexts.
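    As background (one common convention; the paper's normalization may differ), the Fuss-Catalan numbers and the moment condition satisfied by such a density $\rho_p$ are
    \[
    FC_p(n) \;=\; \frac{1}{pn+1}\binom{pn+1}{n},
    \qquad
    \int x^{\,n}\,\rho_p(x)\,dx \;=\; FC_p(n), \quad n = 0,1,2,\ldots
    \]
    For $p=2$ these reduce to the Catalan numbers, the moments of the Marchenko-Pastur law with unit ratio, which is consistent with the random-matrix connection mentioned in the abstract.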

  7. Annihilation Radiation Gauge for Relative Density and Multiphase Fluid Monitoring

    Directory of Open Access Journals (Sweden)

    Vidal A.

    2014-03-01

    Knowledge of multiphase flow parameters is important for the petroleum industry, specifically during transport in pipelines and in networks related to exploitation wells. Crude oil flow is studied by Monte Carlo simulation and experimentally to determine the transient liquid phase in a laboratory system. Relative density and fluid-phase time variation are monitored employing a fast nuclear data acquisition setup that includes two large-volume BaF2 scintillator detectors coupled to an electronic chain, with data display in a LabView® environment. Fluid parameters are determined from the difference in the coincidence count rate (CCR). The operational characteristics of the equipment indicate that a 2% deviation in the CCR corresponds to a variation, on average, of 20% in the liquid fraction of the multiphase fluid.

  8. Hilbert Space of Probability Density Functions Based on Aitchison Geometry

    Institute of Scientific and Technical Information of China (English)

    J. J. EGOZCUE; J. L. DÍAZ-BARRERO; V. PAWLOWSKY-GLAHN

    2006-01-01

    The set of probability functions is a convex subset of L1 and it does not have a linear space structure when using ordinary sum and multiplication by real constants. Moreover, difficulties arise when dealing with distances between densities. The crucial point is that usual distances are not invariant under relevant transformations of densities. To overcome these limitations, Aitchison's ideas on compositional data analysis are used, generalizing perturbation and power transformation, as well as the Aitchison inner product, to operations on probability density functions with support on a finite interval. With these operations at hand, it is shown that the set of bounded probability density functions on finite intervals is a pre-Hilbert space. A Hilbert space of densities, whose logarithm is square-integrable, is obtained as the natural completion of the pre-Hilbert space.
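    A sketch of the operations involved, in the form used for Aitchison geometry on densities over a finite interval $[a,b]$ (the exact normalization in the paper may differ): perturbation, powering and the inner product are
    \[
    (p\oplus q)(x)=\frac{p(x)\,q(x)}{\int_a^b p(u)\,q(u)\,du},
    \qquad
    (\alpha\odot p)(x)=\frac{p(x)^{\alpha}}{\int_a^b p(u)^{\alpha}\,du},
    \qquad
    \langle p,q\rangle=\frac{1}{2(b-a)}\int_a^b\!\!\int_a^b \ln\frac{p(x)}{p(y)}\,\ln\frac{q(x)}{q(y)}\,dx\,dy ,
    \]
    under which the bounded densities on $[a,b]$ form the pre-Hilbert space described above.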

  9. A Probability Density Function for Neutrino Masses and Mixings

    CERN Document Server

    Fortin, Jean-François; Marleau, Luc

    2016-01-01

    The anarchy principle leading to the see-saw ensemble is studied analytically with the usual tools of random matrix theory. The probability density function for the see-saw ensemble of $N\\times N$ matrices is obtained in terms of a multidimensional integral. This integral involves all light neutrino masses, leading to a complicated probability density function. It is shown that the probability density function for the neutrino mixing angles and phases is the appropriate Haar measure. The decoupling of the light neutrino masses and neutrino mixings implies no correlation between the neutrino mass eigenstates and the neutrino mixing matrix, in contradiction with observations but in agreement with some of the claims found in the literature.

  10. Does the probability density imply the equation of motion?

    International Nuclear Information System (INIS)

    Full text: The laws of physics dictate the evolution of matter and radiation. Quantum mechanics postulates that the matter or radiation is associated with a field whose magnitude is interpreted as the probability density, which is the only observable quantity. In general this field is either a single-component or multi-component complex scalar field, whose laws of evolution may be expressed in the form of partial differential equations. One may ask: does the probability density of the complex scalar field imply the evolution of the field? Here we answer this fundamental question by examining a means for measuring the equation of motion of a single-component complex scalar field associated with a non-dissipative and nonlinear system, given measurements of the probability density. Applications of this formalism to a number of systems in condensed matter physics will be discussed

  11. Probability density function modeling for sub-powered interconnects

    Science.gov (United States)

    Pater, Flavius; Amaricǎi, Alexandru

    2016-06-01

    This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit for the switching delay versus probability of correct switching of sub-powered interconnects. We compare the three mathematical models with Monte Carlo simulations of interconnects in 45 nm CMOS technology supplied at 0.25 V.
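    As an illustration of the model-fitting step, a minimal sketch in Python that fits only the Gaussian model of the three to Monte Carlo switching-delay samples; the delay values and their parameters are hypothetical stand-ins for the 45 nm simulation data used in the paper.

      import numpy as np
      from scipy.stats import norm

      # Hypothetical Monte Carlo switching-delay samples (seconds)
      rng = np.random.default_rng(0)
      delays = rng.normal(loc=2.0e-9, scale=0.3e-9, size=5000)

      # Maximum-likelihood Gaussian fit to the sampled delays
      mu, sigma = norm.fit(delays)

      # Compare the fitted density with the normalized histogram (sum of squared errors)
      counts, edges = np.histogram(delays, bins=50, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      sse = np.sum((counts - norm.pdf(centers, mu, sigma)) ** 2)
      print(f"mu = {mu:.3e} s, sigma = {sigma:.3e} s, SSE vs histogram = {sse:.3e}")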

  12. Nonparametric probability density estimation by optimization theoretic techniques

    Science.gov (United States)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
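    A minimal sketch of the first estimator discussed above, a Gaussian kernel density estimate; the automatic bandwidth below is Silverman's rule of thumb, used here as a stand-in for the record's interactive/automatic choice of the kernel scaling factor.

      import numpy as np

      def kde_gaussian(sample, x_grid):
          """Gaussian kernel density estimate with a rule-of-thumb scaling factor."""
          sample = np.asarray(sample, dtype=float)
          n = sample.size
          h = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)      # kernel scaling factor
          u = (x_grid[:, None] - sample[None, :]) / h        # scaled distances to data
          return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

      rng = np.random.default_rng(1)
      data = rng.normal(size=200)
      grid = np.linspace(-4.0, 4.0, 201)
      density = kde_gaussian(data, grid)
      print(float((density * (grid[1] - grid[0])).sum()))    # integrates to roughly 1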

  13. Impact of SUSY-QCD corrections on neutralino-stop co-annihilation and the neutralino relic density

    Energy Technology Data Exchange (ETDEWEB)

    Harz, Julia [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Herrmann, Bjoern [Savoie Univ./CNRS, Annecy-le-Vieux (France). LAPTh; Klasen, Michael [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1; Kovarik, Karol [Karlsruher Institut fuer Technologie, Karlsruhe (Germany). Inst. fuer Theoretische Physik; Le Boulc'h, Quentin [Grenoble Univ./CNRS-IN2P3/INPG, Grenoble (France). Lab. de Physique Subatomique et de Cosmologie

    2013-02-15

    We have calculated the full O({alpha}{sub s}) supersymmetric QCD corrections to neutralino-stop coannihilation into electroweak vector and Higgs bosons within the Minimal Supersymmetric Standard Model (MSSM). We performed a parameter study within the phenomenological MSSM and demonstrated that the studied co-annihilation processes are phenomenologically relevant, especially in the context of a 126 GeV Higgs-like particle. By means of an example scenario we discuss the effect of the full next-to-leading order corrections on the co-annihilation cross section and show their impact on the predicted neutralino relic density. We demonstrate that the impact of these corrections on the cosmologically preferred region of parameter space is larger than the current experimental uncertainty of WMAP data.

  14. Can the relic density of self-interacting dark matter be due to annihilations into Standard Model particles?

    CERN Document Server

    Chu, Xiaoyong; Hambye, Thomas

    2016-01-01

    Motivated by the hypothesis that dark matter self-interactions provide a solution to the small-scale structure formation problems, we investigate the possibilities that the relic density of a self-interacting dark matter candidate can proceed from the thermal freeze-out of annihilations into Standard Model particles. We find that scalar and Majorana dark matter in the mass range of $10-500$ MeV, coupled to a slightly heavier massive gauge boson, are the only possible candidates in agreement with multiple current experimental constraints. Here dark matter annihilations take place at a much slower rate than the self-interactions simply because the interaction connecting the Standard Model and the dark matter sectors is small. We also discuss prospects of establishing or excluding these two scenarios in future experiments.

  15. Vehicle Detection Based on Probability Hypothesis Density Filter

    Science.gov (United States)

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    In the past decade, vehicle detection has been significantly improved. By utilizing cameras, vehicles can be detected in Regions of Interest (ROI) in complex environments. However, vision techniques often suffer from false positives and a limited field of view. In this paper, a LiDAR-based vehicle detection approach is proposed using the Probability Hypothesis Density (PHD) filter. The proposed approach consists of two phases: the hypothesis generation phase to detect potential objects and the hypothesis verification phase to classify objects. The performance of the proposed approach is evaluated in complex scenarios and compared with the state-of-the-art. PMID:27070621

  16. On singular probability densities generated by extremal dynamics

    OpenAIRE

    Garcia, Guilherme J. M.; Dickman, Ronald

    2003-01-01

    Extremal dynamics is the mechanism that drives the Bak-Sneppen model into a (self-organized) critical state, marked by a singular stationary probability density $p(x)$. With the aim of understanding this phenomenon, we study the BS model and several variants via mean-field theory and simulation. In all cases, we find that $p(x)$ is singular at one or more points, as a consequence of extremal dynamics. Furthermore we show that the extremal barrier $x_i$ always belongs to the `prohibited' inter...

  17. INTERACTIVE VISUALIZATION OF PROBABILITY AND CUMULATIVE DENSITY FUNCTIONS

    KAUST Repository

    Potter, Kristin

    2012-01-01

    The probability density function (PDF), and its corresponding cumulative density function (CDF), provide direct statistical insight into the characterization of a random process or field. Typically displayed as a histogram, one can infer probabilities of the occurrence of particular events. When examining a field over some two-dimensional domain in which at each point a PDF of the function values is available, it is challenging to assess the global (stochastic) features present within the field. In this paper, we present a visualization system that allows the user to examine two-dimensional data sets in which PDF (or CDF) information is available at any position within the domain. The tool provides a contour display showing the normed difference between the PDFs and an ansatz PDF selected by the user and, furthermore, allows the user to interactively examine the PDF at any particular position. Canonical examples of the tool are provided to help guide the reader into the mapping of stochastic information to visual cues along with a description of the use of the tool for examining data generated from an uncertainty quantification exercise accomplished within the field of electrophysiology.

  18. Probability density function transformation using seeded localized averaging

    International Nuclear Information System (INIS)

    Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) in the process of applying the SLA procedure. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: (1) for symmetric distributions, SLA preserves both the mean and the symmetry; (2) for unimodal symmetric distributions, SLA reduces the variance, sharpening the distribution's peak. Our results are the first to prove these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with a Gaussian PDF, the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)

  19. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector by applying Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P{sub FA}, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P{sub D}. This is summarized by the Receiver Operating Characteristic (ROC) curve [10, 11], which is actually a family of curves depicting P{sub D} vs. P{sub FA}, parameterized by varying levels of signal-to-noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P{sub FA} and develop a ROC curve (P{sub D} vs. decision threshold r{sub 0}) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
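    A minimal sketch of the CFAR thresholding idea only (not the LASI algorithms themselves): given matched-filter scores from background-only pixels, the decision threshold r0 for a requested P_FA is the corresponding upper quantile of the estimated background density; the score array below is a hypothetical stand-in.

      import numpy as np

      def cfar_threshold(background_scores, p_fa):
          """Decision threshold r0 giving the requested probability of false alarm."""
          return float(np.quantile(background_scores, 1.0 - p_fa))

      rng = np.random.default_rng(2)
      scores = rng.normal(0.0, 1.0, size=100_000)   # stand-in background matched-filter scores
      r0 = cfar_threshold(scores, p_fa=1e-3)
      print(f"r0 = {r0:.3f}, realized P_FA = {(scores > r0).mean():.2e}")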

  20. Accurate photometric redshift probability density estimation - method comparison and application

    CERN Document Server

    Rau, Markus Michael; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-01-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which vastly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single-value point estimate of the galaxy redshift that can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs when compared with a popular Neural Network code (ANNz). In our use case, this improvemen...

  1. Shape control on probability density function in stochastic systems

    Institute of Scientific and Technical Information of China (English)

    Lingzhi Wang; Fucai Qian; Jun Liu

    2014-01-01

    A novel strategy of probability density function (PDF) shape control is proposed for stochastic systems. The controller is designed with parameters that are optimally obtained through an improved particle swarm optimization algorithm. The parameters of the controller are viewed as the space position of a particle in the particle swarm optimization algorithm and are updated continually until the controller makes the PDF of the state variable as close as possible to the expected PDF. The proposed PDF shape control technique is compared with the equivalent linearization technique through simulation experiments. The results show the superiority and the effectiveness of the proposed method. The controller is excellent in making the state PDF follow the expected PDF, with a very small error between the state PDF and the expected PDF, thus effectively solving the PDF shape control problem in stochastic systems.

  2. Can the relic density of self-interacting dark matter be due to annihilations into Standard Model particles?

    OpenAIRE

    Chu, Xiaoyong; Garcia-Cely, Camilo; Hambye, Thomas

    2016-01-01

    Motivated by the hypothesis that dark matter self-interactions provide a solution to the small-scale structure formation problems, we investigate the possibilities that the relic density of a self-interacting dark matter candidate can proceed from the thermal freeze-out of annihilations into Standard Model particles. We find that scalar and Majorana dark matter in the mass range of $10-500$ MeV, coupled to a slightly heavier massive gauge boson, are the only possible candidates in agreement w...

  3. Downward Price Rigidity of the Japanese CPI -- Analysis by Probability Density Functions and Spatial Density Functions

    OpenAIRE

    Munehisa Kasuya

    1999-01-01

    We define downward price rigidity as the state in which the speed at which prices fall is slower than that in which they rise. Based on this definition, we examine the downward price rigidity of each item that constitutes the core CPI of Japan. That is, according to the results of fractional integration tests on price changes of individual items, we estimate probability density functions in the stationary case and estimate spatial density functions in the nonstationary case. We also test thei...

  4. Interactive design of probability density functions for shape grammars

    KAUST Repository

    Dang, Minh

    2015-11-02

    A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density function (pdf) over such a shape space and to sample models according to the designed pdf. First, we propose a user interface that enables a user to quickly provide preference scores for selected shapes and suggest sampling strategies to decide which models to present to the user to evaluate. Second, we propose a novel kernel function to encode the similarity between two procedural models. Third, we propose a framework to interpolate user preference scores by combining multiple techniques: function factorization, Gaussian process regression, autorelevance detection, and l1 regularization. Fourth, we modify the original grammars to generate models with a pdf proportional to the user preference scores. Finally, we provide evaluations of our user interface and framework parameters and a comparison to other exploratory modeling techniques using modeling tasks in five example shape spaces: furniture, low-rise buildings, skyscrapers, airplanes, and vegetation.

  5. Parameterizing deep convection using the assumed probability density function method

    Directory of Open Access Journals (Sweden)

    R. L. Storer

    2014-06-01

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
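    A minimal sketch of the Monte Carlo interface between an assumed subgrid PDF and a process-rate calculation, under strong simplifying assumptions: the joint PDF is taken as bivariate normal in vertical velocity and total water, and the microphysics is a toy threshold rate; both are hypothetical stand-ins for the scheme used in the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      # Assumed subgrid joint PDF of vertical velocity w [m/s] and total water qt [kg/kg]
      mean = np.array([0.0, 8.0e-3])
      cov = np.array([[0.25, 1.0e-4],
                      [1.0e-4, 1.0e-6]])

      def toy_autoconversion(qt, qsat=7.5e-3, k=1.0e-3):
          """Stand-in process rate: active only where qt exceeds a saturation value."""
          return k * np.maximum(qt - qsat, 0.0)

      # Sample the assumed PDF and average the process rate over the samples
      samples = rng.multivariate_normal(mean, cov, size=10_000)
      grid_mean_rate = toy_autoconversion(samples[:, 1]).mean()
      print(f"grid-mean rate from the sampled PDF: {grid_mean_rate:.3e} kg/kg/s")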

  6. Parameterizing deep convection using the assumed probability density function method

    Energy Technology Data Exchange (ETDEWEB)

    Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  7. Failure Analysis of Wind Turbines by Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W.F.;

    2013-01-01

    The aim of this study is to present an efficient and accurate method for estimation of the failure probability of wind turbine structures which work under turbulent wind load. The classical method for this is to fit one of the extreme value probability distribution functions to the extracted maxi...

  8. The stationary probability density of a class of bounded Markov processes

    OpenAIRE

    Ramli, Muhamad Azfar; Leng, Gerard

    2010-01-01

    In this paper we generalize a bounded Markov process, described by Stoyanov and Pacheco-González for a class of transition probability functions. A recursive integral equation for the probability density of these bounded Markov processes is derived and the stationary probability density is obtained by solving an equivalent differential equation. Examples of stationary densities for different transition probability functions are given and an application for designing a roboti...

  9. Fitting age-specific fertility rates by a skew-symmetric probability density function

    OpenAIRE

    Mazzuco, Stefano; Scarpa, Bruno

    2011-01-01

    Mixture probability density functions have recently been proposed to describe some fertility patterns characterized by a bi-modal shape. These mixture probability density functions appear to be adequate when the fertility pattern is actually bi-modal but less useful when the shape of age-specific fertility rates is unimodal. A further model is proposed based on skew-symmetric probability density functions. This model is both more parsimonious than mixture distributions and more flexible, sh...

  10. Microdefects and electron densities in NiTi shape memory alloys studied by positron annihilation

    Institute of Scientific and Technical Information of China (English)

    HU Yi-feng; DENG Wen; HAO Wen-bo; YUE Li; HUANG Le; HUANG Yu-yang; XIONG Liang-yue

    2006-01-01

    The microdefects and free electron densities in the B2, R and B19' phases of Ni50.78Ti49.22 alloy were studied by positron lifetime measurements. Comparing the lifetime parameters of the Ni50.78Ti49.22 alloy measured at 295 K and 225 K, it is found that the free electron density of the R phase is lower than that of the B2 phase; the open volume of the defects of the R phase is larger, while the concentration of these defects is lower than that of the B2 phase. The Ni50.78Ti49.22 alloy exhibits the B19' phase at 115 K. In comparison with the R phase, the free electron density of the B19' phase increases, the open volume of the defects of the B19' phase is reduced, and the concentration of these defects increases. The microdefects and the free electron density play an important role during the multi-step transformations (B2→R→B19' phase transformations) in the Ni50.78Ti49.22 alloy as the temperature decreases.

  11. Power-law tails in probability density functions of molecular cloud column density

    CERN Document Server

    Brunt, Chris

    2015-01-01

    Power-law tails are often seen in probability density functions (PDFs) of molecular cloud column densities, and have been attributed to the effect of gravity. We show that extinction PDFs of a sample of five molecular clouds obtained at a few tenths of a parsec resolution, probing extinctions up to $A_{\mathrm{V}} \sim 10$ magnitudes, are very well described by lognormal functions provided that the field selection is tightly constrained to the cold, molecular zone and that noise and foreground contamination are appropriately accounted for. In general, field selections that incorporate warm, diffuse material in addition to the cold, molecular material will display apparent core+tail PDFs. The apparent tail, however, is best understood as the high extinction part of a lognormal PDF arising from the cold, molecular part of the cloud. We also describe the effects of noise and foreground/background contamination on the PDF structure, and show that these can, if not appropriately accounted for, induce spurious ...
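    A minimal sketch of fitting a lognormal PDF to extinction (column density) samples; the synthetic A_V values below are hypothetical stand-ins for the extinction maps analysed in the paper.

      import numpy as np

      rng = np.random.default_rng(4)
      av = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=20_000)   # stand-in A_V values [mag]

      # A lognormal column-density PDF means ln(A_V) is Gaussian, so fit its moments
      log_av = np.log(av)
      mu, sigma = log_av.mean(), log_av.std(ddof=1)

      x = np.linspace(av.min(), np.percentile(av, 99.5), 400)
      pdf = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))
      print(f"fitted ln-mean = {mu:.3f}, ln-sigma = {sigma:.3f}, PDF peak = {pdf.max():.3f}")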

  12. Estimation of Extreme Response and Failure Probability of Wind Turbines under Normal Operation using Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.;

    2013-01-01

    Estimation of extreme response and failure probability of structures subjected to ultimate design loads is essential for structural design of wind turbines according to the new standard IEC61400-1. This task is focused on in the present paper in virtue of the probability density evolution method (PDEM), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-mega-watt wind turbines, for illustrative purposes, in case of given mean wind speeds and turbulence levels is investigated through the scheme of extreme value distribution instead of any other approximate schemes of fitted distribution currently used in statistical extrapolation techniques. Besides, the comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...

  13. Constraints on an annihilation signal from a core of constant dark matter density around the milky way center with H.E.S.S.

    Science.gov (United States)

    Abramowski, A; Aharonian, F; Ait Benkhali, F; Akhperjanian, A G; Angüner, E O; Backes, M; Balenderan, S; Balzer, A; Barnacka, A; Becherini, Y; Becker Tjus, J; Berge, D; Bernhard, S; Bernlöhr, K; Birsin, E; Biteau, J; Böttcher, M; Boisson, C; Bolmont, J; Bordas, P; Bregeon, J; Brun, F; Brun, P; Bryan, M; Bulik, T; Carrigan, S; Casanova, S; Chadwick, P M; Chakraborty, N; Chalme-Calvet, R; Chaves, R C G; Chrétien, M; Colafrancesco, S; Cologna, G; Conrad, J; Couturier, C; Cui, Y; Davids, I D; Degrange, B; Deil, C; deWilt, P; Djannati-Ataï, A; Domainko, W; Donath, A; Drury, L O'C; Dubus, G; Dutson, K; Dyks, J; Dyrda, M; Edwards, T; Egberts, K; Eger, P; Espigat, P; Farnier, C; Fegan, S; Feinstein, F; Fernandes, M V; Fernandez, D; Fiasson, A; Fontaine, G; Förster, A; Füßling, M; Gabici, S; Gajdus, M; Gallant, Y A; Garrigoux, T; Giavitto, G; Giebels, B; Glicenstein, J F; Gottschall, D; Grondin, M-H; Grudzińska, M; Hadasch, D; Häffner, S; Hahn, J; Harris, J; Heinzelmann, G; Henri, G; Hermann, G; Hervet, O; Hillert, A; Hinton, J A; Hofmann, W; Hofverberg, P; Holler, M; Horns, D; Ivascenko, A; Jacholkowska, A; Jahn, C; Jamrozy, M; Janiak, M; Jankowsky, F; Jung-Richardt, I; Kastendieck, M A; Katarzyński, K; Katz, U; Kaufmann, S; Khélifi, B; Kieffer, M; Klepser, S; Klochkov, D; Kluźniak, W; Kolitzus, D; Komin, Nu; Kosack, K; Krakau, S; Krayzel, F; Krüger, P P; Laffon, H; Lamanna, G; Lefaucheur, J; Lefranc, V; Lemière, A; Lemoine-Goumard, M; Lenain, J-P; Lohse, T; Lopatin, A; Lu, C-C; Marandon, V; Marcowith, A; Marx, R; Maurin, G; Maxted, N; Mayer, M; McComb, T J L; Méhault, J; Meintjes, P J; Menzler, U; Meyer, M; Mitchell, A M W; Moderski, R; Mohamed, M; Morå, K; Moulin, E; Murach, T; de Naurois, M; Niemiec, J; Nolan, S J; Oakes, L; Odaka, H; Ohm, S; Opitz, B; Ostrowski, M; Oya, I; Panter, M; Parsons, R D; Paz Arribas, M; Pekeur, N W; Pelletier, G; Petrucci, P-O; Peyaud, B; Pita, S; Poon, H; Pühlhofer, G; Punch, M; Quirrenbach, A; Raab, S; Reichardt, I; Reimer, A; Reimer, O; Renaud, M; de Los Reyes, R; Rieger, F; Romoli, C; Rosier-Lees, S; Rowell, G; Rudak, B; Rulten, C B; Sahakian, V; Salek, D; Sanchez, D A; Santangelo, A; Schlickeiser, R; Schüssler, F; Schulz, A; Schwanke, U; Schwarzburg, S; Schwemmer, S; Sol, H; Spanier, F; Spengler, G; Spies, F; Stawarz, Ł; Steenkamp, R; Stegmann, C; Stinzing, F; Stycz, K; Sushch, I; Tavernet, J-P; Tavernier, T; Taylor, A M; Terrier, R; Tluczykont, M; Trichard, C; Valerius, K; van Eldik, C; van Soelen, B; Vasileiadis, G; Veh, J; Venter, C; Viana, A; Vincent, P; Vink, J; Völk, H J; Volpe, F; Vorster, M; Vuillaume, T; Wagner, S J; Wagner, P; Wagner, R M; Ward, M; Weidinger, M; Weitzel, Q; White, R; Wierzcholska, A; Willmann, P; Wörnlein, A; Wouters, D; Yang, R; Zabalza, V; Zaborov, D; Zacharias, M; Zdziarski, A A; Zech, A; Zechlin, H-S

    2015-02-27

    An annihilation signal of dark matter is searched for from the central region of the Milky Way. Data acquired in dedicated on-off observations of the Galactic center region with H.E.S.S. are analyzed for this purpose. No significant signal is found in a total of ∼9 h of on-off observations. Upper limits on the velocity averaged cross section, ⟨σv⟩, for the annihilation of dark matter particles with masses in the range of ∼300 GeV to ∼10 TeV are derived. In contrast to previous constraints derived from observations of the Galactic center region, the constraints that are derived here apply also under the assumption of a central core of constant dark matter density around the center of the Galaxy. Values of ⟨σv⟩ that are larger than 3×10^{-24} cm^{3}/s are excluded for dark matter particles with masses between ∼1 and ∼4 TeV at 95% C.L. if the radius of the central dark matter density core does not exceed 500 pc. This is the strongest constraint that is derived on ⟨σv⟩ for annihilating TeV mass dark matter without the assumption of a centrally cusped dark matter density distribution in the search region.

  14. On the discretization of probability density functions and the continuous Rényi entropy

    Indian Academy of Sciences (India)

    Diógenes Campos

    2015-12-01

    On the basis of the second mean-value theorem (SMVT) for integrals, a discretization method is proposed with the aim of representing the expectation value of a function with respect to a probability density function in terms of discrete probability theory. This approach is applied to the continuous Rényi entropy, and it is established that a discrete probability distribution can be associated with it in a very natural way. The probability density function for the linear superposition of two coherent states is used to develop a representative example.
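    For reference, the continuous Rényi entropy being discretized here is, for order $\alpha>0$, $\alpha\neq 1$,
    \[
    H_\alpha[p] \;=\; \frac{1}{1-\alpha}\,\ln\!\int p(x)^{\alpha}\,dx ,
    \]
    which tends to the Shannon differential entropy as $\alpha\to 1$; the SMVT-based construction associates a discrete probability distribution to this integral, as stated in the abstract.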

  15. probably

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    [Example sentences] 1. He can probably tell us the truth. 2. Will it rain this afternoon? Probably. [Explanation] Used as an adverb meaning "probably, perhaps"; it expresses a strong likelihood and usually conveys a positive inference or judgment based on the present situation...

  16. Superposition rule and entanglement in diagonal and probability representations of density states

    OpenAIRE

    Man'ko, Vladimir I.; Marmo, Giuseppe; Sudarshan, E C George

    2009-01-01

    The quasidistributions corresponding to the diagonal representation of quantum states are discussed within the framework of operator-symbol construction. The tomographic-probability distribution describing the quantum state in the probability representation of quantum mechanics is reviewed. The connection of the diagonal and probability representations is discussed. The superposition rule is considered in terms of the density-operator symbols. The separability and entanglement properties of m...

  17. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    Science.gov (United States)

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  18. A note on the existence of transition probability densities for Lévy processes

    OpenAIRE

    Knopova, V.; Schilling, R.L.

    2010-01-01

    We prove several necessary and sufficient conditions for the existence of (smooth) transition probability densities for Lévy processes and isotropic Lévy processes. Under some mild conditions on the characteristic exponent we calculate the asymptotic behaviour of the transition density as $t \to 0$ and $t \to \infty$ and show a ratio-limit theorem.

  19. Moment-independent importance measure of basic random variable and its probability density evolution solution

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    To analyze the effect of a basic variable on the failure probability in reliability analysis, a moment-independent importance measure of the basic random variable is proposed, and its properties are analyzed and verified. Based on this work, the importance measure of the basic variable on the failure probability is compared with that on the distribution density of the response. By use of the probability density evolution method, a solution is established to solve the two importance measures, which can efficiently avoid the difficulty in solving the importance measures. Some numerical examples and engineering examples are used to demonstrate the proposed importance measure on the failure probability and that on the distribution density of the response. The results show that the proposed importance measure can effectively describe the effect of the basic variable on the failure probability from the distribution density of the basic variable. Additionally, the results show that the established solution based on probability density evolution is efficient for the importance measures.

  20. Joint Delay Doppler Probability Density Functions for Air-to-Air Channels

    Directory of Open Access Journals (Sweden)

    Michael Walter

    2014-01-01

    Recent channel measurements indicate that the wide-sense stationary uncorrelated scattering assumption is not valid for air-to-air channels. Therefore, purely stochastic channel models cannot be used. In order to cope with the nonstationarity, a geometric component is included. In this paper we extend a previously presented two-dimensional geometric stochastic model originally developed for vehicle-to-vehicle communication to a three-dimensional air-to-air channel model. Novel joint time-variant delay Doppler probability density functions are presented. The probability density functions are derived by using vector calculus and parametric equations of the delay ellipses. This allows us to obtain closed-form mathematical expressions for the probability density functions, which can then be calculated for any delay and Doppler frequency at arbitrary times numerically.

  1. Linearized Controller Design for the Output Probability Density Functions of Non-Gaussian Stochastic Systems

    Institute of Scientific and Technical Information of China (English)

    Pousga Kabore; Husam Baki; Hong Yue; Hong Wang

    2005-01-01

    This paper presents a linearized approach for the controller design of the shape of output probability density functions for general stochastic systems. A square root approximation to an output probability density function is realized by a set of B-spline functions. This generally produces a nonlinear state space model for the weights of the B-spline approximation. A linearized model is therefore obtained and embedded into a performance function that measures the tracking error of the output probability density function with respect to a given distribution. By using this performance function as a Lyapunov function for the closed loop system, a feedback control input has been obtained which guarantees closed loop stability and realizes perfect tracking. The algorithm described in this paper has been tested on a simulated example and desired results have been achieved.
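    A sketch of the square-root B-spline model referred to above (notation assumed here, not taken from the paper): for output $y\in[a,b]$ and control input $u(t)$,
    \[
    \sqrt{\gamma\big(y,u(t)\big)} \;\approx\; \sum_{i=1}^{n} w_i\big(u(t)\big)\,B_i(y),
    \qquad
    \int_a^b \gamma\big(y,u(t)\big)\,dy \;=\; 1 ,
    \]
    where the $B_i$ are fixed B-spline basis functions and the weights $w_i$ evolve through a generally nonlinear state-space model; the normalization ties one weight to the others, and the linearized design then shapes the output PDF by controlling the weight dynamics.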

  2. Density matrix equation analysis of optical–optical double-resonance multiphoton ionization probability

    International Nuclear Information System (INIS)

    An analytical formula for the optical–optical double-resonance multiphoton ionization (OODR-MPI) probability is derived from the time-dependent density-matrix equations that describe the interaction of light and matter. Based on the formula, the variation of the multiphoton ionization (MPI) probability with laser resonance detuning, Rabi frequency, laser pulse duration and ionization rate is investigated theoretically. It is shown that the MPI probability decreases with increasing laser resonance detuning, eventually falling to zero. The resonance detuning of the pump laser has a stronger influence on the ionization probability than that of the probe laser: it affects not only the Rabi frequency required for saturation but also the saturation value of the MPI probability. The MPI probability increases with Rabi frequency, laser pulse duration and ionization rate. It is also found that, although the populations of the ground, first and second resonance states vary differently at the beginning of the laser radiation, they all decrease to zero as time goes on; it is then that the ionization probability reaches its maximum value. Thus a long laser pulse duration and a high laser intensity are favorable for improving the MPI probability. These theoretical results can provide a useful guide for the practical application of OODR-MPI spectroscopy. - Highlights: • An analytical expression of the OODR-MPI probability has been derived. • The MPI probability decreases with increasing laser resonance detuning. • The influence of the pump laser on the MPI probability is larger than that of the probe laser. • A longer laser pulse duration and a higher intensity favor a higher MPI probability

  3. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    Science.gov (United States)

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  4. A new formulation of the probability density function in random walk models for atmospheric dispersion

    DEFF Research Database (Denmark)

    Falk, Anne Katrine Vinther; Gryning, Sven-Erik

    1997-01-01

    In this model for atmospheric dispersion, particles are simulated by the Langevin equation, which is a stochastic differential equation. It uses the probability density function (PDF) of the vertical velocity fluctuations as input. The PDF is constructed as an expansion in Hermite polynomials...

  5. Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers

    Science.gov (United States)

    MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara

    2013-01-01

    Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…

  6. Compound kernel estimates for the transition probability density of a Lévy process in $\mathbb{R}^n$

    OpenAIRE

    Knopova, V.

    2013-01-01

    We construct in the small-time setting the upper and lower estimates for the transition probability density of a Lévy process in $\mathbb{R}^n$. Our approach relies on the complex analysis technique and the asymptotic analysis of the inverse Fourier transform of the characteristic function of the respective process.

  7. Use of ELVIS II platform for random process modelling and analysis of its probability density function

    Science.gov (United States)

    Maslennikova, Yu. S.; Nugmanov, I. S.

    2016-08-01

    The problem of probability density function estimation for a random process is one of the most common in practice. There are several methods to solve this problem. The presented laboratory work uses methods of mathematical statistics to detect patterns in a realization of a random process. On the basis of ergodic theory, we construct an algorithm for estimating the univariate probability density distribution function of a random process. Correlation analysis of realizations is applied to estimate the necessary sample size and observation time. Hypothesis testing for two probability distributions (normal and Cauchy) is performed on the experimental data using the χ2 criterion. To facilitate understanding and clarity of the problem solved, we use the ELVIS II platform and the LabVIEW software package, which allow us to make the necessary calculations, display the results of the experiment and, most importantly, control the experiment. At the same time, students are introduced to the LabVIEW software package and its capabilities.
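    A minimal sketch of the two processing steps described above, independent of the ELVIS II/LabVIEW hardware: a histogram-based estimate of the univariate PDF from one realization, followed by a χ2 goodness-of-fit test of a normal hypothesis; the signal below is a hypothetical stand-in.

      import numpy as np
      from scipy.stats import norm, chisquare

      rng = np.random.default_rng(5)
      x = rng.normal(0.0, 1.0, size=5000)      # stand-in realization of the random process

      # Histogram-based estimate of the probability density function
      counts, edges = np.histogram(x, bins=30)
      widths = np.diff(edges)
      pdf_est = counts / (counts.sum() * widths)
      print(f"integral of the PDF estimate: {(pdf_est * widths).sum():.3f}")

      # Chi-square test against a normal law with fitted mean and standard deviation
      mu, sigma = x.mean(), x.std(ddof=1)
      expected = counts.sum() * np.diff(norm.cdf(edges, mu, sigma))
      expected *= counts.sum() / expected.sum()              # match total counts for the test
      stat, p = chisquare(counts, f_exp=expected, ddof=2)    # ddof=2 for the fitted parameters
      print(f"chi2 = {stat:.1f}, p-value = {p:.3f}")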

  8. Kernel density estimation and marginalized-particle based probability hypothesis density filter for multi-target tracking

    Institute of Scientific and Technical Information of China (English)

    张路平; 王鲁平; 李飚; 赵明

    2015-01-01

    In order to improve the performance of the particle filter (PF) based probability hypothesis density (PHD) algorithm in terms of target number estimation and state extraction for multiple targets, a new PHD filter algorithm based on marginalized particles and kernel density estimation is proposed, which utilizes the idea of the marginalized particle filter to enhance the estimation performance of the PHD. The state variables are decomposed into linear and nonlinear parts. The particle filter is adopted to predict and estimate the nonlinear states of the multi-target system after dimensionality reduction, while the Kalman filter is applied to estimate the linear parts under the linear Gaussian condition. Embedding the information of the linear states into the estimated nonlinear states helps to reduce the estimation variance and improve the accuracy of target number estimation. Mean-shift kernel density estimation, which inherently searches for peak values via an adaptive gradient-ascent iteration, is introduced to cluster particles and extract target states; it is independent of the target number and can converge to the local peak positions of the PHD distribution while avoiding the errors due to inaccuracy in modeling and parameter estimation. Experiments show that the proposed algorithm can obtain higher tracking accuracy when using fewer sampling particles and has lower computational complexity compared with the PF-PHD.
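    A minimal sketch of the mean-shift step used for state extraction: starting near a particle, iterate the kernel-weighted mean of the particle cloud until it converges to a local peak of the particle-approximated PHD. The particle set, weights and bandwidth below are hypothetical stand-ins; the marginalized/Kalman part of the proposed filter is not shown.

      import numpy as np

      def mean_shift_peak(start, particles, weights, bandwidth, iters=50, tol=1e-6):
          """Climb to a local peak of the weighted kernel density defined by the particles."""
          x = np.array(start, dtype=float)
          for _ in range(iters):
              d2 = np.sum((particles - x) ** 2, axis=1)
              k = weights * np.exp(-0.5 * d2 / bandwidth ** 2)     # Gaussian kernel weights
              x_new = (k[:, None] * particles).sum(axis=0) / k.sum()
              if np.linalg.norm(x_new - x) < tol:
                  break
              x = x_new
          return x

      rng = np.random.default_rng(6)
      # Stand-in PHD particle approximation with two targets near (0, 0) and (5, 5)
      parts = np.vstack([rng.normal([0.0, 0.0], 0.3, (500, 2)),
                         rng.normal([5.0, 5.0], 0.3, (500, 2))])
      w = np.full(len(parts), 1.0 / len(parts))
      print(mean_shift_peak([4.0, 4.5], parts, w, bandwidth=0.5))  # converges near (5, 5)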

  9. The probability density function of the total departure from nucleate boiling ratio

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, R.C.; Harrell, J.R.; Erb, J.O. (Virginia Power, Glen Allen (USA))

    1989-11-01

    The use of statistical methods for treating independent uncertainties in setting a departure from nucleate boiling ratio (DNBR) limit has transformed this process. The DNBR limit must provide protection from DNB with 95% probability at a 95% confidence level; thus, the limit is also known as the 95/95 limit. In establishing this statistical limit, assumptions must be made concerning the DNBR's uncertainty probability density function (pdf). In this paper, the form of the DNBR uncertainty pdf is investigated, and the implications for the vendor methodologies are addressed.
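    A minimal sketch of how a 95/95 one-sided limit follows once a pdf form is assumed: the normal-theory tolerance factor below (via the noncentral t distribution) is standard statistics rather than the vendor methodology, and the DNBR sample is hypothetical; a nonparametric alternative is noted in the comments.

      import numpy as np
      from scipy.stats import nct, norm

      def k_one_sided(n, coverage=0.95, confidence=0.95):
          """Exact one-sided normal tolerance factor for a sample of size n."""
          delta = norm.ppf(coverage) * np.sqrt(n)          # noncentrality parameter
          return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

      rng = np.random.default_rng(7)
      dnbr = rng.normal(1.30, 0.05, size=59)               # stand-in DNBR calculation results
      limit = dnbr.mean() - k_one_sided(len(dnbr)) * dnbr.std(ddof=1)
      print(f"95/95 lower DNBR limit under a normal pdf assumption: {limit:.3f}")
      # Nonparametric alternative (Wilks): with 59 random samples, the sample minimum
      # is itself a 95/95 lower tolerance bound, with no assumption on the pdf shape.
      print(f"Wilks 95/95 bound (sample minimum): {dnbr.min():.3f}")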

  10. Steady-state probability density function in wave turbulence under large volume limit

    Institute of Scientific and Technical Information of China (English)

    Yeontaek Choi; Sang Gyu Jo

    2011-01-01

    We investigate the possibility for a two-mode probability density function (PDF) to have a non-zero-flux steady-state solution. We take the large-volume limit so that the space of modes becomes continuous. It is shown that in this limit all the steady-state two- or higher-mode PDFs are products of one-mode PDFs. The flux of this steady-state solution turns out to be zero for any finite-mode PDF.

  11. Energy Quantization and Probability Density of Electron in Intense-Field-Atom Interactions

    Institute of Scientific and Technical Information of China (English)

    敖淑艳; 程太旺; 李晓峰; 吴令安; 付盘铭

    2003-01-01

    We find that, due to the quantum correlation between the electron and the field, the electronic energy also becomes quantized, manifesting the particle aspect of light in the electron-light interaction. The probability amplitude of finding the electron with a given energy is given by a generalized Bessel function, which can be represented as a coherent superposition of contributions from a few electronic quantum trajectories. This concept is illustrated by comparing the spectral density of the electron with the laser-assisted recombination spectrum.

  12. Analytical formulation of the single-visit completeness joint probability density function

    CERN Document Server

    Garrett, Daniel

    2016-01-01

    We derive an exact formulation of the multivariate integral representing the single-visit obscurational and photometric completeness joint probability density function for arbitrary distributions for planetary parameters. We present a derivation of the region of nonzero values of this function which extends previous work, and discuss time and computational complexity costs and benefits of the method. We present a working implementation, and demonstrate excellent agreement between this approach and Monte Carlo simulation results

  13. Probability Density Components Analysis: A New Approach to Treatment and Classification of SAR Images

    OpenAIRE

    Osmar Abílio de Carvalho Júnior; Luz Marilda de Moraes Maciel; Ana Paula Ferreira de Carvalho; Renato Fontes Guimarães; Cristiano Rosa Silva; Roberto Arnaldo Trancoso Gomes; Nilton Correia Silva

    2014-01-01

    Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR) and causes the usual noise-like granular aspect that complicates image classification. In SAR image analysis, the spatial information might be a particular benefit for denoising and for mapping classes characterized by a statistical distribution of the pixel intensities from a complex and heterogeneous spectral response. This paper proposes the Probability Density Components Analysis (PDCA), a new alternative that c...

  14. Analytical Formulation of the Single-visit Completeness Joint Probability Density Function

    Science.gov (United States)

    Garrett, Daniel; Savransky, Dmitry

    2016-09-01

    We derive an exact formulation of the multivariate integral representing the single-visit obscurational and photometric completeness joint probability density function for arbitrary distributions for planetary parameters. We present a derivation of the region of nonzero values of this function, which extends previous work, and discuss the time and computational complexity costs and benefits of the method. We present a working implementation and demonstrate excellent agreement between this approach and Monte Carlo simulation results.

  15. A probability density function of liftoff velocities in mixed-size wind sand flux

    Institute of Scientific and Technical Information of China (English)

    ZHENG XiaoJing; ZHU Wei; XIE Li

    2008-01-01

    With the discrete element method (DEM), employing the diameter distribution of natural sands sampled from the Tengger Desert, a mixed-size sand bed was produced and the particle-bed collision was simulated in the mixed-size wind sand movement. In the simulation, the shear wind velocity, particle diameter, incident velocity and incident angle of the impact sand particle were given the same values as the experimental results. After the particle-bed collision, we collected all the initial velocities of rising sand particles, including the liftoff angular velocities, liftoff linear velocities and their horizontal and vertical components. By statistical analysis of the velocity sample for each velocity component, its probability density functions were obtained, and they are functions of the shear wind velocity. The liftoff velocities and their horizontal and vertical components are distributed as an exponential density function, while the angular velocities are distributed as a normal density function.
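    A minimal sketch of the reported distribution fits: maximum-likelihood parameters of an exponential density for liftoff speeds and of a normal density for liftoff angular velocities; the velocity samples below are hypothetical stand-ins for the DEM collision output.

      import numpy as np

      rng = np.random.default_rng(8)
      liftoff_speed = rng.exponential(scale=0.8, size=4000)   # stand-in liftoff speeds [m/s]
      angular_vel = rng.normal(300.0, 120.0, size=4000)       # stand-in angular velocities [rev/s]

      # Maximum-likelihood fits: exponential rate = 1/mean; normal mean and standard deviation
      lam = 1.0 / liftoff_speed.mean()
      mu, sigma = angular_vel.mean(), angular_vel.std(ddof=0)

      def exp_pdf(v):      # fitted exponential density, f(v) = lam * exp(-lam * v)
          return lam * np.exp(-lam * v)

      def norm_pdf(w):     # fitted normal density
          return np.exp(-(w - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

      print(f"exponential rate = {lam:.2f} per m/s; normal mu = {mu:.1f}, sigma = {sigma:.1f}")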

  16. On the reliability of observational measurements of column density probability distribution functions

    CERN Document Server

    Ossenkopf, Volker; Schneider, Nicola; Federrath, Christoph; Klessen, Ralf S

    2016-01-01

    Probability distribution functions (PDFs) of column densities are an established tool to characterize the evolutionary state of interstellar clouds. Using simulations, we show to what degree their determination is affected by noise, line-of-sight contamination, field selection, and the incomplete sampling in interferometric measurements. We solve the integrals that describe the convolution of a cloud PDF with contaminating sources and study the impact of missing information on the measured column density PDF. The effect of observational noise can be easily estimated and corrected for if the root mean square (rms) of the noise is known. For $\sigma_{noise}$ values below 40% of the typical cloud column density, $N_{peak}$, this involves almost no degradation of the accuracy of the PDF parameters. For higher noise levels and narrow cloud PDFs the width of the PDF becomes increasingly uncertain. A contamination by turbulent foreground or background clouds can be removed as a constant shield if the PDF of the c...

  17. Comparative assessment of surface fluxes from different sources using probability density distributions

    Science.gov (United States)

    Gulev, Sergey; Tilinina, Natalia; Belyaev, Konstantin

    2015-04-01

    Surface turbulent heat fluxes from modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA) as well as from satellite products (SEAFLUX, IFREMER, HOAPS) were intercompared using the framework of probability distributions for sensible and latent heat fluxes. For the approximation of probability distributions and the estimation of extreme flux values, the Modified Fisher-Tippett (MFT) distribution was used. Besides mean flux values, consideration is given to the comparative analysis of (i) parameters of the MFT probability density functions (scale and location), (ii) extreme flux values corresponding to high-order percentiles of fluxes (e.g. the 99th and higher) and (iii) the fractional contribution of extreme surface flux events to the total surface turbulent fluxes integrated over months and seasons. The latter was estimated using both the fractional distribution derived from the MFT and empirical estimates based upon occurrence histograms. The strongest differences in the parameters of probability distributions of surface fluxes and extreme surface flux values between different reanalyses are found in the western boundary current extension regions and high latitudes, while the highest differences in the fractional contributions of surface fluxes may occur in mid-ocean regions, being closely associated with atmospheric synoptic dynamics. Generally, satellite surface flux products demonstrate relatively stronger extreme fluxes compared to reanalyses, even in the Northern Hemisphere midlatitudes where data assimilation input in reanalyses is quite dense compared to the Southern Ocean regions.
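
    A minimal sketch of the extreme-value step described above, assuming a synthetic flux sample. SciPy provides only the standard Fisher-Tippett (Gumbel) distribution, which is used here as a stand-in for the Modified Fisher-Tippett distribution of the study.

      # Sketch: estimating extreme surface-flux values from a Fisher-Tippett (Gumbel) fit.
      # SciPy's standard Gumbel (gumbel_r) stands in for the study's MFT distribution,
      # and the latent-heat-flux sample is synthetic.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      latent_heat_flux = stats.gumbel_r.rvs(loc=120.0, scale=40.0, size=10_000, random_state=rng)  # W m^-2

      loc, scale = stats.gumbel_r.fit(latent_heat_flux)        # location and scale parameters
      p99 = stats.gumbel_r.ppf(0.99, loc=loc, scale=scale)     # 99th-percentile flux

      # Fractional contribution of fluxes above the 99th percentile to the total
      extreme_share = latent_heat_flux[latent_heat_flux > p99].sum() / latent_heat_flux.sum()
      print(f"location={loc:.1f}, scale={scale:.1f}, 99th percentile={p99:.1f} W m^-2, "
            f"extreme-event share={extreme_share:.3%}")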

  18. The effects of urbanization on population density, occupancy, and detection probability of wild felids.

    Science.gov (United States)

    Lewis, Jesse S; Logan, Kenneth A; Alldredge, Mat W; Bailey, Larissa L; VandeWoude, Sue; Crooks, Kevin R

    2015-10-01

    Urbanization is a primary driver of landscape conversion, with far-reaching effects on landscape pattern and process, particularly related to the population characteristics of animals. Urbanization can alter animal movement and habitat quality, both of which can influence population abundance and persistence. We evaluated three important population characteristics (population density, site occupancy, and species detection probability) of a medium-sized and a large carnivore across varying levels of urbanization. Specifically, we studied bobcat and puma populations across wildland, exurban development, and wildland-urban interface (WUI) sampling grids to test hypotheses evaluating how urbanization affects wild felid populations and their prey. Exurban development appeared to have a greater impact on felid populations than did habitat adjacent to a major urban area (i.e., WUI); estimates of population density for both bobcats and pumas were lower in areas of exurban development compared to wildland areas, whereas population density was similar between WUI and wildland habitat. Bobcats and pumas were less likely to be detected in habitat as the amount of human disturbance associated with residential development increased at a site, which was potentially related to reduced habitat quality resulting from urbanization. However, occupancy of both felids was similar between grids in both study areas, indicating that this population metric was less sensitive than density. At the scale of the sampling grid, detection probability for bobcats in urbanized habitat was greater than in wildland areas, potentially due to restrictive movement corridors and funneling of animal movements in landscapes influenced by urbanization. Occupancy of important felid prey (cottontail rabbits and mule deer) was similar across levels of urbanization, although elk occupancy was lower in urbanized areas. Our study indicates that the conservation of medium- and large-sized felids associated with

  19. Protein distance constraints predicted by neural networks and probability density functions

    DEFF Research Database (Denmark)

    Lund, Ole; Frimand, Kenneth; Gorodkin, Jan;

    1997-01-01

    We predict interatomic C-α distances by two independent data driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the latter method consists of a neural network prediction approach equipped with windows taking the context of the two residues into account. These two methods are used to predict whether distances in independent test sets were above or below given thresholds. We investigate which distance thresholds produce the most information-rich constraints and, in turn, the optimal performance of the two methods. The predictions are based on a data set derived using a new threshold similarity. We show that distances in proteins are predicted more accurately by neural networks than by probability density functions. We show that the accuracy of the predictions can be further increased by using sequence profiles. A threading ...

  20. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    Science.gov (United States)

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  1. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    Science.gov (United States)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation, which should avoid many of these difficulties, is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  2. Spectral discrete probability density function of measured wind turbine noise in the far field

    Directory of Open Access Journals (Sweden)

    Payam Ashtiani

    2015-04-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for 1/3rd Octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low frequency noise sources.

  3. Spectral discrete probability density function of measured wind turbine noise in the far field.

    Science.gov (United States)

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  4. Breather turbulence versus soliton turbulence: Rogue waves, probability density functions, and spectral features.

    Science.gov (United States)

    Akhmediev, N; Soto-Crespo, J M; Devine, N

    2016-08-01

    Turbulence in integrable systems exhibits a noticeable scientific advantage: it can be expressed in terms of the nonlinear modes of these systems. Whether the majority of the excitations in the system are breathers or solitons defines the properties of the turbulent state. In the two extreme cases we can call such states "breather turbulence" or "soliton turbulence." The number of rogue waves, the probability density functions of the chaotic wave fields, and their physical spectra are all specific for each of these two situations. Understanding these extreme cases also helps in studies of mixed turbulent states when the wave field contains both solitons and breathers, thus revealing intermediate characteristics. PMID:27627303

  5. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    Science.gov (United States)

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications. PMID:27304274

  6. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    Science.gov (United States)

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.

  7. Understanding star formation in molecular clouds I. A universal probability distribution of column densities ?

    CERN Document Server

    Schneider, N; Csengeri, T; Klessen, R; Federrath, C; Tremblin, P; Girichidis, P; Bontemps, S; Andre, Ph

    2014-01-01

    Column density maps of molecular clouds are one of the most important observables in the context of molecular cloud and star-formation (SF) studies. With Herschel it is now possible to reveal rather precisely the column density of dust, which is the best tracer of the bulk of material in molecular clouds. However, line-of-sight (LOS) contamination from fore- or background clouds can lead to an overestimation of the dust emission of molecular clouds, in particular for distant clouds. This implies overestimated values of column density and mass, and a misleading interpretation of probability distribution functions (PDFs) of the column density. In this paper, we demonstrate by using observations and simulations how LOS contamination affects the PDF. We apply a first-order approximation (removing a constant level) to the molecular clouds of Auriga and Maddalena (low-mass star-forming), and Carina and NGC 3603 (both high-mass SF regions). In perfect agreement with the simulations, we find that the PDFs become broader, ...

  8. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    Science.gov (United States)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
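
    The idea of binning with a hash table can be illustrated compactly: the sketch below keys a Python dict by bin-index tuples, so memory scales with the number of occupied bins rather than with a dense multi-dimensional array. This is only an illustration of the concept, not the authors' C++ implementation.

      # Minimal sketch of density estimation by binning with a hash table (dict)
      # keyed by bin-index tuples, so memory grows with the number of occupied bins
      # rather than with the full multi-dimensional grid.
      import numpy as np

      def hash_binned_density(points, bin_width):
          """Return a dict mapping bin-index tuples to estimated probability density."""
          n, d = points.shape
          counts = {}
          for idx in (points // bin_width).astype(int):
              key = tuple(idx)
              counts[key] = counts.get(key, 0) + 1
          cell_volume = bin_width ** d
          return {key: c / (n * cell_volume) for key, c in counts.items()}

      # Usage: 100k points in 5 dimensions occupy far fewer bins than a dense 5-D array would need.
      rng = np.random.default_rng(2)
      data = rng.normal(size=(100_000, 5))
      density = hash_binned_density(data, bin_width=0.5)
      print(f"occupied bins: {len(density)} (a dense grid over [-4, 4]^5 would need {16**5} cells)")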

  9. Analysis of Observation Data of Earth-Rockfill Dam Based on Cloud Probability Distribution Density Algorithm

    Directory of Open Access Journals (Sweden)

    Han Liwei

    2014-07-01

    Monitoring data on an earth-rockfill dam constitutes a form of spatial data. Such data include much uncertainty owing to the limitations of measurement information, material parameters, load, geometry size, initial conditions, boundary conditions and the calculation model, so the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertainty transition between the qualitative concept and the quantitative description. Then an improved algorithm of cloud probability distribution density based on a backward cloud generator was proposed. This was used to effectively convert parcels of accurate data into concepts which can be described by proper qualitative linguistic values. Such a qualitative description was expressed as the cloud numerical characteristics {Ex, En, He}, which represent the characteristics of all cloud drops. The algorithm was then applied to analyze the observation data of a piezometric tube in an earth-rockfill dam. Experimental results proved that the proposed algorithm was feasible; through it, we could reveal the changing pattern of the piezometric tube's water level and identify seepage damage within the dam body.
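
    A minimal sketch of a backward cloud generator follows, using one common formulation that estimates {Ex, En, He} from a sample without certainty degrees; the water-level series is synthetic and this formulation may differ in detail from the improved algorithm of the paper.

      # Sketch: estimating the cloud numerical characteristics {Ex, En, He} from a sample
      # with a basic backward cloud generator (common formulation, no certainty degrees).
      import numpy as np

      def backward_cloud(x):
          ex = float(np.mean(x))                                      # expectation Ex
          en = float(np.sqrt(np.pi / 2.0) * np.mean(np.abs(x - ex)))  # entropy En
          s2 = float(np.var(x, ddof=1))                               # sample variance
          he = float(np.sqrt(abs(s2 - en ** 2)))                      # hyper-entropy He
          return ex, en, he

      # Usage with a synthetic water-level series (placeholder for piezometric-tube data)
      rng = np.random.default_rng(3)
      water_level = 52.0 + rng.normal(0.0, 0.8, size=365) + rng.normal(0.0, 0.1, size=365)
      print("Ex, En, He =", backward_cloud(water_level))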

  10. Firing statistics of inhibitory neuron with delayed feedback. I. Output ISI probability density.

    Science.gov (United States)

    Vidybida, A K; Kravchuk, K G

    2013-06-01

    Activity of an inhibitory neuron with delayed feedback is considered in the framework of point stochastic processes. The neuron receives excitatory input impulses from a Poisson stream, and inhibitory impulses from the feedback line with a delay. We investigate here how the presence of inhibitory feedback affects the output firing statistics. Using the binding neuron (BN) as a model, we derive analytically the exact expressions for the output interspike interval (ISI) probability density, mean output ISI and coefficient of variation as functions of the model's parameters for the case of threshold 2. Using the leaky integrate-and-fire (LIF) model, as well as the BN model with higher thresholds, these statistical quantities are found numerically. In contrast to the previously studied situation of no feedback, the ISI probability densities found here for both the BN and LIF neurons become bimodal and have a discontinuity of jump type. Nevertheless, the presence of inhibitory delayed feedback was not found to substantially affect the output ISI coefficient of variation, which ranges between 0.5 and 1. It is concluded that the introduction of delayed inhibitory feedback can radically change neuronal output firing statistics. These statistics are also distinct from what was found previously (Vidybida and Kravchuk, 2009) by a similar method for an excitatory neuron with delayed feedback.

  11. Construction of Coarse-Grained Models by Reproducing Equilibrium Probability Density Function

    Science.gov (United States)

    Lu, Shi-Jing; Zhou, Xin

    2015-01-01

    The present work proposes a novel methodology for constructing coarse-grained (CG) models, which aims at minimizing the difference between the CG model and the corresponding original system. The difference is defined as a functional of their equilibrium conformational probability densities and is estimated from equilibrium averages of many independent physical quantities denoted as basis functions. An orthonormalization strategy is adopted to obtain independent basis functions from a sufficient set of preselected physical quantities of interest. The current method is therefore named the probability density matching coarse-graining (PMCG) scheme; it effectively takes into account the overall characteristics of the original system when constructing the CG model, and it is a natural improvement of the usual CG scheme wherein some physical quantities are intuitively chosen without considering their correlations. We verify the general PMCG framework by constructing a one-site CG water model from the TIP3P model. Both the liquid structure and the pressure of the TIP3P water system are found to be well reproduced simultaneously by the constructed CG model.

  12. Joint probability density function of the stochastic responses of nonlinear structures

    Institute of Scientific and Technical Information of China (English)

    Chen Jianbing; Li Jie

    2007-01-01

    The joint probability density function (PDF) of different structural responses is a very important topic in the stochastic response analysis of nonlinear structures. In this paper, the probability density evolution method, which has been successfully developed to capture the instantaneous PDF of an arbitrary single response of interest, is extended to evaluate the joint PDF of any two responses. A two-dimensional partial differential equation in terms of the joint PDF is established. The strategy of selecting representative points via the number-theoretical method and sieving them by a hyper-ellipsoid is outlined. A two-dimensional difference scheme is developed. The free vibration of an SDOF system is examined to verify the proposed method, and a frame structure exhibiting hysteresis subjected to stochastic ground motion is investigated. It is pointed out that the correlation between different responses results from the fact that the randomness of the different responses comes from the same set of basic random parameters. In other words, the essence of the probabilistic correlation is a physical correlation.

  13. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    Science.gov (United States)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based

  14. Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert

    Science.gov (United States)

    Crowe, D.E.; Longshore, K.M.

    2010-01-01

    We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-2004). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

  15. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    Science.gov (United States)

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
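
    For the simplest one-dimensional case, the momentum probability density of an "infinite"-well eigenstate can be obtained by numerically Fourier transforming the position-space wavefunction. The sketch below assumes units with hbar = 1 and well width L = 1; the grid sizes are arbitrary.

      # Sketch: momentum probability density for the ground state of a 1-D "infinite" well,
      # from a numerical Fourier transform of the position eigenfunction (hbar = 1, L = 1).
      import numpy as np

      L = 1.0
      x = np.linspace(0.0, L, 2001)
      n = 1
      psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)        # position-space eigenstate

      p = np.linspace(-40.0, 40.0, 801)
      # phi(p) = (2*pi)^(-1/2) * integral psi(x) exp(-i p x) dx
      phase = np.exp(-1j * np.outer(p, x))
      phi = np.trapz(phase * psi, x, axis=1) / np.sqrt(2.0 * np.pi)
      momentum_density = np.abs(phi) ** 2

      # The density is continuous in p: the momentum is not restricted to the values
      # +/- n*pi/L suggested by the quantised energies.
      print("normalisation check:", np.trapz(momentum_density, p))  # close to 1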

  16. Defects and Electron Densities in TiAl-based Alloys Containing Mn and Cu Studied by Positron Annihilation

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The defects and electron densities in Ti50Al50, Ti50Al48Mn2 and Ti50Al48Cu2 alloys have been studied by positron lifetime measurements. The results show that the free electron density in the bulk of the binary TiAl alloy is lower than that of pure Ti or Al metal. The open volume of defects at the grain boundaries of the binary TiAl alloy is larger than that of a monovacancy in Al metal. The addition of Mn and Cu to Ti-rich TiAl alloy increases the free electron densities in the bulk and at the grain boundaries simultaneously, since a Mn or Cu atom occupying an Al site provides more free electrons to the metallic bonds than an Al atom does. It is also found that the free electron density at the grain boundary of Ti50Al48Cu2 is higher than that of the Ti50Al48Mn2 alloy, while the free electron density in the bulk of Ti50Al48Cu2 is lower than that of the Ti50Al48Mn2 alloy. The behaviors of Mn and Cu atoms in the TiAl alloy are discussed.

  17. Dark matter density profiles of the halos embedding early-type galaxies: characterizing halo contraction and dark matter annihilation strength

    CERN Document Server

    Chae, Kyu-Hyun; Frieman, Joshua A; Bernardi, Mariangela

    2012-01-01

    Identifying dark matter and characterizing its distribution in the inner region of halos embedding galaxies are inter-related problems of broad importance. We devise a new procedure for determining the dark matter distribution in halos. We first make a self-consistent bivariate statistical match of stellar mass and velocity dispersion with halo mass, as demonstrated here for the first time. Then, selecting early-type galaxy-halo systems, we perform Jeans dynamical modeling with the aid of observed statistical properties of stellar mass profiles and velocity dispersion profiles. Dark matter density profiles derived specifically using Sloan Digital Sky Survey galaxies and halos from up-to-date cosmological dissipationless simulations deviate significantly from the dissipationless profile of Navarro-Frenk-White or Einasto in terms of inner density slope and/or concentration. From these dark matter profiles we find that dark matter density is enhanced in the inner region of most early-type galactic halos providing an ind...
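
    For reference, the sketch below evaluates the two dissipationless profiles named above, NFW and Einasto, against which the derived dark matter profiles are compared; the parameter values are placeholders.

      # Reference density-profile shapes mentioned above; normalisations are illustrative only.
      import numpy as np

      def rho_nfw(r, rho_s, r_s):
          """NFW profile: rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
          xx = r / r_s
          return rho_s / (xx * (1.0 + xx) ** 2)

      def rho_einasto(r, rho_s, r_s, alpha=0.17):
          """Einasto profile: rho(r) = rho_s * exp(-(2/alpha) * ((r/r_s)^alpha - 1))."""
          xx = r / r_s
          return rho_s * np.exp(-(2.0 / alpha) * (xx ** alpha - 1.0))

      r = np.logspace(-2, 1, 5) * 20.0               # radii in kpc, illustrative grid
      print(rho_nfw(r, rho_s=1.0e7, r_s=20.0))       # placeholder units of Msun/kpc^3
      print(rho_einasto(r, rho_s=1.0e7, r_s=20.0))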

  18. Multiple-streaming and the Probability Distribution of Density in Redshift Space

    CERN Document Server

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    1999-01-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple-streaming using the Zel'dovich approximation (ZA), and compute the average number of streams in real and redshift-space. It is found that multiple-streaming can be significant in redshift-space but negligible in real-space, even at moderate values of the linear fluctuation amplitude ($\sigma < 1$). Moreover, unlike their real-space counterparts, redshift-space multiple-streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which operate even when the real-space density field is quite linear, could suppress the classic compression of redshift-structures predicted by linear theory (Kaiser 1987). We also compute using the ZA the probability distribution function (PDF) of density, as well as $S_3$, in real and redshift-space, and compare it with the PD...

  19. A probability density function of liftoff velocities in mixed-size wind sand flux

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    With the discrete element method (DEM), employing the diameter distribution of natural sands sampled from the Tengger Desert, a mixed-size sand bed was produced and the particle-bed collision was simulated in the mixed-size wind sand movement. In the simulation, the shear wind velocity, particle diameter, incident velocity and incident angle of the impact sand particle were given the same values as the experimental results. After the particle-bed collision, we collected all the initial velocities of rising sand particles, including the liftoff angular velocities, liftoff linear velocities and their horizontal and vertical components. By the statistical analysis on the velocity sample for each velocity component, its probability density functions were obtained, and they are the functions of the shear wind velocity. The liftoff velocities and their horizontal and vertical components are distributed as an exponential density function, while the angular velocities are distributed as a normal density function.

  20. Occupation probabilities and current densities of bulk and edge states of a Floquet topological insulator

    Science.gov (United States)

    Dehghani, Hossein; Mitra, Aditi

    2016-05-01

    Results are presented for the occupation probabilities and current densities of bulk and edge states of half-filled graphene in a cylindrical geometry and irradiated by a circularly polarized laser. It is assumed that the system is closed and that the laser has been switched on as a quench. Laser parameters corresponding to some representative topological phases are studied: one where the Chern number of the Floquet bands equals the number of chiral edge modes, a second where anomalous edge states appear at the Floquet Brillouin zone boundaries, and a third where the Chern number is zero, yet topological edge states appear at the center and boundaries of the Floquet Brillouin zone. Qualitative differences are found between the high-frequency off-resonant and the low-frequency on-resonant laser: edge states arising from resonant processes are occupied with a high effective temperature, whereas edge states arising from off-resonant processes are occupied with a low effective temperature. For an ideal half-filled system where only one of the bands in the Floquet Brillouin zone is occupied and the other empty, particle-hole and inversion symmetry of the Floquet Hamiltonian implies zero current density. However, the laser switch-on protocol breaks the inversion symmetry, resulting in a net cylindrical sheet of current density at steady state. Due to the underlying chirality of the system, this current density profile is associated with a net charge imbalance between the top and bottom of the cylinders.

  1. Evaluate the Word Error Rate of Binary Block Codes with Square Radius Probability Density Function

    CERN Document Server

    Chen, Xiaogang; Gu, Jian; Yang, Hongkui

    2007-01-01

    The word error rate (WER) of soft-decision-decoded binary block codes rarely has a closed form. Bounding techniques are widely used to evaluate the performance of the maximum-likelihood decoding algorithm, but the existing bounds are not tight enough, especially for low signal-to-noise ratios, and become looser when a suboptimum decoding algorithm is used. This paper proposes a new concept named the square radius probability density function (SR-PDF) of the decision region to evaluate the WER. Based on the SR-PDF, the WER of binary block codes can be calculated precisely for ML and suboptimum decoders. Furthermore, for a long binary block code, the SR-PDF can be approximated by a Gamma distribution with only two parameters that can be measured easily. Using this property, two closed-form approximate expressions are proposed which are very close to the simulated WER of interest.

  2. Analytical computation of the magnetization probability density function for the harmonic 2D XY model

    CERN Document Server

    Palma, G

    2009-01-01

    The probability density function (PDF) of some global average quantity plays a fundamental role in critical and highly correlated systems. We explicitly compute this quantity as a function of the magnetization for the two-dimensional XY model in its harmonic approximation. Numerical simulations and perturbative results have shown a Gumbel-like shape of the PDF, in spite of the fact that the average magnetization is not an extreme variable. Our analytical result allows us to test both perturbative analytical expansions and numerical computations performed previously. Perfect agreement is found for the first moments of the PDF. For large volume and in the high-temperature limit, the distribution becomes Gaussian, as it should. In the low-temperature regime its numerical evaluation is compatible with a Gumbel distribution.

  3. Particle filters for probability hypothesis density filter with the presence of unknown measurement noise covariance

    Institute of Scientific and Technical Information of China (English)

    Wu Xinhui; Huang Gaoming; Gao Jun

    2013-01-01

    In Bayesian multi-target filtering, knowledge of measurement noise variance is very important. Significant mismatches in noise parameters will result in biased estimates. In this paper, a new particle filter for a probability hypothesis density (PHD) filter handling unknown measurement noise variances is proposed. The approach is based on marginalizing the unknown parameters out of the posterior distribution by using variational Bayesian (VB) methods. Moreover, the sequential Monte Carlo method is used to approximate the posterior intensity considering non-linear and non-Gaussian conditions. Unlike other particle filters for this challenging class of PHD filters, the proposed method can adaptively learn the unknown and time-varying noise variances while filtering. Simulation results show that the proposed method improves estimation accuracy in terms of both the number of targets and their states.

  4. ANNz2 - Photometric redshift and probability density function estimation using machine learning methods

    CERN Document Server

    Sadeh, Iftach; Lahav, Ofer

    2015-01-01

    We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister and Lahav (2004). Large photometric galaxy surveys are important for cosmological studies, and in particular for characterizing the nature of dark energy. The success of such surveys greatly depends on the ability to measure photo-zs, based on limited spectral data. ANNz2 utilizes multiple machine learning methods, such as artificial neural networks, boosted decision/regression trees and k-nearest neighbours. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions (PDFs) in two different ways. Estimators are also incorporated to mitigate possible problems of spectroscopic training samples which are not representative or are incomplete. ANNz2 is also adapted to provide optimized solution...

  5. Calculation of probability density functions for temperature and precipitation change under global warming

    International Nuclear Information System (INIS)

    The IPCC Fourth Assessment Report (Meehl et al. 2007) presents multi-model means of the CMIP3 simulations as projections of the global climate change over the 21st century under several SRES emission scenarios. To assess the possible range of change for Australia based on the CMIP3 ensemble, we can follow Whetton et al. (2005) and use the 'pattern scaling' approach, which separates the uncertainty in the global mean warming from that in the local change per degree of warming. This study presents several ways of representing these two factors as probability density functions (PDFs). The beta distribution, a smooth, bounded function allowing skewness, is found to provide a useful representation of the range of CMIP3 results. A weighting of models based on their skill in simulating seasonal means in the present climate over Australia is included. Dessai et al. (2005) and others have used Monte Carlo sampling to recombine such global warming and scaled change factors into values of net change. Here, we use a direct integration of the product across the joint probability space defined by the two PDFs. The result is a cumulative distribution function (CDF) for change, for each variable, location, and season. The median of this distribution provides a best estimate of change, while the 10th and 90th percentiles represent a likely range. The probability of exceeding a specified threshold can also be extracted from the CDF. The presentation focuses on changes in Australian temperature and precipitation at 2070 under the A1B scenario. However, the assumption of linearity behind pattern scaling allows results for different scenarios and times to be simply obtained. In the case of precipitation, which must remain non-negative, a simple modification of the calculations (based on decreases being exponential with warming) is used to avoid unrealistic results. These approaches are currently being used for the new CSIRO/Bureau of Meteorology climate projections.
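
    The "direct integration of the product across the joint probability space" can be sketched as follows, assuming placeholder beta distributions for the global warming and for the local change per degree; the ranges and shape parameters are illustrative, not the CSIRO values.

      # Sketch: combining a PDF for global mean warming with a PDF for local change per
      # degree into a CDF of net local change, by direct integration of the product.
      import numpy as np
      from scipy import stats

      # Global warming at 2070 (degC), assumed here to lie in [1.0, 3.5]
      g_dist = stats.beta(a=3.0, b=3.0, loc=1.0, scale=2.5)
      # Local warming per degree of global warming (degC/degC), assumed in [0.6, 1.6]
      s_dist = stats.beta(a=2.0, b=4.0, loc=0.6, scale=1.0)

      g = np.linspace(1.0, 3.5, 600)

      def cdf_local_change(t):
          """P(G * S <= t) = integral f_G(g) * F_S(t / g) dg, for G > 0."""
          return float(np.trapz(g_dist.pdf(g) * s_dist.cdf(t / g), g))

      thresholds = np.linspace(0.5, 6.0, 400)
      cdf = np.array([cdf_local_change(t) for t in thresholds])
      median = thresholds[np.searchsorted(cdf, 0.5)]
      p10, p90 = thresholds[np.searchsorted(cdf, 0.1)], thresholds[np.searchsorted(cdf, 0.9)]
      print(f"best estimate {median:.2f} degC, likely range {p10:.2f}-{p90:.2f} degC")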

  6. Probability Density Components Analysis: A New Approach to Treatment and Classification of SAR Images

    Directory of Open Access Journals (Sweden)

    Osmar Abílio de Carvalho Júnior

    2014-04-01

    Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR), which causes a usual noise-like granular aspect and complicates the image classification. In SAR image analysis, the spatial information might be a particular benefit for denoising and mapping classes characterized by a statistical distribution of the pixel intensities from a complex and heterogeneous spectral response. This paper proposes the Probability Density Components Analysis (PDCA), a new alternative that combines filtering and frequency histograms to improve the classification procedure for single-channel synthetic aperture radar (SAR) images. This method was tested on L-band SAR data from the Advanced Land Observation System (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) sensor. The study area is localized in the Brazilian Amazon rainforest, northern Rondônia State (municipality of Candeias do Jamari), containing forest and land-use patterns. The proposed algorithm uses a moving window over the image, estimating the probability density curve in different image components. Therefore, a single input image generates an output with multiple components. Initially the multi-component data should be treated by noise-reduction methods, such as maximum noise fraction (MNF) or noise-adjusted principal components (NAPCs). Both methods enable reducing noise as well as ordering the multi-component data in terms of image quality. In this paper, the NAPC applied to the multi-components provided large reductions in the noise levels, and the color composites considering the first NAPC enhance the classification of different surface features. In the spectral classification, the Spectral Correlation Mapper and Minimum Distance were used. The results obtained were similar to the visual interpretation of optical images from TM-Landsat and Google Maps.

  7. Development and evaluation of probability density functions for a set of human exposure factors

    Energy Technology Data Exchange (ETDEWEB)

    Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.

    1999-06-01

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999, will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  8. Probability Density Function Characterization for Aggregated Large-Scale Wind Power Based on Weibull Mixtures

    Directory of Open Access Journals (Sweden)

    Emilio Gómez-Lázaro

    2016-02-01

    The Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, the variety of wind speed values and wind power curtailment.
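
    A minimal sketch of fitting a two-component Weibull mixture by maximum likelihood and scoring it with AIC and BIC; the wind-power sample is synthetic and the optimizer set-up is illustrative rather than the paper's procedure.

      # Sketch: two-component Weibull mixture fit scored with AIC/BIC on synthetic data.
      import numpy as np
      from scipy import stats, optimize

      rng = np.random.default_rng(4)
      power = np.concatenate([stats.weibull_min.rvs(1.8, scale=0.3, size=3000, random_state=rng),
                              stats.weibull_min.rvs(3.5, scale=0.8, size=2000, random_state=rng)])

      def neg_log_lik(theta):
          w, k1, s1, k2, s2 = theta
          pdf = (w * stats.weibull_min.pdf(power, k1, scale=s1)
                 + (1.0 - w) * stats.weibull_min.pdf(power, k2, scale=s2))
          return -np.sum(np.log(pdf + 1e-300))

      res = optimize.minimize(neg_log_lik, x0=[0.5, 2.0, 0.3, 3.0, 0.8],
                              bounds=[(0.01, 0.99)] + [(0.05, 20.0), (0.01, 2.0)] * 2,
                              method="L-BFGS-B")
      k_params = len(res.x)
      aic = 2 * k_params + 2 * res.fun               # AIC = 2k - 2 ln L
      bic = k_params * np.log(power.size) + 2 * res.fun
      print(f"weight={res.x[0]:.2f}, shapes={res.x[1]:.2f}/{res.x[3]:.2f}, AIC={aic:.1f}, BIC={bic:.1f}")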

  9. Development and evaluation of probability density functions for a set of human exposure factors

    International Nuclear Information System (INIS)

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999, will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  10. Statistical computation of Boltzmann entropy and estimation of the optimal probability density function from statistical sample

    CERN Document Server

    Sui, Ning; He, Ping

    2014-01-01

    In this work, we investigate the statistical computation of the Boltzmann entropy of statistical samples. For this purpose, we use both histograms and kernel functions to estimate the probability density function of statistical samples. We find that, due to coarse-graining, the entropy is a monotonically increasing function of the bin width for the histogram or of the bandwidth for the kernel estimation, which makes it difficult to select an optimal bin width/bandwidth for computing the entropy. Fortunately, we notice that there exists a minimum of the first derivative of the entropy for both histogram and kernel estimation, and this minimum point of the first derivative asymptotically points to the optimal bin width or bandwidth. We have verified these findings by a large number of numerical experiments. Hence, we suggest that the minimum of the first derivative of the entropy be used as a selector for the optimal bin width or bandwidth of density estimation. Moreover, the optimal bandwidth selected by the minimum of the first derivat...
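
    The selector described above can be sketched for the histogram case: compute the entropy estimate over a grid of bin widths, differentiate numerically, and take the width at the minimum of the derivative. The sample and the width grid below are illustrative.

      # Sketch: histogram entropy as a function of bin width, with the bin width
      # selected at the minimum of the first derivative dH/dw, as suggested above.
      import numpy as np

      rng = np.random.default_rng(5)
      sample = rng.normal(size=20_000)

      widths = np.linspace(0.02, 1.0, 80)
      entropy = []
      for w in widths:
          edges = np.arange(sample.min(), sample.max() + w, w)
          counts, _ = np.histogram(sample, bins=edges)
          p = counts[counts > 0] / sample.size
          entropy.append(-np.sum(p * np.log(p / w)))   # histogram estimate of differential entropy

      dH_dw = np.gradient(np.array(entropy), widths)   # numerical first derivative
      w_opt = widths[np.argmin(dH_dw)]
      print(f"selected bin width: {w_opt:.3f}")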

  11. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    Science.gov (United States)

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, restraining real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to the small-sample-size effect and more accurate in sEMG PDF shape screening applications. PMID:25570426

  12. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    Science.gov (United States)

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, restraining real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to the small-sample-size effect and more accurate in sEMG PDF shape screening applications.

  13. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    Science.gov (United States)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as the Gaussian, the exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase the estimation error, and all the information extracted from such a pdf will continue to contain this error. With such techniques, it is highly likely to observe artificial characteristics in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey where the TEC values are estimated from the GNSS measurement from the TNPGN-Active (Turkish National Permanent
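
    A minimal sketch of the KDE step with SciPy's gaussian_kde, using a synthetic stand-in for the TEC sample; real values would come from the TNPGN-Active GNSS processing mentioned above.

      # Sketch: non-parametric pdf estimation of TEC values with a Gaussian KDE,
      # plus the moment-based statistics mentioned above. The TEC sample is synthetic.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      tec = rng.gamma(shape=9.0, scale=2.0, size=5000)   # TECU, placeholder for estimated TEC

      kde = stats.gaussian_kde(tec)                      # bandwidth from Scott's rule by default
      grid = np.linspace(tec.min(), tec.max(), 400)
      pdf = kde(grid)

      mean, var = tec.mean(), tec.var(ddof=1)
      kurt = stats.kurtosis(tec)                         # excess kurtosis
      print(f"mean={mean:.1f} TECU, var={var:.1f}, excess kurtosis={kurt:.2f}, "
            f"pdf integrates to {np.trapz(pdf, grid):.3f}")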

  14. Representation of layer-counted proxy records as probability densities on error-free time axes

    Science.gov (United States)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2016-04-01

    Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, such as ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties increasingly affect a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing, and in particular aligning, specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the

  15. Entrainment Rate in Shallow Cumuli: Dependence on Entrained Dry Air Sources and Probability Density Functions

    Science.gov (United States)

    Lu, C.; Liu, Y.; Niu, S.; Vogelmann, A. M.

    2012-12-01

    In situ aircraft cumulus observations from the RACORO field campaign are used to estimate entrainment rate for individual clouds using a recently developed mixing fraction approach. The entrainment rate is computed based on the observed state of the cloud core and the state of the air that is laterally mixed into the cloud at its edge. The computed entrainment rate decreases when the air is entrained from increasing distance from the cloud core edge; this is because the air farther away from cloud edge is drier than the neighboring air that is within the humid shells around cumulus clouds. Probability density functions of entrainment rate are well fitted by lognormal distributions at different heights above cloud base for different dry air sources (i.e., different source distances from the cloud core edge). Such lognormal distribution functions are appropriate for inclusion into future entrainment rate parameterization in large scale models. To the authors' knowledge, this is the first time that probability density functions of entrainment rate have been obtained in shallow cumulus clouds based on in situ observations. The reason for the wide spread of entrainment rate is that the observed clouds are affected by entrainment mixing processes to different extents, which is verified by the relationships between the entrainment rate and cloud microphysics/dynamics. The entrainment rate is negatively correlated with liquid water content and cloud droplet number concentration due to the dilution and evaporation in entrainment mixing processes. The entrainment rate is positively correlated with relative dispersion (i.e., ratio of standard deviation to mean value) of liquid water content and droplet size distributions, consistent with the theoretical expectation that entrainment mixing processes are responsible for microphysics fluctuations and spectral broadening. The entrainment rate is negatively correlated with vertical velocity and dissipation rate because entrainment

  16. The role of presumed probability density function in the simulation of non premixed turbulent combustion

    CERN Document Server

    Coclite, Alessandro; De Palma, Pietro; Cutrone, Luigi

    2013-01-01

    Flamelet Progress Variable (FPV) combustion models allow the evaluation of all thermo-chemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre average) of chemical quantities. The choice of the PDF is a compromise between computational costs and accuracy level. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects in the simulation of non-premixed turbulent combustion. Three different models are considered: the standard one, based on the choice of a beta distribution for Z and a Dirac distribution for C; a model employing a beta distribution for both Z and C; a third model obtained using a beta distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, doesn't take in...

  17. Scaling of maximum probability density functions of velocity and temperature increments in turbulent systems

    CERN Document Server

    Huang, Y X; Zhou, Q; Qiu, X; Shang, X D; Lu, Z M; Liu, Y L

    2014-01-01

    In this paper, we introduce a new way to estimate the scaling parameter of a self-similar process by considering the maximum probability density function (pdf) of its increments. We prove this for $H$-self-similar processes in general and experimentally investigate it for turbulent velocity and temperature increments. We consider a turbulent velocity database from an experimental homogeneous and nearly isotropic turbulent channel flow, and a temperature data set obtained near the sidewall of a Rayleigh-B\\'{e}nard convection cell, where the turbulent flow is driven by buoyancy. For the former database, it is found that the maximum value of increment pdf $p_{\\max}(\\tau)$ is in good agreement with a lognormal distribution. We also obtain a scaling exponent $\\alpha\\simeq 0.37$, which is consistent with the scaling exponent for the first-order structure function reported in other studies. For the latter one, we obtain a scaling exponent $\\alpha_{\\theta}\\simeq0.33$. This index value is consistent with the Kolmogorov-Ob...
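
    The scaling relation used above, p_max(tau) ~ tau^(-alpha), can be checked on any self-similar signal. The sketch below does this for ordinary Brownian motion (H = 0.5), where the increment-pdf maximum must scale with exponent 0.5; the synthetic series and the kernel-density estimate of the pdf maximum are assumptions of this illustration, not the experimental databases of the paper.

      # Sketch: estimate alpha from the maxima of increment pdfs, p_max(tau) ~ tau^(-alpha),
      # for a Brownian-motion-like series (expected alpha = H = 0.5).
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)
      x = np.cumsum(rng.standard_normal(200_000))

      taus = np.array([2, 4, 8, 16, 32, 64, 128])
      p_max = []
      for tau in taus:
          incr = x[tau:] - x[:-tau]              # increments at lag tau
          kde = gaussian_kde(incr)
          p_max.append(kde(incr.mean())[0])      # the pdf maximum sits at the (zero) mean here

      alpha = -np.polyfit(np.log(taus), np.log(p_max), 1)[0]
      print(f"estimated alpha ~ {alpha:.2f} (expected ~0.5)")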

  18. On the method of logarithmic cumulants for parametric probability density function estimation.

    Science.gov (United States)

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
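
    The idea behind MoLC can be illustrated on the ordinary two-parameter gamma distribution, one of the simplest members of the families mentioned above: the first two sample log-cumulants (mean and variance of ln X) are matched to their analytic expressions psi(k) + ln(theta) and psi'(k). The Python sketch below is an assumption-laden illustration, not the estimators derived in the paper.

      # Sketch: method of logarithmic cumulants (MoLC) for a gamma(shape k, scale theta) sample.
      #   k1 = E[ln X] = psi(k) + ln(theta),   k2 = Var[ln X] = psi'(k)
      import numpy as np
      from scipy.special import digamma, polygamma
      from scipy.optimize import brentq
      from scipy.stats import gamma

      rng = np.random.default_rng(2)
      data = gamma(a=3.0, scale=2.0).rvs(size=20_000, random_state=rng)

      log_data = np.log(data)
      k1, k2 = log_data.mean(), log_data.var()

      # psi'(k) is monotone decreasing, so a simple bracketed root solve recovers the shape.
      k_hat = brentq(lambda k: polygamma(1, k) - k2, 1e-3, 1e3)
      theta_hat = np.exp(k1 - digamma(k_hat))
      print(f"MoLC estimates: shape ~ {k_hat:.2f}, scale ~ {theta_hat:.2f}")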

  19. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    Science.gov (United States)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.

  20. Probability density function and estimation for error of digitized map coordinates in GIS

    Institute of Scientific and Technical Information of China (English)

    童小华; 刘大杰

    2004-01-01

    Traditionally, it is widely accepted that measurement error obeys the normal distribution. In this paper, however, a new idea is proposed: the error in digitized data, a major derived data source in GIS, does not obey the normal distribution but rather a p-norm distribution with a determinate parameter. Assuming that the error is random and has the same statistical properties, the probability density functions of the normal distribution, the Laplace distribution and the p-norm distribution are derived based on the arithmetic-mean axiom, the median axiom and the p-median axiom, which means that the normal distribution is only one of these distributions and not the only possibility. Based on this idea, distribution fitness tests such as the skewness and kurtosis coefficient tests, the Pearson chi-square test and the Kolmogorov test are conducted for digitized data. The results show that the error in map digitization obeys a p-norm distribution whose parameter is close to 1.60. Least p-norm estimation and least squares estimation of digitized data are further analyzed, showing that the least p-norm adjustment is better than the least squares adjustment for digitized data processing in GIS.
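
    The least p-norm adjustment mentioned above amounts to minimizing the sum of |residual|^p instead of squared residuals. The sketch below fits a straight line to synthetic, heavy-tailed data with p = 1.6 (the parameter value reported in the abstract) and compares it with ordinary least squares; the data, model and optimizer choice are assumptions of this illustration.

      # Sketch: least p-norm line fit (p = 1.6) versus ordinary least squares on synthetic data.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      x = np.linspace(0.0, 10.0, 200)
      y = 1.5 * x + 0.7 + rng.laplace(scale=0.5, size=x.size)   # heavier-tailed errors

      def p_norm_loss(beta, p=1.6):
          a, b = beta
          return np.sum(np.abs(y - (a * x + b)) ** p)

      beta_ls = np.polyfit(x, y, 1)                             # least squares baseline [slope, intercept]
      beta_lp = minimize(p_norm_loss, x0=beta_ls, method="Nelder-Mead").x
      print("least squares:", beta_ls, "  least p-norm:", beta_lp)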

  1. A measurement-driven adaptive probability hypothesis density filter for multitarget tracking

    Institute of Scientific and Technical Information of China (English)

    Si Weijian; Wang Liwei; Qu Zhiyu

    2015-01-01

    This paper studies the dynamic estimation problem for multitarget tracking. A novel gating strategy that is based on the measurement likelihood of the target state space is proposed to improve the overall effectiveness of the probability hypothesis density (PHD) filter. Firstly, a measurement-driven mechanism based on this gating technique is designed to classify the measurements. In this mechanism, only the measurements for the existing targets are considered in the update step of the existing targets while the measurements of newborn targets are used for exploring newborn targets. Secondly, the gating strategy enables the development of a heuristic state estimation algorithm when sequential Monte Carlo (SMC) implementation of the PHD filter is investigated, where the measurements are used to drive the particle clustering within the space gate. The resulting PHD filter can achieve a more robust and accurate estimation of the existing targets by reducing the interference from clutter. Moreover, the target birth intensity can be adaptive to detect newborn targets, which is in accordance with the birth measurements. Simulation results demonstrate the computational efficiency and tracking performance of the proposed algorithm. © 2015 The Authors. Production and hosting by Elsevier Ltd. on behalf of CSAA&BUAA. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

  2. The role of presumed probability density functions in the simulation of nonpremixed turbulent combustion

    Science.gov (United States)

    Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.

    2016-07-01

    Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e. g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational costs and accuracy level. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects to predict turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and the third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not take into account the interaction between turbulence and chemical kinetics as well as the dependence of the progress variable not only on its mean but also on its variance. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed PDF closure models. The rationale behind the choice of the three PDFs is described in some detail and the prediction capability of the corresponding models is tested vs. well-known test cases, namely, the Sandia flames, and H2-air supersonic combustion.
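
    The presumed-PDF step common to all three models can be sketched as follows: given the mean and variance of the mixture fraction Z, a beta distribution is constructed and used to average any tabulated quantity phi(Z). The toy phi and the chosen moments below are placeholders, not a flamelet table, and the sketch covers only the beta-PDF part, not the Dirac or SMLD closures for C.

      # Sketch: presumed beta-PDF average of a scalar phi(Z) from the first two moments of Z.
      import numpy as np
      from scipy.stats import beta as beta_dist
      from scipy.integrate import quad

      def presumed_beta_average(phi, z_mean, z_var):
          # Beta parameters reproducing the prescribed mean and variance of Z in [0, 1].
          g = z_mean * (1.0 - z_mean) / z_var - 1.0
          a, b = z_mean * g, (1.0 - z_mean) * g
          value, _ = quad(lambda z: phi(z) * beta_dist.pdf(z, a, b), 0.0, 1.0)
          return value

      phi = lambda z: np.exp(-((z - 0.3) / 0.1) ** 2)   # toy "chemical" quantity phi(Z)
      print(presumed_beta_average(phi, z_mean=0.3, z_var=0.01))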

  3. First-passage-time densities and avoiding probabilities for birth-and-death processes with symmetric sample paths

    OpenAIRE

    Di Crescenzo, Antonio

    1998-01-01

    For truncated birth-and-death processes with two absorbing or two reflecting boundaries, necessary and sufficient conditions on the transition rates are given such that the transition probabilities satisfy a suitable spatial symmetry relation. This allows one to obtain simple expressions for first-passage-time densities and for certain avoiding transition probabilities. An application to an M/M/1 queueing system with two finite sequential queueing rooms of equal sizes is finall...

  4. Accuracy of the non-relativistic approximation to relativistic probability densities for a low-speed weak-gravity system

    Science.gov (United States)

    Liang, Shiuan-Ni; Lan, Boon Leong

    2015-11-01

    The Newtonian and general-relativistic position and velocity probability densities, which are calculated from the same initial Gaussian ensemble of trajectories using the same system parameters, are compared for a low-speed weak-gravity bouncing ball system. The Newtonian approximation to the general-relativistic probability densities does not always break down rapidly if the trajectories in the ensembles are chaotic -- the rapid breakdown occurs only if the initial position and velocity standard deviations are sufficiently small. This result is in contrast to the previously studied single-trajectory case where the Newtonian approximation to a general-relativistic trajectory will always break down rapidly if the two trajectories are chaotic. Similar rapid breakdown of the Newtonian approximation to the general-relativistic probability densities should also occur for other low-speed weak-gravity chaotic systems since it is due to sensitivity to the small difference between the two dynamical theories at low speed and weak gravity. For the bouncing ball system, the breakdown of the Newtonian approximation is transient because the Newtonian and general-relativistic probability densities eventually converge to invariant densities which are close in agreement.

  5. Comparison of Anger camera and BGO mosaic position-sensitive detectors for 'Super ACAR'. Precision electron momentum densities via angular correlation of annihilation radiation

    International Nuclear Information System (INIS)

    We discuss the relative merits of Anger cameras and Bismuth Germanate mosaic counters for measuring the angular correlation of positron annihilation radiation at a facility such as the proposed Positron Factory at Takasaki. The two possibilities appear equally cost effective at this time. (author)

  6. Comparison of Anger camera and BGO mosaic position-sensitive detectors for 'Super ACAR'. Precision electron momentum densities via angular correlation of annihilation radiation

    Energy Technology Data Exchange (ETDEWEB)

    Mills, A.P. Jr. [Bell Labs., Murray Hill, NJ (United States)]; West, R.N.; Hyodo, Toshio

    1997-03-01

    We discuss the relative merits of Anger cameras and Bismuth Germanate mosaic counters for measuring the angular correlation of positron annihilation radiation at a facility such as the proposed Positron Factory at Takasaki. The two possibilities appear equally cost effective at this time. (author)

  7. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows.

    Science.gov (United States)

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

    This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up by particle position and velocity Z(p)=(x(p),U(p)) and is represented by its PDF p(t; y(p),V(p)) which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables into the particle state vector, for example, the fluid velocity seen by particles Z(p)=(x(p),U(p),U(s)), and, consequently, handles an extended PDF p(t; y(p),V(p),V(s)) which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian characteristic of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to

  8. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows

    Science.gov (United States)

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

    This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up by particle position and velocity Zp=(xp,Up) and is represented by its PDF p (t ;yp,Vp) which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables into the particle state vector, for example, the fluid velocity seen by particles Zp=(xp,Up,Us) , and, consequently, handles an extended PDF p (t ;yp,Vp,Vs) which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian characteristic of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to describe physical systems

  9. Auxiliary results for "Nonparametric kernel estimation of the probability density function of regression errors using estimated residuals"

    CERN Document Server

    Samb, Rawane

    2012-01-01

    This manuscript is a supplemental document providing the omitted material for our paper entitled "Nonparametric kernel estimation of the probability density function of regression errors using estimated residuals" [arXiv:1010.0439]. The paper is submitted to Journal of Nonparametric Statistics.

  10. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    Science.gov (United States)

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  11. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    Science.gov (United States)

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
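
    The multiplicative-update idea at the heart of such density-of-states calculations can be illustrated with a Wang-Landau-flavoured sketch on a toy system whose exact density of states is known (N independent spins, with the "energy" equal to the number of up spins, so g(E) is binomial). This is only meant to show the mechanics of the update factor; it is not the self-consistent scheme of the paper.

      # Sketch: Wang-Landau-style estimate of ln g(E) with a multiplicative update factor f.
      import numpy as np
      from math import comb, log

      rng = np.random.default_rng(4)
      N = 20
      spins = rng.integers(0, 2, size=N)
      E = spins.sum()
      ln_g = np.zeros(N + 1)            # running estimate of ln g(E)
      ln_f = 1.0                        # ln of the multiplicative update factor

      for stage in range(12):           # ln f is halved at the end of each stage
          for _ in range(100_000):
              i = rng.integers(N)
              E_new = E + (1 - 2 * spins[i])                 # energy after flipping spin i
              # Accept with probability min(1, g(E)/g(E_new)) to flatten the visited-energy histogram.
              if rng.random() < np.exp(ln_g[E] - ln_g[E_new]):
                  spins[i] ^= 1
                  E = E_new
              ln_g[E] += ln_f
          ln_f *= 0.5

      ln_g += log(comb(N, N // 2)) - ln_g[N // 2]            # fix the arbitrary additive constant
      exact = np.array([log(comb(N, k)) for k in range(N + 1)])
      print("max |ln g - exact| =", np.abs(ln_g - exact).max())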

  12. Criticality of the net-baryon number probability distribution at finite density

    OpenAIRE

    Kenji Morita; Bengt Friman; Krzysztof Redlich

    2014-01-01

    We compute the probability distribution $P(N)$ of the net-baryon number at finite temperature and quark-chemical potential, $\\mu$, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For $\\mu/T

  13. Criticality of the net-baryon number probability distribution at finite density

    OpenAIRE

    Morita, Kenji; Friman, Bengt; Redlich, Krzysztof

    2015-01-01

    We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ , at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T

  14. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    Science.gov (United States)

    Angraini, Lily Maysari; Suparmi, Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. The lowering and raising operators can be obtained from the relationship between the original Hamiltonian and the (super)potential. In this paper, SUSY quantum mechanics is used to obtain the wave functions and energy levels of the modified Poschl-Teller potential. The wave functions and probability densities are plotted using the Delphi 7.0 programming language. Finally, the expectation values of quantum-mechanical operators can be calculated analytically in integral form or read off from the probability density graphs produced by the program.

  15. Complex Probability Distributions A Solution for the Long-Standing Problem of QCD at Finite Density

    CERN Document Server

    Azcoiti, V

    1996-01-01

    We show how the prescription of taking the absolute value of the fermion determinant in the integration measure of QCD at finite density, forgetting its phase, reproduces the correct thermodynamical limit. This prescription, which applies also to other gauge theories with non-positive-definite integration measure, also has the advantage of killing finite size effects due to extremely small mean values of the cosine of the phase of the fermion determinant. We also give an explanation for the pathological behaviour of quenched QCD at finite density.

  16. Column Density Probability Distribution Functions in Turbulent Molecular Clouds A Comparison between Theory and Observations

    CERN Document Server

    Burkert, A; Burkert, Andreas; Low, Mordecai-Mark Mac

    2001-01-01

    The one-point statistics of column density distributions of turbulent molecular cloud models are investigated and compared with observations. In agreement with the observations, the number N of pixels with surface density S is distributed exponentially N(S)=exp(-S/S0) in models of driven compressible supersonic turbulence. However, in contrast to the observations, the exponential slope defined by S0 is not universal but instead depends strongly on the adopted rms Mach number and on the smoothing of the data cube. We demonstrate that this problem can be solved if one restricts the analysis of the surface density distribution to subregions with sizes equal to the correlation length of the flow which turns out to be given by the driving scale. In this case, the column density distributions are universal with a slope that is in excellent agreement with the observations and independent of the Mach number or smoothing. The observed molecular clouds therefore are coherent structures with sizes of order their correla...

  17. Criticality of the net-baryon number probability distribution at finite density

    CERN Document Server

    Morita, Kenji; Redlich, Krzysztof

    2014-01-01

    We compute the probability distribution $P(N)$ of the net-baryon number at finite temperature and quark-chemical potential, $\\mu$, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For $\\mu/T<1$, the model exhibits the chiral crossover transition which belongs to the universality class of the $O(4)$ spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, $P(N)$. By considering ratios of $P(N)$ to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to $O(4)$ criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine $O(4)$ criticality in the context of binomial and negative-binomial distributions for the net proton number.
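
    The ratio-to-Skellam diagnostic used above is straightforward to reproduce for any net-number distribution: match the first two moments to fix the two Skellam parameters and divide bin by bin. In the sketch below, P(N) is a placeholder built from the difference of two negative-binomial counts, not the quark-meson-model distribution.

      # Sketch: ratio of an empirical net-number distribution P(N) to the Skellam distribution
      # with the same mean and variance.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      baryons = stats.nbinom.rvs(n=20, p=0.5, size=200_000, random_state=rng)
      antibaryons = stats.nbinom.rvs(n=15, p=0.5, size=200_000, random_state=rng)
      net = baryons - antibaryons

      mean, var = net.mean(), net.var()
      mu1, mu2 = (var + mean) / 2.0, (var - mean) / 2.0   # Skellam parameters with matched moments

      values, counts = np.unique(net, return_counts=True)
      p_emp = counts / counts.sum()
      ratio = p_emp / stats.skellam.pmf(values, mu1, mu2)
      for v, r in zip(values[::10], ratio[::10]):
          print(f"N = {v:4d}   P(N)/Skellam = {r:.3f}")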

  18. Criticality of the net-baryon number probability distribution at finite density

    Directory of Open Access Journals (Sweden)

    Kenji Morita

    2015-02-01

    Full Text Available We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T<1, the model exhibits the chiral crossover transition which belongs to the universality class of the O(4) spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.

  19. Probability Density Estimation for Non-flat Functions

    Institute of Scientific and Technical Information of China (English)

    汪洪桥; 蔡艳宁; 付光远; 王仕成

    2016-01-01

    Aiming at the probability density estimation problem for non-flat functions, this paper constructs a single-slack-factor multi-scale kernel support vector machine (SVM) probability density estimation model by improving the form of the constraint conditions of the traditional SVM model and introducing the multi-scale kernel method. In the model, a single slack factor instead of two types of slack factors is used to control the learning error of the SVM, which reduces the computational complexity of the model. At the same time, by introducing the multi-scale kernel method, the model can fit well both regions where the function changes sharply and regions where it changes smoothly. Several probability density estimation experiments with typical non-flat functions show that the single-slack-factor model learns faster than the common SVM probability density estimation model, and that, compared with the single-kernel method, the multi-scale kernel SVM model achieves better estimation precision.

  20. A summary of transition probabilities for atomic absorption lines formed in low-density clouds

    Science.gov (United States)

    Morton, D. C.; Smith, W. H.

    1973-01-01

    A table of wavelengths, statistical weights, and excitation energies is given for 944 atomic spectral lines in 221 multiplets whose lower energy levels lie below 0.275 eV. Oscillator strengths were adopted for 635 lines in 155 multiplets from the available experimental and theoretical determinations. Radiation damping constants also were derived for most of these lines. This table contains the lines most likely to be observed in absorption in interstellar clouds, circumstellar shells, and the clouds in the direction of quasars where neither the particle density nor the radiation density is high enough to populate the higher levels. All ions of all elements from hydrogen to zinc are included which have resonance lines longward of 912 A, although a number of weaker lines of neutrals and first ions have been omitted.

  1. Probability density functions for the variable solar wind near the solar cycle minimum

    CERN Document Server

    Vörös; Leitner, M; Narita, Y; Consolini, G; Kovács, P; Tóth, A; Lichtenberger, J

    2015-01-01

    Unconditional and conditional statistics are used for studying the histograms of magnetic field multi-scale fluctuations in the solar wind near the solar cycle minimum in 2008. The unconditional statistics involves the magnetic data during the whole year 2008. The conditional statistics involves the magnetic field time series split into concatenated subsets of data according to a threshold in dynamic pressure. The threshold separates fast-stream leading-edge compressional fluctuations from trailing-edge uncompressional fluctuations. The histograms obtained from these data sets are associated with both large-scale (B) and small-scale ({\delta}B) magnetic fluctuations, the latter corresponding to time-delayed differences. It is shown here that, by keeping flexibility but avoiding the unnecessary redundancy in modeling, the histograms can be effectively described by a limited set of theoretical probability distribution functions (PDFs), such as the normal, log-normal, kappa and logkappa functions. In a statistical sense the...

  2. Precipitation Study in Inconel 625 Alloy by Positron Annihilation Spectroscopy

    Institute of Scientific and Technical Information of China (English)

    M.Ahmad; W. Ahmad; M.A.Shaikh; Mahmud Ahmad; M.U. Rajput

    2003-01-01

    Precipitation in Inconel 625 alloy has been studied by positron annihilation spectroscopy and electron microscopy. The observed dependence of annihilation characteristics on aging time is attributed to the change of the positron state due to the increase and decrease of the density and size of the γ″ precipitates. Hardness measurements and lifetime measurements are in good agreement.

  3. On the thresholds, probability densities, and critical exponents of Bak-Sneppen-like models

    Science.gov (United States)

    Garcia, Guilherme J. M.; Dickman, Ronald

    2004-10-01

    We report a simple method to accurately determine the threshold and the exponent ν of the Bak-Sneppen (BS) model and also investigate the BS universality class. For the random-neighbor version of the BS model, we find the threshold x* = 0.33332(3), in agreement with the exact result x* = 1/3 given by mean-field theory. For the one-dimensional original model, we find x* = 0.6672(2), in good agreement with the results reported in the literature; for the anisotropic BS model we obtain x* = 0.7240(1). We study the finite-size effect x*(L) - x*(L→∞) ∝ L^(-ν), observed in a system with L sites, and find ν = 1.00(1) for the random-neighbor version, ν = 1.40(1) for the original model, and ν = 1.58(1) for the anisotropic case. Finally, we discuss the effect of defining the extremal site as the one which minimizes a general function f(x), instead of simply f(x) = x as in the original updating rule. We emphasize that models with extremal dynamics have singular stationary probability distributions p(x). Our simulations indicate the existence of two symmetry-based universality classes.
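
    For the random-neighbor version, the threshold quoted above is easy to reproduce: at each step the site with the minimal fitness and two randomly chosen sites receive new random fitness values, and in the stationary state the single-site fitness distribution is close to uniform on (x*, 1), so x* can be read off from the mean fitness. The system size, run length and estimator below are assumptions of this sketch, not the authors' method for extracting x* and ν.

      # Sketch: random-neighbor Bak-Sneppen model; estimate the threshold from 2<x> - 1
      # (mean-field prediction x* = 1/3).
      import numpy as np

      rng = np.random.default_rng(6)
      N, steps = 1000, 300_000
      x = rng.random(N)
      means = []

      for t in range(steps):
          idx = np.array([np.argmin(x), rng.integers(N), rng.integers(N)])
          x[idx] = rng.random(idx.size)            # replace the minimum and two random sites
          if t > steps // 2 and t % 1000 == 0:     # sample the mean fitness after equilibration
              means.append(x.mean())

      x_star = 2.0 * np.mean(means) - 1.0
      print(f"estimated threshold x* ~ {x_star:.3f} (mean-field value 1/3)")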

  4. An extended SMLD approach for presumed probability density function in flamelet combustion model

    CERN Document Server

    Coclite, Alessandro; De Palma, Pietro; Cutrone, Luigi

    2013-01-01

    This paper provides an extension of the standard flamelet progress variable (FPV) approach for turbulent combustion, applying the statistically most likely distribution (SMLD) framework to the joint PDF of the mixture fraction, Z, and the progress variable, C. In this way one does not need to make any assumption about the statistical correlation between Z and C and about the behaviour of the mixture fraction, as required in previous FPV models. In fact, for state-of-the-art models, with the assumption of very fast chemistry, Z is widely accepted to behave as a passive scalar characterized by a $\beta$-distribution function. Instead, the model proposed here evaluates the most probable joint distribution of Z and C without any assumption on their behaviour and provides an effective tool to verify the adequacy of widely used hypotheses, such as their statistical independence. The model is validated versus three well-known test cases, namely, the Sandia flames. The results are compared with those obtained by ...

  5. Non-stationary random vibration analysis of a 3D train-bridge system using the probability density evolution method

    Science.gov (United States)

    Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei

    2016-03-01

    Rail irregularity is one of the main sources causing train-bridge random vibration. A new random vibration theory for the coupled train-bridge systems is proposed in this paper. First, number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of rail irregularity power spectrum density was adopted to determine the representative points of spatial frequencies and phases to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with the slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Eventually, the Newmark-β integration method and double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge with excitation of random rail irregularity. The results showed that compared to the Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.

  6. Exact Probability with Perturbative Form for nu_mu -> nu_e Oscillations in Matter of Constant Density

    CERN Document Server

    Mann, W Anthony; Schneps, Jacob; Altinok, Ozgur

    2012-01-01

    We give an exact expression for the appearance probability P(nu_mu -> nu_e) describing neutrino oscillations in matter of constant density, derived using textbook quantum mechanics stratagems. Our formulation retains the clarity of an expansion in alpha = Delta m_{21}^2/Delta m_{31}^2 and sin2theta_{13} exhibited by the popular Cervera et al. formula [Nucl. Phys. B {\\bf 579}, 17 (2000)] while providing greater precision for analytic evaluation of terrestrial oscillation baselines of thousands of kilometers.

  7. Production bias and cluster annihilation: Why necessary?

    DEFF Research Database (Denmark)

    Singh, B.N.; Trinkaus, H.; Woo, C.H.

    1994-01-01

    the primary cluster density is high. Therefore, a sustained high swelling rate driven by production bias must involve the annihilation of primary clusters at sinks. A number of experimental observations which are unexplainable in terms of the conventional dislocation bias for monointerstitials is...

  8. GEO objects spatial density and collision probability in the Earth-centered Earth-fixed (ECEF) coordinate system

    Science.gov (United States)

    Dongfang, Wang; Baojun, Pang; Weike, Xiao; Keke, Peng

    2016-01-01

    The geostationary (GEO) ring is a valuable orbital region contaminated by an alarming amount of space debris. Due to its particular orbital characteristics, the spatial distribution of GEO objects depends strongly on local longitude. Therefore the local longitude distribution of these objects in the Earth-centered Earth-fixed (ECEF) coordinate system is much more stable and useful in practical applications than it is in the J2000 inertial coordinate system. In previous space debris environment models, the spatial density is calculated in the J2000 coordinate system, which makes it impossible to identify the spatial distribution in different local longitude regions. For GEO objects, this may introduce significant inaccuracy. In order to describe the spatial distribution of GEO objects in different local longitude regions, this paper introduces a new method which provides the spatial density distribution in the ECEF coordinate system. Based on two-line element (TLE) data from 2014/12/10 provided by the US Space Surveillance Network, the spatial density of cataloged GEO objects is given in the ECEF coordinate system. Combined with previous studies of "Cube" collision probability evaluation, the GEO-region collision probability in the ECEF coordinate system is also given. The examination reveals that the GEO space debris distribution is not uniform in longitude; it is relatively concentrated about the geopotential wells. The method given in this paper is also suitable for smaller debris in the GEO region. Currently such longitude-dependent analysis is not represented in GEO debris models such as ORDEM or MASTER. Based on our method, a future version of the space debris environment engineering model (SDEEM) developed by China will present a longitude-dependent GEO space debris environment description in the ECEF coordinate system.

  9. Antineutron-nucleus annihilation

    CERN Document Server

    Botta, E

    2001-01-01

    The antineutron-nucleus annihilation process has been studied by the OBELIX experiment at the CERN Low Energy Antiproton Ring (LEAR) in the (50-400) MeV/c projectile momentum range on C, Al, Cu, Ag, Sn, and Pb nuclear targets. A systematic survey of the annihilation cross-section, σ_α(A, p_n̄), has been performed, obtaining information on its dependence on the target mass number and on the incoming antineutron momentum. For the first time the mass-number dependence of the (inclusive) final-state composition of the process has been analyzed. Production of the rho vector meson has also been examined. (13 refs).

  10. Charged-Particle Thermonuclear Reaction Rates: II. Tables and Graphs of Reaction Rates and Probability Density Functions

    CERN Document Server

    Iliadis, Christian; Champagne, Art; Coc, Alain; Fitzgerald, Ryan

    2010-01-01

    Numerical values of charged-particle thermonuclear reaction rates for nuclei in the A=14 to 40 region are tabulated. The results are obtained using a method, based on Monte Carlo techniques, that has been described in the preceding paper of this series (Paper I). We present a low rate, median rate and high rate which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. The meaning of these quantities is in general different from the commonly reported, but statistically meaningless expressions, "lower limit", "nominal value" and "upper limit" of the total reaction rate. In addition, we approximate the Monte Carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the lognormal parameters {\\mu} and {\\sigma} at each temperature. We also provide a quantitative measure (Anderson-Darling test statistic) for the reliability of the lognormal approximation. The user can implement the approximate lognormal reaction rat...
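
    The lognormal summary described above is simple to apply to any set of Monte Carlo rate samples: the low, median and high rates are the 0.16, 0.50 and 0.84 quantiles, and the lognormal parameters follow from the mean and standard deviation of the log-samples. The sketch below uses synthetic samples and an Anderson-Darling statistic on ln(rate) as a stand-in reliability measure; it is not the tabulation machinery of the paper.

      # Sketch: quantile summary and lognormal approximation of Monte Carlo reaction-rate samples.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      rates = rng.lognormal(mean=-20.0, sigma=0.4, size=10_000)   # synthetic stand-in samples

      low, median, high = np.quantile(rates, [0.16, 0.50, 0.84])
      log_rates = np.log(rates)
      mu, sigma = log_rates.mean(), log_rates.std(ddof=1)
      ad = stats.anderson(log_rates, dist='norm')                 # small statistic -> good lognormal fit

      print(f"low/median/high rate: {low:.3e} / {median:.3e} / {high:.3e}")
      print(f"lognormal mu = {mu:.3f}, sigma = {sigma:.3f}, A-D statistic = {ad.statistic:.3f}")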

  11. Efficient simulation of density and probability of large deviations of sum of random vectors using saddle point representations

    CERN Document Server

    Dey, Santanu

    2012-01-01

    We consider the problem of efficient simulation estimation of the density function at the tails, and the probability of large deviations for a sum of independent, identically distributed, light-tailed and non-lattice random vectors. The latter problem besides being of independent interest, also forms a building block for more complex rare event problems that arise, for instance, in queuing and financial credit risk modeling. It has been extensively studied in literature where state independent exponential twisting based importance sampling has been shown to be asymptotically efficient and a more nuanced state dependent exponential twisting has been shown to have a stronger bounded relative error property. We exploit the saddle-point based representations that exist for these rare quantities, which rely on inverting the characteristic functions of the underlying random vectors. We note that these representations reduce the rare event estimation problem to evaluating certain integrals, which may via importance ...
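
    The state-independent exponential twisting mentioned above can be sketched for the simplest case, the tail probability of a sum of i.i.d. Exp(1) variables, where the tilted density is again exponential and the exact answer (a gamma tail) is available for comparison. This illustrates the classical importance-sampling baseline discussed in the abstract, not the saddle-point representations themselves.

      # Sketch: exponential twisting for P(S_n >= n*a), S_n a sum of n i.i.d. Exp(1) variables.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      n, a, reps = 50, 2.0, 100_000
      theta = 1.0 - 1.0 / a                               # tilt chosen so the tilted mean equals a

      samples = rng.exponential(scale=1.0 / (1.0 - theta), size=(reps, n))
      s = samples.sum(axis=1)
      weights = np.exp(-theta * s) / (1.0 - theta) ** n   # likelihood ratio f/f_theta per replication
      estimate = np.mean((s >= n * a) * weights)

      exact = stats.gamma.sf(n * a, n)                    # P(Gamma(n, 1) >= n*a)
      print(f"IS estimate = {estimate:.3e}, exact = {exact:.3e}")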

  12. On the Probability Density Function of the Test Statistic for One Nonlinear GLR Detector Arising from fMRI

    Directory of Open Access Journals (Sweden)

    Fangyuan Nan

    2007-01-01

    Full Text Available Recently an important and interesting nonlinear generalized likelihood ratio (GLR) detector emerged in functional magnetic resonance imaging (fMRI) data processing. However, the study of that detector is incomplete: the probability density function (pdf) of the test statistic was drawn from numerical simulations without much theoretical support and is therefore not firmly grounded. This correspondence presents a more accurate (asymptotic) closed form of the pdf by resorting to a non-central Wishart matrix and to asymptotic expansion of some integrals. It is then confirmed theoretically that the detector does possess the constant false alarm rate (CFAR) property under some practical regimes of signal-to-noise ratio (SNR) for finite samples, and the correct threshold selection method is given, which is very important for real fMRI data processing.

  13. Ignition Probability

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...

  14. A Herschel-SPIRE survey of the Mon R2 giant molecular cloud: analysis of the gas column density probability density function

    Science.gov (United States)

    Pokhrel, R.; Gutermuth, R.; Ali, B.; Megeath, T.; Pipher, J.; Myers, P.; Fischer, W. J.; Henning, T.; Wolk, S. J.; Allen, L.; Tobin, J. J.

    2016-09-01

    We present a far-IR survey of the entire Mon R2 giant molecular cloud (GMC) with Herschel-Spectral and Photometric Imaging REceiver cross-calibrated with Planck-High Frequency Instrument data. We fit the spectral energy distributions of each pixel with a greybody function and an optimal beta value of 1.8. We find that mid-range column densities obtained from far-IR dust emission and near-IR extinction are consistent. For the entire GMC, we find that the column density histogram, or column density probability distribution function (N-PDF), is lognormal below ~10^21 cm^-2. Above this value, the distribution takes a power law form with an index of -2.15. We analyse the gas geometry, N-PDF shape, and young stellar object (YSO) content of a selection of subregions in the cloud. We find no regions with pure lognormal N-PDFs. The regions with a combination of lognormal and one power-law N-PDF have a YSO cluster and a corresponding centrally concentrated gas clump. The regions with a combination of lognormal and two power-law N-PDF have significant numbers of typically younger YSOs but no prominent YSO cluster. These regions are composed of an aggregate of closely spaced gas filaments with no concentrated dense gas clump. We find that for our fixed scale regions, the YSO count roughly correlates with the N-PDF power-law index. The correlation appears steeper for single power-law regions relative to two power-law regions with a high column density cut-off, as a greater dense gas mass fraction is achieved in the former. A stronger correlation is found between embedded YSO count and the dense gas mass among our regions.
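
    The tail-fitting step of an N-PDF analysis can be sketched as follows: histogram the column densities in logarithmic bins and fit the power-law index of the high-column tail by a straight line in log-log space. The synthetic column densities, the cut-off and the binning below are placeholders chosen only to make the sketch self-contained, not the Herschel/Planck data reduction.

      # Sketch: power-law index of the high-column tail of a synthetic N-PDF.
      import numpy as np

      rng = np.random.default_rng(9)
      body = rng.lognormal(mean=np.log(3e20), sigma=0.5, size=90_000)
      tail = 1e21 * (1.0 - rng.random(10_000)) ** (-1.0 / 1.15)   # Pareto tail, pdf index ~ -2.15
      column_density = np.concatenate([body, tail])

      bins = np.logspace(19.5, 23.0, 60)
      hist, edges = np.histogram(column_density, bins=bins, density=True)
      centers = np.sqrt(edges[:-1] * edges[1:])

      sel = (centers > 1e21) & (hist > 0)                         # fit only the high-column tail
      slope, intercept = np.polyfit(np.log10(centers[sel]), np.log10(hist[sel]), 1)
      print(f"power-law index of the N-PDF tail ~ {slope:.2f}")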

  15. Positron annihilation microprobe

    Energy Technology Data Exchange (ETDEWEB)

    Canter, K.F. [Brandeis Univ., Waltham, MA (United States)]

    1997-03-01

    Advances in positron annihilation microprobe development are reviewed. The present resolution achievable is 3 μm. The ultimate resolution is expected to be 0.1 μm, which will enable the positron microprobe to be a valuable tool in the development of 0.1 μm scale electronic devices in the future. (author)

  16. Fidelity and breeding probability related to population density and individual quality in black brent geese Branta bernicla nigricans

    Science.gov (United States)

    Sedinger, J.S.; Chelgren, N.D.; Ward, D.H.; Lindberg, M.S.

    2008-01-01

    1. Patterns of temporary emigration (associated with non-breeding) are important components of variation in individual quality. Permanent emigration from the natal area has important implications for both individual fitness and local population dynamics. 2. We estimated both permanent and temporary emigration of black brent geese (Branta bernicla nigricans Lawrence) from the Tutakoke River colony, using observations of marked brent geese on breeding and wintering areas, and recoveries of ringed individuals by hunters. We used the likelihood developed by Lindberg, Kendall, Hines & Anderson 2001 (Combining band recovery data and Pollock's robust design to model temporary and permanent emigration. Biometrics, 57, 273-281) to assess hypotheses and estimate parameters. 3. Temporary emigration (the converse of breeding) varied among age classes up to age 5, and differed between individuals that bred in the previous years vs. those that did not. Consistent with the hypothesis of variation in individual quality, individuals with a higher probability of breeding in one year also had a higher probability of breeding the next year. 4. Natal fidelity of females ranged from 0.70 ± 0.07 to 0.96 ± 0.18 and averaged 0.83. In contrast to Lindberg et al. (1998), we did not detect a relationship between fidelity and local population density. Natal fidelity was negatively correlated with first-year survival, suggesting that competition among individuals of the same age for breeding territories influenced dispersal. Once females nested at the Tutakoke River, colony breeding fidelity was 1.0. 5. Our analyses show substantial variation in individual quality associated with fitness, which other analyses suggest is strongly influenced by early environment. Our analyses also suggest substantial interchange among breeding colonies of brent geese, as first shown by Lindberg et al. (1998).

  17. Black Hole Window into p-Wave Dark Matter Annihilation.

    Science.gov (United States)

    Shelton, Jessie; Shapiro, Stuart L; Fields, Brian D

    2015-12-01

    We present a new method to measure or constrain p-wave-suppressed cross sections for dark matter (DM) annihilations inside the steep density spikes induced by supermassive black holes. We demonstrate that the high DM densities, together with the increased velocity dispersion, within such spikes combine to make thermal p-wave annihilation cross sections potentially visible in γ-ray observations of the Galactic center (GC). The resulting DM signal is a bright central point source with emission originating from DM annihilations in the absence of a detectable spatially extended signal from the halo. We define two simple reference theories of DM with a thermal p-wave annihilation cross section and establish new limits on the combined particle and astrophysical parameter space of these models, demonstrating that Fermi Large Area Telescope is currently sensitive to thermal p-wave DM over a wide range of possible scenarios for the DM distribution in the GC. PMID:26684108

  18. Black Hole Window into p-Wave Dark Matter Annihilation.

    Science.gov (United States)

    Shelton, Jessie; Shapiro, Stuart L; Fields, Brian D

    2015-12-01

    We present a new method to measure or constrain p-wave-suppressed cross sections for dark matter (DM) annihilations inside the steep density spikes induced by supermassive black holes. We demonstrate that the high DM densities, together with the increased velocity dispersion, within such spikes combine to make thermal p-wave annihilation cross sections potentially visible in γ-ray observations of the Galactic center (GC). The resulting DM signal is a bright central point source with emission originating from DM annihilations in the absence of a detectable spatially extended signal from the halo. We define two simple reference theories of DM with a thermal p-wave annihilation cross section and establish new limits on the combined particle and astrophysical parameter space of these models, demonstrating that Fermi Large Area Telescope is currently sensitive to thermal p-wave DM over a wide range of possible scenarios for the DM distribution in the GC.

  19. Regression approaches to derive generic and fish group-specific probability density functions of bioconcentration factors for metals.

    Science.gov (United States)

    Tanaka, Taku; Ciffroy, Philippe; Stenberg, Kristofer; Capri, Ettore

    2010-11-01

    In the framework of environmental multimedia modeling studies dedicated to environmental and health risk assessments of chemicals, the bioconcentration factor (BCF) is a parameter commonly used, especially for fish. As for neutral lipophilic substances, it is assumed that BCF is independent of exposure levels of the substances. However, for metals some studies found the inverse relationship between BCF values and aquatic exposure concentrations for various aquatic species and metals, and also high variability in BCF data. To deal with the factors determining BCF for metals, we conducted regression analyses to evaluate the inverse relationships and introduce the concept of probability density function (PDF) for Cd, Cu, Zn, Pb, and As. In the present study, for building the regression model and derive the PDF of fish BCF, two statistical approaches are applied: ordinary regression analysis to estimate a regression model that does not consider the variation in data across different fish family groups; and hierarchical Bayesian regression analysis to estimate fish group-specific regression models. The results show that the BCF ranges and PDFs estimated for metals by both statistical approaches have less uncertainty than the variation of collected BCF data (the uncertainty is reduced by 9%-61%), and thus such PDFs proved to be useful to obtain accurate model predictions for environmental and health risk assessment concerning metals. PMID:20886641

  20. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    Science.gov (United States)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2016-07-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
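
    The two-stage structure described above (a Weibull-shaped seasonal envelope for the maximum pollen potential, modulated day by day by a weather regression) can be sketched as follows. The shape and scale values, the seasonal total and the regression coefficients are placeholders, not the calibrated parameters of the Korean model.

      # Sketch: Weibull seasonal envelope for the daily pollen potential, with a toy weather modulation.
      import numpy as np
      from scipy.stats import weibull_min

      days = np.arange(0, 60)                                   # days since season onset
      envelope = weibull_min.pdf(days, c=2.2, scale=25.0)       # Weibull envelope of the season
      seasonal_total = 5000.0                                   # grains m^-3, hypothetical
      potential = seasonal_total * envelope                     # maximum daily potential

      rng = np.random.default_rng(10)
      temp_anom = rng.normal(0.0, 2.0, size=days.size)          # toy daily weather predictors
      rain_mm = rng.exponential(1.0, size=days.size)
      modulation = np.clip(1.0 + 0.05 * temp_anom - 0.08 * rain_mm, 0.0, None)
      daily_concentration = potential * modulation
      print(daily_concentration.round(1)[:10])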

  1. Bubble chamber: antiproton annihilation

    CERN Multimedia

    1971-01-01

    These images show real particle tracks from the annihilation of an antiproton in the 80 cm Saclay liquid hydrogen bubble chamber. A negative kaon and a neutral kaon are produced in this process, as well as a positive pion. The invention of bubble chambers in 1952 revolutionized the field of particle physics, allowing real tracks left by particles to be seen and photographed by expanding liquid that had been heated to boiling point.

  2. WIMP annihilation effects on primordial star formation

    CERN Document Server

    Ripamonti, E; Bressan, A; Schneider, R; Ferrara, A; Marigo, P

    2009-01-01

    We study the effects of WIMP dark matter (DM) annihilations on the thermal and chemical evolution of the gaseous clouds where the first generation of stars in the Universe is formed. We follow the collapse of the gas inside a typical halo virializing at very high redshift, from well before virialization until a stage where the heating from DM annihilations exceeds the gas cooling rate. The DM energy input is estimated by inserting the energy released by DM annihilations (as predicted by an adiabatic contraction of the original DM profile) in a spherically symmetric radiative transfer scheme. In addition to the heating effects of the energy absorbed, we include its feedback upon the chemical properties of the gas, which is critical to determine the cooling rate in the halo, and hence the fragmentation scale and Jeans mass of the first stars. We find that DM annihilation does alter the free electron and especially the H2 fraction when the gas density is n>~ 10^4 cm^-3, for our fiducial parameter values. However...

  3. Matching Condition in γ-γ Annihilation Absorption

    Institute of Scientific and Technical Information of China (English)

    LIU Dang-Bo; CHEN Lei; LING Jia-Jie; YOU Jun-Han

    2005-01-01

    Two-photon annihilation (γ-γ reaction) is an important absorption mechanism in γ-ray physics and γ-ray astronomy. Using the markedly simplified direction-averaged cross section of annihilation σ(ω, ω') for a normal isotropic ambient radiation field around the γ-ray source, we obtain a matching condition for the energies of two interacting photons which ensures the attainment of the maximum annihilation probability. This new result is helpful for a better understanding of the absorption behaviour in the γ-γ annihilation process, and it predicts some possible line-like absorption structures in the emergent γ-ray continuous spectra. Some inferences from the matching condition are also presented.

  4. EUPDF: Eulerian Monte Carlo Probability Density Function Solver for Applications With Parallel Computing, Unstructured Grids, and Sprays

    Science.gov (United States)

    Raju, M. S.

    1998-01-01

    The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The application of the PDF method showed favorable results when applied to several supersonic

  5. Model assembly for estimating cell surviving fraction for both targeted and nontargeted effects based on microdosimetric probability densities.

    Directory of Open Access Journals (Sweden)

    Tatsuhiko Sato

    Full Text Available We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities.

  6. Model assembly for estimating cell surviving fraction for both targeted and nontargeted effects based on microdosimetric probability densities.

    Science.gov (United States)

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities.

  7. Scaling of maximum probability density function of velocity increments in turbulent Rayleigh-Bénard convection

    Institute of Scientific and Technical Information of China (English)

    邱翔; 黄永祥; 周全; 孙超

    2014-01-01

    In this paper, we apply a scaling analysis to the maximum of the probability density function (pdf) of velocity increments, i.e., p_max(τ) = max_{δu} p(δu, τ), where δu is the velocity increment over a time separation τ, for a velocity field of turbulent Rayleigh-Bénard convection obtained at the Taylor-microscale Reynolds number Re_λ ≈ 60. The scaling exponent α is comparable with that of the first-order velocity structure function, ζ(1), in which the large-scale effect might be constrained, showing the background fluctuations of the velocity field. It is found that the integral time T(x/D) scales as T(x/D) ~ (x/D)^-β, with a scaling exponent β = 0.25 ± 0.01, suggesting the large-scale inhomogeneity of the flow. Moreover, the pdf scaling exponent α(x, z) is strongly inhomogeneous in the x (horizontal) direction. The vertical-direction-averaged pdf scaling exponent ᾱ(x) obeys a logarithmic law with respect to x, the distance from the cell sidewall, with a slope of about 0.22 within the velocity boundary layer and about 0.28 near the cell sidewall. In the cell's central region, α(x, z) fluctuates around 0.37, which agrees well with ζ(1) obtained in high-Reynolds-number turbulent flows, implying the same intermittent correction. Moreover, the length of the inertial range expressed in decades, T_I(x), is found to increase with the wall distance x with an exponent of 0.65 ± 0.05.

  8. Derivation of Probability Density Function of Signal-to-Interference-Plus-Noise Ratio for the MS-to-MS Interference Analysis

    Directory of Open Access Journals (Sweden)

    Ho-Kyung Son

    2013-01-01

    Full Text Available This paper provides an analytical derivation of the probability density function of signal-to-interference-plus-noise ratio in the scenario where mobile stations interfere with each other. This analysis considers cochannel interference and adjacent channel interference. This could also remove the need for Monte Carlo simulations when evaluating the interference effect between mobile stations. Numerical verification shows that the analytical result agrees well with a Monte Carlo simulation. Also, we applied analytical methods for evaluating the interference effect between mobile stations using adjacent frequency bands. The analytical derivation of the probability density function can be used to provide the technical criteria for sharing a frequency band.
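
    As a rough illustration of how such a derivation can be cross-checked, a Monte Carlo sketch of an empirical SINR density under assumed Rayleigh fading, a log-distance path loss, and a fixed noise power; these modelling choices and all parameter values are assumptions for the example, not the paper's propagation model:

    import numpy as np

    # Monte Carlo estimate of the SINR density for one victim mobile surrounded by
    # K interfering mobiles. Rayleigh fading (exponential received power) and a
    # log-distance path loss are assumptions for the example, not the paper's model.
    rng = np.random.default_rng(1)
    K, N = 5, 200_000
    noise = 1e-3                                   # normalised noise power (assumed)

    def path_loss(d):
        return d ** -3.5                           # assumed path-loss exponent

    d_sig = 0.1                                    # victim link distance (arbitrary units)
    d_int = rng.uniform(0.2, 1.0, size=(N, K))     # interferer distances

    S = rng.exponential(1.0, N) * path_loss(d_sig)
    I_tot = (rng.exponential(1.0, (N, K)) * path_loss(d_int)).sum(axis=1)
    sinr_db = 10 * np.log10(S / (I_tot + noise))

    # Empirical pdf, to be compared against an analytical derivation.
    pdf, edges = np.histogram(sinr_db, bins=100, density=True)
    print(edges[np.argmax(pdf)])                   # location of the density peak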

  9. Semi-Annihilating Wino-Like Dark Matter

    CERN Document Server

    Spray, Andrew P

    2015-01-01

    Semi-annihilation is a generic feature of dark matter theories with symmetries larger than Z_2. We explore a model based on a Z_4-symmetric dark sector comprised of a scalar singlet and a "wino"-like fermion SU(2)_L triplet. This is the minimal example of semi-annihilation with a gauge-charged fermion. We study the interplay of the Sommerfeld effect in both annihilation and semi-annihilation channels. The modifications to the relic density allow otherwise-forbidden regions of parameter space and can substantially weaken indirect detection constraints. We perform a parameter scan and find that the entire region where the model comprises all the observed dark matter is accessible to current and planned direct and indirect searches.

  10. Extracting risk neutral probability densities by fitting implied volatility smiles: Some methodological points and an application to the 3M Euribor futures option prices

    OpenAIRE

    Andersen, Allan Bødskov; Wagener, Tom

    2002-01-01

    Following Shimko (1993), a large amount of research has evolved around the problem of extracting risk neutral densities from options prices by interpolating the Black-Scholes implied volatility smile. Some of the methods recently proposed use variants of the cubic spline. These methods have the property of producing non-differentiable probability densities. We argue that this is an undesirable feature and suggest circumventing the problem by fitting a smoothing spline of higher order polynom...

  11. Biological Effectiveness of Antiproton Annihilation

    DEFF Research Database (Denmark)

    Maggiore, C.; Agazaryan, N.; Bassler, N.;

    2004-01-01

    We describe an experiment designed to determine whether or not the densely ionizing particles emanating from the annihilation of antiprotons produce an increase in ‘‘biological dose’’ in the vicinity of the narrow Bragg peak for antiprotons compared to protons. This experiment is the first direct measurement of the biological effects of antiproton annihilation. The background, description, and status of the experiment are given.

  12. Annihilation Radiation in the Galaxy

    CERN Document Server

    Dermer, C D

    2000-01-01

    Observations of annihilation radiation in the Galaxy are briefly reviewed. We summarize astrophysical mechanisms leading to positron production, and recent estimates for production rates from nova and supernova nucleosynthesis in the Galaxy. The physical processes involved in the production of annihilation radiation in the interstellar medium are described. These include positron thermalization, charge exchange, radiative recombination, and direct annihilation. Calculations of 2γ and 3γ spectra and the positronium (Ps) fraction due to the annihilation of positrons in media containing H and He at different temperatures and ionization states are presented. Quenching of Ps by high temperature plasmas or dust could account for differences between 0.511 MeV and 3γ Ps continuum maps. These results are presented in the context of the potential of INTEGRAL to map sites of annihilation radiation in the Galaxy. Positron production by compact objects is also considered.

  13. Annihilation radiation in the Galaxy

    Science.gov (United States)

    Dermer, C. D.; Murphy, R. J.

    2001-09-01

    Observations of annihilation radiation in the Galaxy are briefly reviewed. We summarize astrophysical mechanisms leading to positron production and recent estimates for production rates from nova and supernova nucleosynthesis in the Galaxy. The physical processes involved in the production of annihilation radiation in the interstellar medium are described. These include positron thermalization, charge exchange, radiative recombination, and direct annihilation. Calculations of 2γ and 3γ spectra and the positronium (Ps) fraction due to the annihilation of positrons in media containing H and He at different temperatures and ionization states are presented. Quenching of Ps by high temperature plasmas or dust could account for differences between 0.511 MeV and 3γ Ps continuum maps. These results are presented in the context of the potential of INTEGRAL to map sites of annihilation radiation in the Galaxy. Positron production by compact objects is also considered.

  14. Neutrino annihilation in hot plasma

    International Nuclear Information System (INIS)

    We consider neutrino annihilation in a heat bath, including annihilation via the photon. We show that the annihilation cross section has high and narrow peaks corresponding to a plasmon resonance. This yields an enormous enhancement factor of O(10^8) in the differential cross section as compared with the purely weak contribution. We also evaluate numerically the thermally averaged neutrino annihilation rate per particle in the heat bath of the early universe to be <σ(νν̄ → e+e-)v> ≅ 2.93 G_F^2 T^2. We have accounted for the final-state blocking factors as well as for the fact that the center-of-mass frame of collisions is not necessarily the rest frame of the heat bath. Despite the resonances, electromagnetic processes represent only a minor effect in the averaged annihilation rate. (orig.)

  15. Neutrino annihilation in hot plasma

    International Nuclear Information System (INIS)

    We consider neutrino annihilation in a heat bath, including annihilation via the photon. We show that the annihilation cross section has high and narrow peaks corresponding to a plasmon resonance. This yields an enormous enhancement factor of O(10^8) in the differential cross section as compared with the purely weak contribution. We also evaluate numerically the thermally averaged neutrino annihilation rate per particle in the heat bath of the early Universe to be <σ(νν̄ → e+e-)v> ≅ 2.93 G_F^2 T^2. We have accounted for the final state blocking factors as well as for the fact that the center-of-mass frame of collisions is not necessarily the rest frame of the heat bath. Despite the resonances, electromagnetic processes represent only a minor effect in the averaged annihilation rate. (orig.)

  16. Effect of Phonotactic Probability and Neighborhood Density on Word-Learning Configuration by Preschoolers with Typical Development and Specific Language Impairment

    Science.gov (United States)

    Gray, Shelley; Pittman, Andrea; Weinhold, Juliet

    2014-01-01

    Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…

  17. Monomer Migration and Annihilation Processes

    Institute of Scientific and Technical Information of China (English)

    KE Jian-Hong; LIN Zhen-Quan; ZHUANG You-Yi

    2005-01-01

    We propose a two-species monomer migration-annihilation model, in which monomer migration reactions occur between any two aggregates of the same species and monomer annihilation reactions occur between two different species. Based on the mean-field rate equations, we investigate the evolution behaviors of the processes. For the case with an annihilation rate kernel proportional to the sizes of the reactants, the aggregation size distribution of either species approaches the modified scaling form in the symmetrical initial case, while for the asymmetrical initial case the heavy species with a large initial data scales according to the conventional form and the light one does not scale. Moreover, at most one species can survive finally. For the case with a constant annihilation rate kernel, both species may scale according to the conventional scaling law in the symmetrical case and survive together at the end.

  18. Dark Matter Annihilation at the Galactic Center

    Energy Technology Data Exchange (ETDEWEB)

    Linden, Timothy Ryan [Univ. of California, Santa Cruz, CA (United States)

    2013-06-01

    Observations by the WMAP and PLANCK satellites have provided extraordinarily accurate observations on the densities of baryonic matter, dark matter, and dark energy in the universe. These observations indicate that our universe is composed of approximately five times as much dark matter as baryonic matter. However, efforts to detect a particle responsible for the energy density of dark matter have been unsuccessful. Theoretical models have indicated that a leading candidate for the dark matter is the lightest supersymmetric particle, which may be stable due to a conserved R-parity. This dark matter particle would still be capable of interacting with baryons via weak-force interactions in the early universe, a process which was found to naturally explain the observed relic abundance of dark matter today. These residual annihilations can persist, albeit at a much lower rate, in the present universe, providing a detectable signal from dark matter annihilation events which occur throughout the universe. Simulations calculating the distribution of dark matter in our galaxy almost universally predict the galactic center of the Milky Way Galaxy (GC) to provide the brightest signal from dark matter annihilation due to its relative proximity and large simulated dark matter density. Recent advances in telescope technology have allowed for the first multiwavelength analysis of the GC, with suitable effective exposure, angular resolution, and energy resolution in order to detect dark matter particles with properties similar to those predicted by the WIMP miracle. In this work, I describe ongoing efforts which have successfully detected an excess in γ-ray emission from the region immediately surrounding the GC, which is difficult to describe in terms of standard diffuse emission predicted in the GC region. While the jury is still out on any dark matter interpretation of this excess, I describe several related observations which may indicate a dark matter origin. Finally, I discuss the

  19. ATHENA: an actual antihydrogen annihilation

    CERN Multimedia

    2002-01-01

    This is an image of an actual matter-antimatter annihilation due to an atom of antihydrogen in the ATHENA experiment, located on the Antiproton Decelerator (AD) at CERN since 2001. The antiproton produces four charged pions (yellow) whose positions are given by silicon microstrips (pink) before depositing energy in CsI crystals (yellow cubes). The positron also annihilates to produce back-to-back gamma rays (red).

  20. Joint Behaviour of Semirecursive Kernel Estimators of the Location and of the Size of the Mode of a Probability Density Function

    Directory of Open Access Journals (Sweden)

    Abdelkader Mokkadem

    2011-01-01

    Full Text Available We study the joint convergence rates of semirecursive kernel estimators of the location and the size of the mode of a probability density. We show how the estimation of the size of the mode allows measuring the relevance of the estimation of its location. We also highlight that, beyond their computational advantage over nonrecursive estimators, the semirecursive estimators are preferable for the construction of confidence regions.
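
    For orientation, a minimal nonrecursive illustration of the two quantities involved: the mode location is read off as the argmax of an ordinary kernel density estimate and the mode size as its value there. The semirecursive construction of the paper is not reproduced; the synthetic sample and the names theta_hat and mu_hat are placeholders:

    import numpy as np
    from scipy.stats import gaussian_kde

    # Read the mode location (argmax) and the mode size (density value at the mode)
    # off an ordinary kernel density estimate; the paper's semirecursive estimators
    # are not reproduced here.
    rng = np.random.default_rng(6)
    sample = rng.normal(loc=1.5, scale=0.8, size=5000)

    kde = gaussian_kde(sample)
    grid = np.linspace(sample.min(), sample.max(), 2001)
    density = kde(grid)

    theta_hat = grid[np.argmax(density)]   # estimated mode location
    mu_hat = density.max()                 # estimated mode size
    print(theta_hat, mu_hat)               # roughly 1.5 and 0.50 for this sample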

  1. Applications of slow positrons to cancer research: Search for selectivity of positron annihilation to skin cancer

    Energy Technology Data Exchange (ETDEWEB)

    Jean, Y.C. [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States)]. E-mail: jeany@umkc.edu; Li Ying [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Liu Gaung [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Chen, Hongmin [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Zhang Junjie [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Gadzia, Joseph E. [Dermatology, Department of Internal Medicine, University of Kansas Medical Center, Kansas City, KS 66103 (United States); Kansas Medical Clinic, Topeka, KS 66614 (United States)

    2006-02-28

    Slow positrons and positron annihilation spectroscopy (PAS) have been applied to medical research in searching for positron annihilation selectivity to cancer cells. We report the results of positron lifetime and Doppler broadening energy spectroscopies in human skin samples with and without cancer as a function of positron incident energy (up to 8 μm depth) and found that the positronium annihilates at a significantly lower rate and forms at a lower probability in the samples having either basal cell carcinoma (BCC) or squamous cell carcinoma (SCC) than in the normal skin. The significant selectivity of positron annihilation to skin cancer may open a new research area of developing positron annihilation spectroscopy as a novel medical tool to detect cancer formation externally and non-invasively at the early stages.

  2. A Herschel - SPIRE Survey of the Mon R2 Giant Molecular Cloud: Analysis of the Gas Column Density Probability Density Function

    Science.gov (United States)

    Pokhrel, R.; Gutermuth, R.; Ali, B.; Megeath, T.; Pipher, J.; Myers, P.; Fischer, W. J.; Henning, T.; Wolk, S. J.; Allen, L.; Tobin, J. J.

    2016-06-01

    We present a far-IR survey of the entire Mon R2 GMC with Herschel - SPIRE cross-calibrated with Planck - HFI data. We fit the SEDs of each pixel with a greybody function and an optimal beta value of 1.8. We find that mid-range column densities obtained from far-IR dust emission and near-IR extinction are consistent. For the entire GMC, we find that the column density histogram, or N-PDF, is lognormal below ~10^21 cm^-2. Above this value, the distribution takes a power law form with an index of -2.15. We analyze the gas geometry, N-PDF shape, and YSO content of a selection of subregions in the cloud. We find no regions with pure lognormal N-PDFs. The regions with a combination of lognormal and one power law N-PDF have a YSO cluster and a corresponding centrally concentrated gas clump. The regions with a combination of lognormal and two power law N-PDF have significant numbers of typically younger YSOs but no prominent YSO cluster. These regions are composed of an aggregate of closely spaced gas filaments with no concentrated dense gas clump. We find that for our fixed scale regions, the YSO count roughly correlates with the N-PDF power law index. The correlation appears steeper for single power law regions relative to two power law regions with a high column density cut-off, as a greater dense gas mass fraction is achieved in the former. A stronger correlation is found between embedded YSO count and the dense gas mass among our regions.
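
    A minimal sketch of the kind of two-component characterisation described above, on synthetic column densities: a lognormal description below a transition value and a maximum-likelihood power-law slope above it. The transition value of 10^21 cm^-2 is taken from the abstract; the synthetic sample and the simple estimators are illustrative assumptions, not the paper's fitting procedure:

    import numpy as np

    # Characterise a column-density PDF (N-PDF) as lognormal below a transition
    # value N_t and as a power law above it. The synthetic sample is a placeholder
    # for real map pixels.
    rng = np.random.default_rng(2)
    N_col = np.concatenate([
        rng.lognormal(mean=np.log(3e20), sigma=0.5, size=90_000),
        1e21 * (1.0 - rng.random(10_000)) ** (-1.0 / 1.15),   # tail ~ N^-2.15
    ])
    N_t = 1e21

    low, high = N_col[N_col < N_t], N_col[N_col >= N_t]

    # Lognormal part: mean and width of log10(N) below the transition.
    mu, sigma = np.log10(low).mean(), np.log10(low).std()

    # Power-law part: maximum-likelihood (Hill-type) slope of the differential
    # N-PDF above the transition, index = -(1 + 1/mean(ln(N/N_t))).
    alpha = -(1.0 + 1.0 / np.mean(np.log(high / N_t)))
    print(f"lognormal: mu={mu:.2f}, sigma={sigma:.2f}; power-law index={alpha:.2f}")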

  3. A $Herschel-SPIRE$ Survey of the Mon R2 Giant Molecular Cloud: Analysis of the Gas Column Density Probability Density Function

    CERN Document Server

    Pokhrel, R; Ali, B; Megeath, T; Pipher, J; Myers, P; Fischer, W J; Henning, T; Wolk, S J; Allen, L; Tobin, J J

    2016-01-01

    We present a far-IR survey of the entire Mon R2 GMC with $Herschel-SPIRE$ cross-calibrated with $Planck-HFI$ data. We fit the SEDs of each pixel with a greybody function and an optimal beta value of 1.8. We find that mid-range column densities obtained from far-IR dust emission and near-IR extinction are consistent. For the entire GMC, we find that the column density histogram, or N-PDF, is lognormal below $\\sim$10$^{21}$ cm$^{-2}$. Above this value, the distribution takes a power law form with an index of -2.16. We analyze the gas geometry, N-PDF shape, and YSO content of a selection of subregions in the cloud. We find no regions with pure lognormal N-PDFs. The regions with a combination of lognormal and one power law N-PDF have a YSO cluster and a corresponding centrally concentrated gas clump. The regions with a combination of lognormal and two power law N-PDF have significant numbers of typically younger YSOs but no prominent YSO cluster. These regions are composed of an aggregate of closely spaced gas fi...

  4. The influence of antioxidant on positron annihilation in polypropylene

    International Nuclear Information System (INIS)

    The purpose of this report is to check the influence of the carbonyl groups (CG), created by oxygen naturally dissolved in a polymer matrix and by the source irradiation, on the annihilation characteristics of free positrons, using positron annihilation lifetime spectroscopy (PALS) and coincidence Doppler-broadening spectroscopy (CDBS). Positron annihilation in a pure polypropylene (PP) and in an antioxidant-containing polypropylene (PPA) sample at room and low temperatures has been studied by CDBS. PALS has been used as an o-Ps (ortho-positronium) formation monitor. The momentum density distributions of electrons obtained by CDBS at the beginning of the measurements have been compared to those at the o-Ps intensity saturation level. It has been shown that the initial concentration of carbonyl groups in a PP sample is high, while for an antioxidant-containing sample, PPA, carbonyl groups are not detected by CDBS. CDBS spectra for PP can be explained by annihilation of free positrons with the oxygen contained in the carbonyl groups. For a PPA sample, no significant contribution of annihilation with oxygen core electrons can be concluded. (Y. Kazumata)

  5. Coverage Accuracy of Confidence Intervals for a Conditional Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    雷庆祝; 秦永松

    2007-01-01

    Point-wise confidence intervals for a conditional probability density function are considered. The confidence intervals are based on the empirical likelihood, and their coverage accuracy is assessed by developing Edgeworth expansions for the coverage probabilities. It is shown that the empirical likelihood confidence intervals are Bartlett correctable.

  6. H2: entanglement, probability density function, confined Kratzer oscillator, universal potential and (Mexican hat- or bell-type) potential energy curves

    CERN Document Server

    Van Hooydonk, G

    2011-01-01

    We review harmonic oscillator theory for closed, stable quantum systems. The H2 potential energy curve (PEC) of Mexican hat-type, calculated with a confined Kratzer oscillator, is better than the Rydberg-Klein-Rees (RKR) H2 PEC. Compared with QM, the theory of chemical bonding is simplified, since a confined Kratzer oscillator gives the long sought for universal function, once called the Holy Grail of Molecular Spectroscopy. This is validated with HF, I2, N2 and O2 PECs. We quantify the entanglement of spatially separated H2 quantum states, which gives a braid view. The equal probability for H2, originating either from HA+HB or HB+HA, is quantified with a Gauss probability density function. At the Bohr scale, confined harmonic oscillators behave properly at all extremes of bound two-nucleon quantum systems and are likely to be useful also at the nuclear scale.

  7. Weak annihilation cusp inside the dark matter spike about a black hole

    OpenAIRE

    Shapiro, Stuart L.; Shelton, Jessie

    2016-01-01

    We reinvestigate the effect of annihilations on the distribution of collisionless dark matter (DM) in a spherical density spike around a massive black hole. We first construct a very simple, pedagogic, analytic model for an isotropic phase space distribution function that accounts for annihilation and reproduces the "weak cusp" found by Vasiliev for DM deep within the spike and away from its boundaries. The DM density in the cusp varies as $r^{-1/2}$ for $s$-wave annihilation, where $r$ is th...

  8. Sommerfeld enhancement of invisible dark matter annihilation in galaxies and galaxy clusters

    CERN Document Server

    Chan, Man Ho

    2016-01-01

    Recent observations indicate that core-like dark matter structures exist in many galaxies, while numerical simulations reveal a singular dark matter density profile at the center. In this article, I show that if the annihilation of dark matter particles gives invisible sterile neutrinos, the Sommerfeld enhancement of the annihilation cross-section can give a sufficiently large annihilation rate to solve the core-cusp problem. The resultant core density, core radius, and their scaling relation generally agree with recent empirical fits from observations. Also, this model predicts that the resultant core-like structures in dwarf galaxies can be easily observed, but not for large normal galaxies and galaxy clusters.

  9. Sommerfeld enhancement of invisible dark matter annihilation in galaxies and galaxy clusters

    Science.gov (United States)

    Chan, Man Ho

    2016-07-01

    Recent observations indicate that core-like dark matter structures exist in many galaxies, while numerical simulations reveal a singular dark matter density profile at the center. In this article, I show that if the annihilation of dark matter particles gives invisible sterile neutrinos, the Sommerfeld enhancement of the annihilation cross-section can give a sufficiently large annihilation rate to solve the core-cusp problem. The resultant core density, core radius, and their scaling relation generally agree with recent empirical fits from observations. Also, this model predicts that the resultant core-like structures in dwarf galaxies can be easily observed, but not for large normal galaxies and galaxy clusters.

  10. ICA Blind Signal Separation Based on a New Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    张娟娟; 邸双亮

    2014-01-01

    This paper is concerned with the blind source separation (BSS) problem for mixtures of super-Gaussian and sub-Gaussian signals, using the maximum likelihood method based on independent component analysis (ICA). We construct a new type of probability density function (PDF), different from the PDFs previously used to separate mixed signals. Applying the newly constructed PDF to estimate the probability density of super-Gaussian and sub-Gaussian signals (assuming the source signals are mutually independent), it is not necessary to adjust parameter values manually, and the separation can be performed adaptively. Numerical experiments verify the feasibility of the newly constructed PDF; both the convergence time and the separation quality are improved compared with the original algorithm.
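
    As a generic illustration of the separation task (not of the paper's newly constructed PDF or its maximum-likelihood update), a sketch that mixes one super-Gaussian and one sub-Gaussian source and separates them with scikit-learn's FastICA; the mixing matrix and sample sizes are arbitrary choices:

    import numpy as np
    from sklearn.decomposition import FastICA

    # Mix one super-Gaussian (Laplacian) and one sub-Gaussian (uniform) source and
    # separate them with FastICA. This is a stand-in separation demo; the paper's
    # own score function built from its new PDF is not implemented here.
    rng = np.random.default_rng(3)
    n = 5000
    s1 = rng.laplace(size=n)                       # super-Gaussian source
    s2 = rng.uniform(-np.sqrt(3), np.sqrt(3), n)   # sub-Gaussian source
    S = np.column_stack([s1, s2])

    A = np.array([[1.0, 0.6], [0.4, 1.0]])         # arbitrary mixing matrix
    X = S @ A.T                                    # observed mixtures

    S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)

    # Correlation between true and recovered sources (up to permutation and sign).
    print(np.round(np.corrcoef(S.T, S_hat.T)[:2, 2:], 2))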

  11. Search for t bar t production in the e+jets channel using the probability density estimation (PDE) method at D0

    International Nuclear Information System (INIS)

    The authors construct probability density functions for signal and background events in multi-dimensional space, using Monte Carlo samples. A variant of the Bayes' discriminant function is then applied to classify signal and background events. The effect of some kinematic quantities on the performance of the discriminant has been studied and the results of applying the PDE method to search for the top quark in D0 data (p bar p collisions at √s = 1.8 TeV) will be presented

  12. Biological effectiveness of antiproton annihilation

    CERN Document Server

    Holzscheiter, Michael H.; Bassler, Niels; Beyer, Gerd; De Marco, John J.; Doser, Michael; Ichioka, Toshiyasu; Iwamoto, Keisuke S.; Knudsen, Helge V.; Landua, Rolf; Maggiore, Carl; McBride, William H.; Møller, Søren Pape; Petersen, Jorgen; Smathers, James B.; Skarsgard, Lloyd D.; Solberg, Timothy D.; Uggerhøj, Ulrik I.; Withers, H.Rodney; Vranjes, Sanja; Wong, Michelle; Wouters, Bradly G.

    2004-01-01

    We describe an experiment designed to determine whether or not the densely ionizing particles emanating from the annihilation of antiprotons produce an increase in “biological dose” in the vicinity of the narrow Bragg peak for antiprotons compared to protons. This experiment is the first direct measurement of the biological effects of antiproton annihilation. The experiment has been approved by the CERN Research Board for running at the CERN Antiproton Decelerator (AD) as AD-4/ACE (Antiproton Cell Experiment) and has begun data taking in June of 2003. The background, description and the current status of the experiment are given.

  13. Biological effectiveness of antiproton annihilation

    DEFF Research Database (Denmark)

    Holzscheiter, M.H.; Agazaryan, N.; Bassler, Niels;

    2004-01-01

    We describe an experiment designed to determine whether or not the densely ionizing particles emanating from the annihilation of antiprotons produce an increase in ‘‘biological dose’’ in the vicinity of the narrow Bragg peak for antiprotons compared to protons. This experiment is the first direct measurement of the biological effects of antiproton annihilation. The experiment has been approved by the CERN Research Board for running at the CERN Antiproton Decelerator (AD) as AD-4/ACE (Antiproton Cell Experiment) and has begun data taking in June of 2003. The background, description and the current...

  14. Positron Annihilation 3-D Momentum Spectrometry by Synchronous 2D-ACAR and DBAR

    Science.gov (United States)

    Burggraf, Larry W.; Bonavita, Angelo M.; Williams, Christopher S.; Fagan-Kelly, Stefan B.; Jimenez, Stephen M.

    2015-05-01

    A positron annihilation spectroscopy system capable of determining 3D electron-positron (e--e+) momentum densities has been constructed and tested. In this technique two opposed HPGe strip detectors measure angular coincidence of annihilation radiation (ACAR) and Doppler broadening of annihilation radiation (DBAR) in coincidence to produce 3D momentum datasets in which the parallel momentum component obtained from the DBAR measurement can be selected for annihilation events that possess a particular perpendicular momentum component observed in the 2D ACAR spectrum. A true 3D momentum distribution can also be produced. Measurement of 3-D momentum spectra in oxide materials has been demonstrated including O-atom defects in 6H SiC and silver atom substitution in lithium tetraborate crystals. Integration of the 3-D momentum spectrometer with a slow positron beam for future surface resonant annihilation spectrometry measurements will be described. Sponsorship from Air Force Office of Scientific Research

  15. Positron life time and annihilation Doppler broadening measurements on transition metal complexes

    Energy Technology Data Exchange (ETDEWEB)

    Levay, B. (Eoetvoes Lorand Tudomanyegyetem, Budapest (Hungary). Fizikai Kemiai es Radiologiai Tanszek); Varhelyi, Cs. (Babes-Bolyai Univ., Cluj (Romania)); Burger, K. (Eoetvoes Lorand Tudomanyegyetem, Budapest (Hungary). Szervetlen es Analitikai Kemiai Intezet)

    1982-01-01

    Positron lifetime and annihilation Doppler broadening measurements have been carried out on 44 solid coordination compounds. Several correlations have been found between the annihilation lifetime (τ1) and line shape parameters (L) and the chemical structure of the compounds. Halide ligands were the most active towards positrons. This fact supports the assumption on the possible formation of an (e+X-) positron-halide bound state. The lifetime was decreasing and the annihilation energy spectra were broadening with the increasing negative character of the halides. The aromatic base ligands affected the positron-halide interaction according to their basicity and space requirement and thus they indirectly affected the annihilation parameters, too. In the planar and tetrahedral complexes the electron density on the central metal ion affected directly the annihilation parameters, while in the octahedral mixed complexes it had only an indirect effect through the polarization of the halide ligands.

  16. Ionization compression impact on dense gas distribution and star formation, Probability density functions around H ii regions as seen by Herschel

    CERN Document Server

    Tremblin, P; Minier, V; Didelon, P; Hill, T; Anderson, L D; Motte, F; Zavagno, A; André, Ph; Arzoumanian, D; Audit, E; Benedettini, M; Bontemps, S; Csengeri, T; Di Francesco, J; Giannini, T; Hennemann, M; Luong, Q Nguyen; Marston, A P; Peretto, N; Rivera-Ingraham, A; Russeil, D; Rygl, K L J; Spinoglio, L; White, G J

    2014-01-01

    Ionization feedback should impact the probability distribution function (PDF) of the column density around the ionized gas. We aim to quantify this effect and discuss its potential link to the Core and Initial Mass Function (CMF/IMF). We used in a systematic way Herschel column density maps of several regions observed within the HOBYS key program: M16, the Rosette and Vela C molecular cloud, and the RCW 120 H ii region. We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a double-peak or enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion able t...

  17. Statistically advanced, self-similar, radial probability density functions of atmospheric and under-expanded hydrogen jets

    Science.gov (United States)

    Ruggles, Adam J.

    2015-11-01

    agreement. This was attributed to the high quality of the measurements which reduced the width of the correctly identified, noise-affected pure air distribution, with respect to the turbulent mixing distribution. The ignitability of the atmospheric jet is determined using the flammability factor calculated from both kernel density estimated (KDE) PDFs and PDFs generated using the newly proposed model. Agreement between contours from both approaches is excellent. Ignitability of the under-expanded jet is also calculated using KDE PDFs. Contours are compared with those calculated by applying the atmospheric model to the under-expanded jet. Once again, agreement is excellent. This work demonstrates that self-similar scalar mixing statistics and ignitability of atmospheric jets can be accurately described by the proposed model. This description can be applied with confidence to under-expanded jets, which are more realistic of leak and fuel injection scenarios.

  18. D-brane scattering and annihilation

    CERN Document Server

    D'Amico, Guido; Kleban, Matthew; Schillo, Marjorie

    2014-01-01

    We study the dynamics of parallel brane-brane and brane-antibrane scattering in string theory in flat spacetime, focusing on the pair production of open strings that stretch between the branes. We are particularly interested in the case of scattering at small impact parameter $b < l_s$, where there is a tachyon in the spectrum when a brane and an antibrane approach within a string length. Our conclusion is that despite the tachyon, branes and antibranes can pass through each other with only a very small probability of annihilating, so long as $g_s$ is small and the relative velocity $v$ is neither too small nor too close to 1. Our analysis is relevant also to the case of charged open string production in world-volume electric fields, and we make use of this T-dual scenario in our analysis. We briefly discuss the application of our results to a stringy model of inflation involving moving branes.

  19. The Dark Matter Annihilation Boost from Low-Temperature Reheating

    CERN Document Server

    Erickcek, Adrienne L

    2015-01-01

    The evolution of the Universe between inflation and the onset of Big Bang Nucleosynthesis is difficult to probe and largely unconstrained. This ignorance profoundly limits our understanding of dark matter: we cannot calculate its thermal relic abundance without knowing when the Universe became radiation dominated. Fortunately, small-scale density perturbations provide a probe of the early Universe that could break this degeneracy. If dark matter is a thermal relic, density perturbations that enter the horizon during an early matter-dominated era grow linearly with the scale factor prior to reheating. The resulting abundance of substructure boosts the annihilation rate by several orders of magnitude, which can compensate for the smaller annihilation cross sections that are required to generate the observed dark matter density in these scenarios. In particular, thermal relics with masses less than a TeV that thermally and kinetically decouple prior to reheating may already be ruled out by Fermi-LAT observations...

  20. The Derivation of the Probability Density Function of the t Distribution

    Institute of Scientific and Technical Information of China (English)

    彭定忠; 张映辉; 刘朝才

    2012-01-01

    The t distribution is one of the three important distributions widely applied in mathematical statistics. Most textbooks either omit the derivation of its probability density function or derive it only by the direct method. In this paper, the transformation method is used for the derivation, which simplifies the computation and reduces its difficulty.
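
    For reference, the textbook form of the result being derived (stated here for context; the paper's own presentation of the transformation method is not reproduced): if Z ~ N(0,1) and V ~ \chi^2_n are independent and T = Z/\sqrt{V/n}, then

    f_T(t) = \frac{\Gamma\!\left(\frac{n+1}{2}\right)}{\sqrt{n\pi}\,\Gamma\!\left(\frac{n}{2}\right)} \left(1 + \frac{t^2}{n}\right)^{-\frac{n+1}{2}}, \qquad -\infty < t < \infty,

    which follows by changing variables from (Z, V) to (T, V), writing the joint density f_{T,V}(t, v) = f_Z\!\left(t\sqrt{v/n}\right) f_V(v)\,\sqrt{v/n}, and integrating out v.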

  1. Recursive Kernel Estimation of Probability Density Function with Validation Data

    Institute of Scientific and Technical Information of China (English)

    宇世航; 赵世舜

    2012-01-01

    We consider the problem of estimating a population density function from surrogate and validation data. A recursive kernel density estimator is defined that incorporates both kinds of information, and the proposed estimator is shown to be asymptotically normal. Simulation results indicate that, for a given total sample size N, performance improves as the validation sample size n increases; for a fixed n, the fit near the peak of the density improves as N increases while the fit in the tails deteriorates. When N and n increase together, the estimate approaches f(x) more closely and becomes smoother.
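
    A minimal sketch of a recursive kernel density estimator of the Wolverton-Wagner type on a single data stream; the combination of surrogate and validation data proposed in the paper is not reproduced, and the bandwidth sequence and sample are illustrative assumptions:

    import numpy as np

    # Recursive kernel density estimate on a grid: each new observation X_n updates
    #   f_n(x) = ((n-1)/n) f_{n-1}(x) + K((x - X_n)/h_n) / (n h_n),
    # with a shrinking bandwidth h_n = h0 * n^(-1/5) and a Gaussian kernel.
    def gaussian_kernel(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

    rng = np.random.default_rng(4)
    data = rng.normal(0.0, 1.0, 2000)      # stand-in for the observed sample
    grid = np.linspace(-4, 4, 401)

    f = np.zeros_like(grid)
    h0 = 1.0
    for n, x_n in enumerate(data, start=1):
        h_n = h0 * n ** (-0.2)
        f = (n - 1) / n * f + gaussian_kernel((grid - x_n) / h_n) / (n * h_n)

    print(grid[np.argmax(f)], f.max())     # should be near 0 and 0.40 here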

  2. Impact of distributed generation on the probability density of voltage sags

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Alessandro Candido Lopes [CELG - Companhia Energetica de Goias, Goiania, GO (Brazil). Generation and Transmission. System' s Operation Center], E-mail: alessandro.clr@celg.com.br; Batista, Adalberto Jose [Universidade Federal de Goias (UFG), Goiania, GO (Brazil)], E-mail: batista@eee.ufg.br; Leborgne, Roberto Chouhy [Universidade Federal do Rio Grande do Sul (UFRS), Porto Alegre, RS (Brazil)], E-mail: rcl@ece.ufrgs.br; Emiliano, Pedro Henrique Mota, E-mail: ph@phph.com.br

    2009-07-01

    This article presents the impact of distributed generation in studies of voltage sags caused by faults in the electrical system. Short circuits to ground were simulated on 62 lines of 230, 138, 69 and 13.8 kV that are part of the electrical system of the city of Goiania, Goias state. For each fault position, the 380 V bus of an industrial consumer sensitive to such sags was monitored. Different levels of distributed generation (DG) were inserted near the consumer, and the short-circuit simulations, with monitoring of the 380 V bus, were repeated. A Monte Carlo stochastic simulation (SMC) study was performed to obtain, for each DG level, the sag probability curves and the probability density by voltage class. From these curves, the average number of sags in each class to which the consumer bus may be subjected annually was obtained. The simulations were performed using the Simultaneous Fault Analysis Program (ANAFAS). In order to overcome the intrinsic limitations of this program's simulation methods and to allow data entry via windows, a computational tool was developed in the Java language. Data processing was done using the MATLAB software.

  3. Effect of positron-atom interactions on the annihilation gamma spectra of molecules

    CERN Document Server

    Green, D G; Wang, F; Gribakin, G F; Surko, C M

    2012-01-01

    Calculations of gamma spectra for positron annihilation on a selection of molecules, including methane and its fluoro-substitutes, ethane, propane, butane and benzene are presented. The annihilation gamma spectra characterise the momentum distribution of the electron-positron pair at the instant of annihilation. The contribution to the gamma spectra from individual molecular orbitals is obtained from electron momentum densities calculated using modern computational quantum chemistry density functional theory tools. The calculation, in its simplest form, effectively treats the low-energy (thermalised, room-temperature) positron as a plane wave and gives annihilation gamma spectra that are about 40% broader than experiment, although the main chemical trends are reproduced. We show that this effective "narrowing" of the experimental spectra is due to the action of the molecular potential on the positron, chiefly, due to the positron repulsion from the nuclei. It leads to a suppression of the contribution of smal...

  4. New Limits on Thermally Annihilating Dark Matter from Neutrino Telescopes

    Science.gov (United States)

    Lopes, J.; Lopes, I.

    2016-08-01

    We used a consistent and robust solar model to obtain upper limits placed by neutrino telescopes, such as IceCube and Super-Kamiokande, on the dark matter-nucleon scattering cross-section, for a general model of dark matter with a velocity dependent (p-wave) thermally averaged cross-section. In this picture, the Boltzmann equation for the dark matter abundance is numerically solved, satisfying the dark matter density measured from the cosmic microwave background. We show that for lower cross-sections and higher masses, the dark matter annihilation rate drops sharply, resulting in upper bounds on the scattering cross-section that are one order of magnitude above those derived from a velocity independent (s-wave) annihilation cross-section. Our results show that upper limits on the scattering cross-section obtained from dark matter annihilating in the Sun are sensitive to the uncertainty in current standard solar models, fluctuating by a maximum of 20% depending on the annihilation channel.

  5. New Limits on Thermally annihilating Dark Matter from Neutrino Telescopes

    CERN Document Server

    Lopes, José

    2016-01-01

    We used a consistent and robust solar model to obtain upper limits placed by neutrino telescopes, such as IceCube and Super-Kamiokande, on the Dark Matter-nucleon scattering cross-section, for a general model of Dark Matter with a velocity dependent (p-wave) thermally averaged cross-section. In this picture, the Boltzmann equation for the Dark Matter abundance is numerically solved satisfying the Dark Matter density measured from the Cosmic Microwave Background (CMB). We show that for lower cross-sections and higher masses, the Dark Matter annihilation rate drops sharply, resulting in upper bounds on the scattering cross-section one order of magnitude above those derived from a velocity independent (s-wave) annihilation cross-section. Our results show that upper limits on the scattering cross-section obtained from Dark Matter annihilating in the Sun are sensitive to the uncertainty in current standard solar models, fluctuating by at most 20% depending on the annihilation channel.

  6. Moments of the Hilbert-Schmidt probability distributions over determinants of real two-qubit density matrices and of their partial transposes

    CERN Document Server

    Slater, Paul B

    2010-01-01

    The nonnegativity of the determinant of the partial transpose of a two-qubit (4 x 4) density matrix is both a necessary and sufficient condition for its separability. While the determinant is restricted to the interval [0,1/256], the determinant of the partial transpose can range over [-1/16,1/256], with negative values corresponding to entangled states. We report here the exact values of the first nine moments of the probability distribution of the partial transpose over this interval, with respect to the Hilbert-Schmidt (metric volume element) measure on the nine-dimensional convex set of real two-qubit density matrices. Rational functions C_{2 j}(m), yielding the coefficients of the 2j-th power of even polynomials occurring at intermediate steps in our derivation of the m-th moment, emerge. These functions possess poles at finite series of consecutive half-integers (m=-3/2,-1/2,...,(2j-1)/2), and certain (trivial) roots at finite series of consecutive natural numbers (m=0, 1,...). Additionally, the (nontri...

  7. A Comprehensive Search for Dark Matter Annihilation in Dwarf Galaxies

    CERN Document Server

    Geringer-Sameth, Alex; Walker, Matthew G

    2014-01-01

    We present a new formalism designed to discover dark matter annihilation occurring in the Milky Way's dwarf galaxies. The statistical framework extracts all available information in the data by simultaneously combining observations of all the dwarf galaxies and incorporating the impact of particle physics properties, the distribution of dark matter in the dwarfs, and the detector response. The method performs maximally powerful frequentist searches and produces confidence limits on particle physics parameters. Probability distributions of test statistics under various hypotheses are constructed exactly, without relying on large sample approximations. The derived limits have proper coverage by construction and claims of detection are not biased by imperfect background modeling. We implement this formalism using data from the Fermi Gamma-ray Space Telescope to search for an annihilation signal in the complete sample of Milky Way dwarfs whose dark matter distributions can be reliably determined. We find that the...

  8. Deduction and Validation of an Eulerian-Eulerian Model for Turbulent Dilute Two-Phase Flows by Means of the Phase Indicator Function Disperse Elements Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    Santiago Lain; Ricardo Aliod

    2000-01-01

    A statistical formalism overcoming some conceptual and practical difficulties arising in existing two-phase flow (2PHF) mathematical modelling has been applied to propose a model for dilute 2PHF turbulent flows. Phase interaction terms with a clear physical meaning enter the equations and the formalism provides some guidelines for the avoidance of closure assumptions or the rational approximation of these terms. Continuous phase averaged continuity, momentum, turbulent kinetic energy and turbulence dissipation rate equations have been rigorously and systematically obtained in a single step. These equations display a structure similar to that for single-phase flows. It is also assumed that dispersed phase dynamics is well described by a probability density function (pdf) equation and Eulerian continuity, momentum and fluctuating kinetic energy equations for the dispersed phase are deduced. An extension of the standard k-ε turbulence model for the continuous phase is used. A gradient transport model is adopted for the dispersed phase fluctuating fluxes of momentum and kinetic energy at the non-colliding, large inertia limit. This model is then used to predict the behaviour of three axisymmetric turbulent jets of air laden with solid particles varying in size and concentration. Qualitative and quantitative numerical predictions compare reasonably well with the three different sets of experimental results, studying the influence of particle size, loading ratio and flow confinement velocity.

  9. Probability Density Function Method for Observing Reconstructed Attractor Structure

    Institute of Scientific and Technical Information of China (English)

    陆宏伟; 陈亚珠; 卫青

    2004-01-01

    Probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, it is the first time that the PDF method is put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6-6.5 dimensional complex dynamical systems. It is found that the PDF is not symmetrically distributed when the time delay is small, while the PDF satisfies a Gaussian distribution when the time delay is big enough. A cluster effect mechanism is presented to explain this phenomenon. By studying the shape of the PDFs, it is clearly indicated that the role played by the time delay is more important than that of the embedding dimension in the reconstruction. Results have demonstrated that the PDF method represents a promising numerical approach for the observation of the reconstructed attractor structure and may provide more information and new diagnostic potential of the analyzed cardiac system.
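
    A minimal sketch of the two ingredients mentioned above: a time-delay reconstruction of a scalar series and the probability density of pairwise distances between phase points (the quantity underlying correlation-dimension estimates). The signal, embedding dimension m, and delay tau are illustrative choices, not the RR-interval data or parameters of the study:

    import numpy as np

    # Time-delay embedding of a scalar series and the probability density of
    # pairwise distances between the reconstructed phase points.
    def embed(x, m, tau):
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

    rng = np.random.default_rng(5)
    t = np.arange(4000) * 0.05
    x = np.sin(t) + 0.1 * rng.normal(size=t.size)        # stand-in scalar signal

    Y = embed(x, m=6, tau=10)
    idx = rng.choice(len(Y), size=1000, replace=False)   # subsample for speed
    d = np.linalg.norm(Y[idx, None, :] - Y[None, idx, :], axis=-1)
    d = d[np.triu_indices_from(d, k=1)]

    pdf, edges = np.histogram(d, bins=80, density=True)  # distance PDF of the attractor
    print(edges[np.argmax(pdf)])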

  10. Onset of exciton-exciton annihilation in single-layer black phosphorus

    Science.gov (United States)

    Surrente, A.; Mitioglu, A. A.; Galkowski, K.; Klopotowski, L.; Tabis, W.; Vignolle, B.; Maude, D. K.; Plochocka, P.

    2016-08-01

    The exciton dynamics in monolayer black phosphorus is investigated over a very wide range of photoexcited exciton densities using time resolved photoluminescence. At low excitation densities, the exciton dynamics is successfully described in terms of a double exponential decay. With increasing exciton population, a fast, nonexponential component develops as exciton-exciton annihilation takes over as the dominant recombination mechanism under high excitation conditions. Our results identify an upper limit for the injection density, after which exciton-exciton annihilation reduces the quantum yield, which will significantly impact the performance of light emitting devices based on single-layer black phosphorus.

  11. Probability density function (Pdf) of daily rainfall depths by means of superstatistics of hydro-climatic fluctuations for African test cities

    Science.gov (United States)

    Topa, M. E.; De Paola, F.; Giugni, M.; Kombe, W.; Touré, H.

    2012-04-01

    The dynamics of hydro-climatic processes fluctuate over a wide range of temporal scales. Such fluctuations are often unpredictable for ecosystems, and adapting to them represents a great challenge for the survival and stability of species. An unresolved issue is how much these fluctuations of climatic variables at different temporal scales can influence the frequency and intensity of extreme events, and how much these events can modify the life of ecosystems. It is by now widely accepted that an increase in the frequency and intensity of extreme events will be one of the strongest characteristics of global climate change, with major social and biotic implications (Porporato et al 2006). Recent field experiments (Gutshick and BassiriRad, 2003) and numerical analyses (Porporato et al 2004) have shown that extreme events can have non-negligible consequences for organisms in water-limited ecosystems. The adaptation measures and the responses of species and ecosystems to hydro-climatic variations are therefore strongly interconnected with the probabilistic structure of these fluctuations. Generally, the nonlinear intermittent dynamics of a state variable z (a rainfall depth or the interarrival time between two storms) at short time scales (for example, daily) is described by a probability density function (pdf) p(z|υ), where υ is the parameter of the distribution. If this parameter υ varies so that the external forcing fluctuates on a longer temporal scale, z reaches a new "local" equilibrium. When the temporal scale of the variation of υ is larger than that of z, the probability distribution of z can be obtained as a superposition of the temporary equilibria (the "superstatistics" approach), i.e.: p(z) = ∫ p(z|υ)·φ(υ) dυ (1) where p(z|υ) is the conditional probability of z given υ, while φ(υ) is the pdf of υ (Beck, 2001; Benjamin and Cornell, 1970). The present work, carried out within FP7-ENV-2010 CLUVA (CLimate Change
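
    A minimal numerical illustration of Eq. (1): with an exponential conditional density p(z|υ) and a gamma density φ(υ) for the fluctuating parameter, the superstatistical marginal integrates to a heavy-tailed (Lomax) density. The choice of distributions and the shape/scale values are assumptions for the example, not the hydro-climatic fits of the study:

    import numpy as np
    from scipy import integrate, stats

    # Superstatistics: p(z) = integral of p(z|v) * phi(v) dv, with an exponential
    # conditional density (rate v) and a gamma density phi(v). Shape k and scale
    # theta are illustrative values only.
    k, theta = 2.0, 0.5

    def marginal(z):
        integrand = lambda v: v * np.exp(-v * z) * stats.gamma.pdf(v, a=k, scale=theta)
        return integrate.quad(integrand, 0.0, np.inf)[0]

    z = np.linspace(0.0, 10.0, 6)
    numeric = np.array([marginal(zi) for zi in z])

    # Closed form for this choice: a Lomax density k*theta*(1 + theta*z)^-(k+1).
    closed = k * theta * (1.0 + theta * z) ** (-(k + 1.0))
    print(np.allclose(numeric, closed, rtol=1e-5))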

  12. Probability in quantum mechanics

    Directory of Open Access Journals (Sweden)

    J. G. Gilson

    1982-01-01

    Full Text Available By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.

  13. High nuclear temperatures by antimatter-matter annihilation

    International Nuclear Information System (INIS)

    It is suggested that the quark-gluon phase be created through the use of antiproton or antideuteron beams. The first advantage of this method, using antiprotons of momentum higher than 1.5 GeV/c, is that the higher-momentum antiprotons penetrate more deeply, so that the mesons produced are more nearly contained within the nucleus. Another advantage is that the annihilation products are very forward-peaked and tend to form a beam of mesons, so that the energy density does not disperse very rapidly. Calculations were performed using the intranuclear cascade to try to follow the process of annihilation in some detail. The intranuclear cascade type of calculation is compared to the hydrodynamic approach. 8 refs., 8 figs

  14. Dark Stars and Boosted Dark Matter Annihilation Rates

    CERN Document Server

    Ilie, Cosmin; Spolyar, Douglas

    2010-01-01

    Dark Stars (DS) may constitute the first phase of stellar evolution, powered by dark matter (DM) annihilation. We will investigate here the properties of DS assuming the DM particle has the required properties to explain the excess positron and electron signals in the cosmic rays detected by the PAMELA and FERMI satellites. Any possible DM interpretation of these signals requires exotic DM candidates, with annihilation cross sections a few orders of magnitude higher than the canonical value required for correct thermal relic abundance for Weakly Interacting Dark Matter candidates; additionally in most models the annihilation must be preferentially to leptons. Secondly, we study the dependence of DS properties on the concentration parameter of the initial DM density profile of the halos where the first stars are formed. We restrict our study to the DM in the star due to simple (vs. extended) adiabatic contraction and minimal (vs. extended) capture; this simple study is sufficient to illustrate depend...

  15. High nuclear temperatures by antimatter-matter annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Gibbs, W.R.; Strottman, D.

    1985-01-01

    It is suggested that the quark-gluon phase be created through the use of antiproton or antideuteron beams. The first advantage of this method, using antiprotons of momentum higher than 1.5 GeV/c, is that the higher-momentum antiprotons penetrate more deeply, so that the mesons produced are more nearly contained within the nucleus. Another advantage is that the annihilation products are very forward-peaked and tend to form a beam of mesons, so that the energy density does not disperse very rapidly. Calculations were performed using the intranuclear cascade to try to follow the process of annihilation in some detail. The intranuclear cascade type of calculation is compared to the hydrodynamic approach. 8 refs., 8 figs. (LEW)

  16. Annihilating dark matter and the galactic positron excess

    CERN Document Server

    Maor, I

    2006-01-01

    The possibility that the Galactic dark matter is composed of neutralinos that are just above half the $Z^o$ mass is examined, in the context of the Galactic positron excess. In particular, we check if the anomalous bump in the cosmic ray positron to electron ratio at 10 GeV can be explained with the "decay" of virtual $Z^o$ bosons produced when the neutralinos annihilate. We find that the low energy behaviour of our prediction fits well the existing data. Assuming the neutralinos annihilate primarily in the distant density concentration in the Galaxy and allowing combination of older, diffused positrons with young free-streaming ones produces a fit which is not satisfactory on its own but is significantly better than the one obtained with homogeneous injection.

  17. Positron Annihilation in the Bipositronium Ps2

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Frolov, Alexei M.

    2005-07-01

    The electron-positron-pair annihilation in the bipositronium Ps2 is considered. In particular, the two-, three-, one- and zero-photon annihilation rates are determined to high accuracy. The corresponding analytical expressions are also presented. In addition, a large number of bound state properties have been determined for this system.

  18. Gas Permeations Studied by Positron Annihilation

    Science.gov (United States)

    Yuan, Jen-Pwu; Cao, Huimin; Jean, X.; Yang, Y. C.

    1997-03-01

    The hole volumes and fractions of PC and PET polymers are measured by positron annihilation lifetime spectroscopy. Direct correlations between the measured hole properties and gas permeabilities are observed. Applications of positron annihilation spectroscopy to study gas transport and separation of polymeric materials will be discussed.

  19. Fermionic Semi-Annihilating Dark Matter

    CERN Document Server

    Cai, Yi

    2015-01-01

    Semi-annihilation is a generic feature of dark matter theories with symmetries larger than Z2. We investigate two examples with multi-component dark sectors comprised of an SU(2)L singlet or triplet fermion besides a scalar singlet. These are respectively the minimal fermionic semi-annihilating model, and the minimal case for a gauge-charged fermion. We study the relevant dark matter phenomenology, including the interplay of semi-annihilation and the Sommerfeld effect. We demonstrate that semi-annihilation in the singlet model can explain the gamma ray excess from the galactic center. For the triplet model we scan the parameter space, and explore how signals and constraints are modified by semi-annihilation. We find that the entire region where the model comprises all the observed dark matter is accessible to current and planned direct and indirect searches.

  20. Skyrmion creation and annihilation by spin waves

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yizhou, E-mail: yliu062@ucr.edu; Yin, Gen; Lake, Roger K., E-mail: rlake@ece.ucr.edu [Department of Electrical and Computer Engineering, University of California, Riverside, California 92521 (United States); Zang, Jiadong [Department of Physics and Material Science Program, University of New Hampshire, Durham, New Hampshire 03824 (United States); Shi, Jing [Department of Physics and Astronomy, University of California, Riverside, California 92521 (United States)

    2015-10-12

    Single skyrmion creation and annihilation by spin waves in a crossbar geometry are theoretically analyzed. A critical spin-wave frequency is required both for the creation and the annihilation of a skyrmion. The minimum frequencies for creation and annihilation are similar, but the optimum frequency for creation is below the critical frequency for skyrmion annihilation. If a skyrmion already exists in the cross bar region, a spin wave below the critical frequency causes the skyrmion to circulate within the central region. A heat assisted creation process reduces the spin-wave frequency and amplitude required for creating a skyrmion. The effective field resulting from the Dzyaloshinskii-Moriya interaction and the emergent field of the skyrmion acting on the spin wave drive the creation and annihilation processes.

  1. Influence of Disassociation Probability on External Quantum Efficiency in Organic Electrophosphorescent Devices

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian-hua; OU YANG Jun; LI Xue-yong; LI Hong-jian

    2007-01-01

    An analytical model is presented to calculate the dissociation probability and the external quantum efficiency at high field in doped organic electrophosphorescence (EPH) devices. The charge recombination process and the triplet-triplet (T-T) annihilation process are taken into account in this model. The influence of the applied voltage and the device thickness on the dissociation probability, and of the current density and the device thickness on the external quantum efficiency, is studied thoroughly by either including or ignoring the dissociation of excitons. It is found that the dissociation probability of excitons approaches 1 at high electric field, while the external EPH quantum efficiency is almost the same at low electric field. At high electric field there is a large discrepancy in the external EPH quantum efficiency between the cases that include and ignore exciton dissociation.

  2. AC quantum efficiency harmonic analysis of exciton annihilation in organic light emitting diodes (Presentation Recording)

    Science.gov (United States)

    Giebink, Noel C.

    2015-10-01

    Exciton annihilation processes impact both the lifetime and efficiency roll-off of organic light emitting diodes (OLEDs), however it is notoriously difficult to identify the dominant mode of annihilation in operating devices (exciton-exciton vs. exciton-charge carrier) and subsequently to disentangle its magnitude from competing roll-off processes such as charge imbalance. Here, we introduce a simple analytical method to directly identify and extract OLED annihilation rates from standard light-current-voltage (LIV) measurement data. The foundation of this approach lies in a frequency domain EQE analysis and is most easily understood in analogy to impedance spectroscopy, where in this case both the current (J) and electroluminescence intensity (L) are measured using a lock-in amplifier at different harmonics of the sinusoidal dither superimposed on the DC device bias. In the presence of annihilation, the relationship between recombination current and light output (proportional to exciton density) becomes nonlinear, thereby mixing the different EQE harmonics in a manner that depends uniquely on the type and magnitude of annihilation. We derive simple expressions to extract different annihilation rate coefficients and apply this technique to a variety of OLEDs. For example, in devices dominated by triplet-triplet annihilation, the annihilation rate coefficient, K_TT, is obtained directly from the linear slope that results from plotting EQE_DC-EQE_1ω versus L_DC (2EQE_1ω-EQE_DC). We go on to show that, in certain cases it is sufficient to calculate EQE_1ω directly from the slope of the DC light versus current curve [i.e. via (dL_DC)/(dJ_DC )], thus enabling this analysis to be conducted solely from common LIV measurement data.
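
    As a rough numerical illustration of the slope extraction described above (with entirely hypothetical lock-in data, not measurements from the paper), the fit can be done in a few lines:

    import numpy as np

    # Hypothetical DC and first-harmonic external quantum efficiencies and DC luminance.
    eqe_dc = np.array([0.200, 0.196, 0.190, 0.183, 0.175])
    eqe_1w = np.array([0.199, 0.191, 0.180, 0.168, 0.155])
    l_dc   = np.array([ 10.0,  50.0, 150.0, 300.0, 500.0])   # arbitrary luminance units

    # Per the abstract: for TTA-dominated devices, plot (EQE_DC - EQE_1w) against
    # L_DC * (2*EQE_1w - EQE_DC); the linear slope gives the annihilation coefficient K_TT
    # (any device-specific prefactor is not spelled out in the record).
    x = l_dc * (2.0 * eqe_1w - eqe_dc)
    y = eqe_dc - eqe_1w
    slope, intercept = np.polyfit(x, y, 1)
    print(f"slope (proportional to K_TT): {slope:.3e}, intercept: {intercept:.3e}")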

  3. Size-dependent momentum smearing effect of positron annihilation radiation in embedded nano Cu clusters

    International Nuclear Information System (INIS)

    Momentum density distributions determined by the analysis of positron annihilation radiation in nano Cu clusters embedded in iron were studied by using a first-principles method. A momentum smearing effect originating from the positron localization in the embedded clusters is observed. The smearing effect is found to scale linearly with the cube root of the cluster volume, indicating that the momentum density techniques of positron annihilation can be employed to explore the atomic-scale microscopic structures of a variety of impurity aggregations in materials.

  4. Positron annihilation in PI189 and PI304 polyimides

    Energy Technology Data Exchange (ETDEWEB)

    Shantarovich, V.P. [N.N. Semenov Institute of Chemical Physics, Russian Academy of Sciences, ul Kosygina 4 st., 119334 Moscow (Russian Federation)]. E-mail: shant@center.chph.ras.ru; Suzuki, T. [High Energy Accelerator Research Organization KEK, Tsukuba 305-0801 (Japan); He, C. [High Energy Accelerator Research Organization KEK, Tsukuba 305-0801 (Japan); Ito, Y. [Research Center for Nuclear Science and Technology, The University of Tokyo, Tokai, Ibaraki 319-1106 (Japan); Yampolskii, Y.P. [A.V. Topchiev Institute of Petrochemical Synthesis, Russian Academy of Sciences, 29 Leninskii Pr., 117912 Moscow (Russian Federation); Alentiev, A.Yu. [A.V. Topchiev Institute of Petrochemical Synthesis, Russian Academy of Sciences, 29 Leninskii Pr., 117912 Moscow (Russian Federation)

    2005-05-01

    The temperature dependence of the lifetime τ3 and intensity I3 of the long-lived ortho-positronium (o-Ps) component was measured for two polyimides, PI189 and PI304, both below and above the glass-transition temperatures Tg of these polymers. The first heating runs of the experiments revealed anomalous, irregular behavior of the lifetime τ3 in both PI in the vicinity of (below) the glass-transition temperature. The effect was similar to that discussed recently for a number of PI. However, on the cooling stage of the first cycle and on the heating run of the second cycle, such irregularities disappeared. These results show that the anomalous behavior of the annihilation characteristics of o-Ps in our PI samples was due not to anomalous behavior of the PI structure itself close to the Tg point (not to a specific phase transition), but to the removal of residual solvent in the vicinity of Tg during the first heating cycle. Different approaches to estimating the specific hole volume and the hole number density N on the basis of positron annihilation data are discussed. The final estimate for PI189 gives a fractional free volume h = 3.35% and N = 0.44x10^27 m^-3. The effects of positron trapping by polar -CO groups on the annihilation characteristics of PI and on the obtained value of N are also considered.

  5. Quantum probability

    CERN Document Server

    Gudder, Stanley P

    2014-01-01

    Quantum probability is a subtle blend of quantum mechanics and classical probability theory. Its important ideas can be traced to the pioneering work of Richard Feynman in his path integral formalism. Only recently have the concept and ideas of quantum probability been presented in a rigorous axiomatic framework, and this book provides a coherent and comprehensive exposition of this approach. It gives a unified treatment of operational statistics, generalized measure theory and the path integral formalism that can only be found in scattered research articles. The first two chapters survey the ne

  6. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Levy processes, Gerber–Shiu functions and dependence.

  7. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  8. Nonabelian dark matter with resonant annihilation

    International Nuclear Information System (INIS)

    We construct a model based on an extra gauge symmetry, SU(2)X×U(1)B−L, which can provide gauge bosons to serve as weakly-interacting massive particle dark matter. The stability of the dark matter is naturally guaranteed by a discrete Z2 symmetry that is a subgroup of SU(2)X. The dark matter interacts with standard model fermions by exchanging gauge bosons which are linear combinations of SU(2)X×U(1)B−L gauge bosons. With the appropriate choice of representation for the new scalar multiplet whose vacuum expectation value spontaneously breaks the SU(2)X symmetry, the relation between the new gauge boson masses can naturally lead to resonant pair annihilation of the dark matter. After exploring the parameter space of the new gauge couplings subject to constraints from collider data and the observed relic density, we use the results to evaluate the cross section of the dark matter scattering off nucleons and compare it with data from the latest direct detection experiments. We find allowed parameter regions that can be probed by future direct searches for dark matter and LHC searches for new particles

  9. Muon Fluxes From Dark Matter Annihilation

    CERN Document Server

    Erkoca, Arif Emre; Sarcevic, Ina

    2009-01-01

    We calculate the muon flux from annihilation of the dark matter in the core of the Sun, in the core of the Earth and from cosmic diffuse neutrinos produced in dark matter annihilation in the halos. We consider model-independent direct neutrino production and secondary neutrino production from the decay of taus produced in the annihilation of dark matter. We illustrate how muon energy distribution from dark matter annihilation has a very different shape than muon flux from atmospheric neutrinos. We consider both the upward muon flux, when muons are created in the rock below the detector, and the contained flux when muons are created in the (ice) detector. We contrast our results to the ones previously obtained in the literature, illustrating the importance of properly treating muon propagation and energy loss. We comment on neutrino flavor dependence and their detection.

  10. Compton Scattering, Pair Annihilation and Pair Production in a Plasma

    OpenAIRE

    Krishan, Vinod

    1999-01-01

    The square of the four momentum of a photon in vacuum is zero. However, in an unmagnetized plasma it is equal to the square of the plasma frequency. Further, the electron-photon coupling vertex is modified in a plasma to include the effect of the plasma medium. I calculate the cross sections of the three processes - the Compton scattering, electron-positron pair annihilation and production in a plasma. At high plasma densities, the cross sections are found to change significantly. Such high p...

  11. Risk Probabilities

    DEFF Research Database (Denmark)

    Rojas-Nandayapa, Leonardo

    Tail probabilities of sums of heavy-tailed random variables are of major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory and Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think of insurance companies facing losses due to natural disasters, banks seeking protection against huge losses, failures in expensive and sophisticated systems or loss of valuable information in electronic systems. The main difficulty when dealing with this kind of problems is the unavailability of a closed...

  12. The dark matter annihilation boost from low-temperature reheating

    Science.gov (United States)

    Erickcek, Adrienne L.

    2015-11-01

    The evolution of the Universe between inflation and the onset of big bang nucleosynthesis is difficult to probe and largely unconstrained. This ignorance profoundly limits our understanding of dark matter: we cannot calculate its thermal relic abundance without knowing when the Universe became radiation dominated. Fortunately, small-scale density perturbations provide a probe of the early Universe that could break this degeneracy. If dark matter is a thermal relic, density perturbations that enter the horizon during an early matter-dominated era grow linearly with the scale factor prior to reheating. The resulting abundance of substructure boosts the annihilation rate by several orders of magnitude, which can compensate for the smaller annihilation cross sections that are required to generate the observed dark matter density in these scenarios. In particular, thermal relics with masses less than a TeV that thermally and kinetically decouple prior to reheating may already be ruled out by Fermi-LAT observations of dwarf spheroidal galaxies. Although these constraints are subject to uncertainties regarding the internal structure of the microhalos that form from the enhanced perturbations, they open up the possibility of using gamma-ray observations to learn about the reheating of the Universe.

  13. Particle creation and annihilation at interior boundaries: One-dimensional models

    CERN Document Server

    Keppeler, Stefan

    2015-01-01

    We describe creation and annihilation of particles at external sources in one spatial dimension in terms of interior-boundary conditions (IBCs). We derive explicit solutions for spectra, (generalised) eigenfunctions, as well as Green functions, spectral determinants, and integrated spectral densities. Moreover, we introduce a quantum graph version of IBC-Hamiltonians.

  14. Particle creation and annihilation at interior boundaries: one-dimensional models

    Science.gov (United States)

    Keppeler, Stefan; Sieber, Martin

    2016-03-01

    We describe creation and annihilation of particles at external sources in one spatial dimension in terms of interior-boundary conditions (IBCs). We derive explicit solutions for spectra, (generalised) eigenfunctions, as well as Green functions, spectral determinants, and integrated spectral densities. Moreover, we introduce a quantum graph version of IBC-Hamiltonians.

  15. Positron annihilation lifetime characterization of oxygen ion irradiated rutile TiO2

    Science.gov (United States)

    Luitel, Homnath; Sarkar, A.; Chakrabarti, Mahuya; Chattopadhyay, S.; Asokan, K.; Sanyal, D.

    2016-07-01

    Ferromagnetic ordering at room temperature has been induced in the rutile phase of a TiO2 polycrystalline sample by O ion irradiation. The defects induced in the rutile TiO2 sample by 96 MeV O ions have been characterized by positron annihilation spectroscopic techniques. The positron annihilation results indicate the formation of cation vacancies (VTi, Ti vacancies) in these irradiated TiO2 samples. Ab initio density functional theory calculations indicate that in TiO2 a magnetic moment can be induced by creating either Ti or O vacancies.

  16. Decaying vs annihilating dark matter in light of a tentative gamma-ray line

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, Wilfried; Garny, Mathias

    2012-06-15

    Recently reported tentative evidence for a gamma-ray line in the Fermi-LAT data is of great potential interest for identifying the nature of dark matter. We compare the implications for decaying and annihilating dark matter taking the constraints from continuum gamma-rays, antiproton flux and morphology of the excess into account. We find that higgsino and wino dark matter are excluded, also for nonthermal production. Generically, the continuum gamma-ray flux severely constrains annihilating dark matter. Consistency of decaying dark matter with the spatial distribution of the Fermi-LAT excess would require an enhancement of the dark matter density near the Galactic center.

  17. Application of tests of goodness of fit in determining the probability density function for spacing of steel sets in tunnel support system

    Directory of Open Access Journals (Sweden)

    Farnoosh Basaligheh

    2015-12-01

    Full Text Available One of the conventional methods for temporary support of tunnels is to use steel sets with shotcrete. The nature of a temporary support system demands a quick installation of its structures. As a result, the spacing between steel sets is not a fixed amount and it can be considered as a random variable. Hence, in the reliability analysis of these types of structures, the selection of an appropriate probability distribution function of spacing of steel sets is essential. In the present paper, the distances between steel sets are collected from an under-construction tunnel and the collected data is used to suggest a proper Probability Distribution Function (PDF for the spacing of steel sets. The tunnel has two different excavation sections. In this regard, different distribution functions were investigated and three common tests of goodness of fit were used for evaluation of each function for each excavation section. Results from all three methods indicate that the Wakeby distribution function can be suggested as the proper PDF for spacing between the steel sets. It is also noted that, although the probability distribution function for two different tunnel sections is the same, the parameters of PDF for the individual sections are different from each other.

  18. Revisiting the constraints on annihilating dark matter by the radio observational data of M31

    Science.gov (United States)

    Chan, Man Ho

    2016-07-01

    Recent gamma-ray observations and radio observations put strong constraints on the parameters of dark matter annihilation. In this article, we derive new constraints for six standard model annihilation channels by using the recent radio data of the M31 galaxy. The new constraints are generally tighter than the constraints obtained from 6 years of Fermi Large Area Telescope gamma-ray observations of the Milky Way dwarf spheroidal satellite galaxies. The conservative lower limits of dark matter mass annihilating via the b b̄, μ+μ- and τ+τ- channels are 90, 90 and 80 GeV respectively with the canonical thermal relic cross section and the Burkert profile being the dark matter density profile. Hence, our results do not favor the most popular models of the dark matter interpretation of the Milky Way GeV gamma-ray excess.

  19. Revisiting the constraints on annihilating dark matter by radio observational data of M31

    CERN Document Server

    Chan, Man Ho

    2016-01-01

    Recent gamma-ray observations and radio observations put strong constraints on the parameters of dark matter annihilation. In this article, we derive new constraints for six standard model annihilation channels by using the recent radio data of M31 galaxy. The new constraints are generally tighter than the constraints obtained from 6 years of Fermi Large Area Telescope gamma-ray observations of the Milky Way dwarf spheroidal satellite galaxies. The conservative lower limits of dark matter mass annihilating via $b\\bar{b}$, $\\mu^+\\mu^-$ and $\\tau^+\\tau^-$ channels are 90 GeV, 90 GeV and 80 GeV respectively with the canonical thermal relic cross section and the Burkert profile being the dark matter density profile. Hence, our results do not favor the most popular models of the dark matter interpretation of the Milky Way GeV gamma-ray excess.

  20. Observational Constraints of 30–40 GeV Dark Matter Annihilation in Galaxy Clusters

    Directory of Open Access Journals (Sweden)

    Man Ho Chan

    2016-01-01

    Full Text Available Recently, it has been shown that the annihilation of 30–40 GeV dark matter particles through the b b̄ channel can satisfactorily explain the excess GeV gamma-ray spectrum near the Galactic Center. In this paper, we apply the above model to galaxy clusters and use the latest upper limits of gamma-ray flux derived from Fermi-LAT data to obtain an upper bound of the annihilation cross section of dark matter. By considering the extended density profiles and the cosmic ray profile models of 49 galaxy clusters, the upper bound of the annihilation cross section can be further tightened to σv ≤ 9×10^-26 cm^3 s^-1. This result is consistent with the one obtained from the data near the Galactic Center.

  1. SUSY Implications from WIMP Annihilation into Scalars at the Galactic Center

    CERN Document Server

    Medina, Anibal D

    2015-01-01

    An excess in $\\gamma$-rays emanating from the galactic centre has recently been observed in the Fermi-LAT data. We investigate the new exciting possibility of fitting the signal spectrum by dark matter annihilating dominantly to a Higgs-pseudoscalar pair. We show that the fit to the $\\gamma$-ray excess for the Higgs-pseudoscalar channel can be just as good as for annihilation into bottom-quark pairs. This channel arises naturally in a full model such as the next-to-minimal supersymmetric Standard Model (NMSSM) and we find regions where dark matter relic density, the $\\gamma$-ray signal and other experimental constraints, can all be satisfied simultaneously. Annihilation into scalar pairs allows for the possibility of detecting the Higgs or pseudoscalar decay into two photons, providing a smoking-gun signal of the model.

  2. SUSY-QCD corrections to the (co)annihilation of neutralino dark matter within the MSSM

    Energy Technology Data Exchange (ETDEWEB)

    Meinecke, Moritz

    2015-06-15

    Based on experimental observations, it is nowadays assumed that a large component of the matter content in the universe is comprised of so-called cold dark matter. Furthermore, the latest measurements of the temperature fluctuations of the cosmic microwave background provided an estimate of the dark matter relic density with a measurement error of one percent (at the experimental 1σ level). The lightest neutralino χ^0_1, a particle which falls under the phenomenologically interesting category of weakly interacting massive particles, is a viable dark matter candidate for many supersymmetric (SUSY) models whose relic density Ω_{χ^0_1} happens to lie quite naturally within the experimentally favored ballpark of dark matter. The high experimental precision can be used to constrain the SUSY parameter space to its cosmologically favored regions and to pin down phenomenologically interesting scenarios. However, to actually benefit from this progress on the experimental side it is also mandatory to minimize the theoretical uncertainties. An important quantity within the calculation of the neutralino relic density is the thermally averaged sum over different annihilation and coannihilation cross sections of the neutralino and further supersymmetric particles. It is now assumed and also partly proven that these cross sections can be subject to large loop corrections which can even shift the associated Ω_{χ^0_1} by a factor larger than the current experimental error. However, most of these corrections are yet unknown. In this thesis, we calculate higher-order corrections for some of the most important (co)annihilation channels within the framework of the R-parity conserving Minimal Supersymmetric Standard Model (MSSM) and investigate their impact on the final neutralino relic density Ω_{χ^0_1}. More precisely, this work provides the full O(α_s) corrections of supersymmetric quantum chromodynamics (SUSY

  3. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    Science.gov (United States)

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. PMID:27154008
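
    A toy version of the inversion idea (not the authors' PDE-based implementation) can be written down for a two-state channel: choose rates so that the model's predicted open-state probability matches a target curve derived from (pseudo) data, with the mismatch encoded in a cost function.

    import numpy as np
    from scipy.optimize import minimize

    def open_prob(t, k_co, k_oc, p0=0.0):
        """Open-state probability of a two-state channel C <-> O,
        from dp_o/dt = k_co * (1 - p_o) - k_oc * p_o."""
        k = k_co + k_oc
        p_inf = k_co / k
        return p_inf + (p0 - p_inf) * np.exp(-k * t)

    # Pseudo-experimental target generated with hypothetical "true" rates plus noise.
    t = np.linspace(0.0, 0.05, 200)                  # seconds
    rng = np.random.default_rng(1)
    p_data = open_prob(t, k_co=120.0, k_oc=80.0) + rng.normal(0.0, 0.01, t.size)

    def cost(log_rates):
        """Mean-squared mismatch between the model solution and the (pseudo) data."""
        k_co, k_oc = np.exp(log_rates)               # keep the rates positive
        return np.mean((open_prob(t, k_co, k_oc) - p_data) ** 2)

    res = minimize(cost, x0=np.log([10.0, 10.0]), method="Nelder-Mead")
    print("recovered rates (1/s):", np.exp(res.x).round(1))

    In this toy setting, identifiability of the rates shows up as the sharpness of the cost surface around its minimum, a rough analogue of the property the authors probe in the full PDE formulation.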

  4. Exact solutions for steady reconnective annihilation revisited

    Science.gov (United States)

    Titov, Vyacheslav S.; Tassi, Emanuele; Hornig, Gunnar

    2004-10-01

    This work complements the previous studies on steady reconnective magnetic annihilation in three different geometries: the two-dimensional Cartesian and polar ones and the three-dimensional (3D) cylindrical one. A special class of diffusive solutions is found analytically in explicit form for all of the three geometries. In the 3D case it is extended to a much wider class of exact solutions describing reconnective magnetic annihilation at the separatrix spine line of a magnetic null point. One of the obtained solutions provides an explicit expression for the Craig-Fabling solution. It is also identified which of the steady flow regimes found are dynamically accessible.

  5. A compact positron annihilation lifetime spectrometer

    Institute of Scientific and Technical Information of China (English)

    LI Dao-Wu; LIU Jun-Hui; ZHANG Zhi-Ming; WANG Bao-Yi; ZHANG Tian-Bao; WEI Long

    2011-01-01

    Using LYSO scintillators coupled to HAMAMATSU R9800 (a fast photomultiplier) to form small-size γ-ray detectors, a compact lifetime spectrometer has been built for positron annihilation experiments. A system time resolution of FWHM = 193 ps and a coincidence counting rate of ~8 cps/μCi were achieved. A lifetime value of 219±1 ps for positron annihilation in well-annealed Si was measured, which is in agreement with the typical values published in the literature.

  6. Electronic Structure of Rare-Earth Metals. II. Positron Annihilation

    DEFF Research Database (Denmark)

    Williams, R. W.; Mackintosh, Allan

    1968-01-01

    The angular correlation of the photons emitted when positrons annihilate with electrons has been studied in single crystals of the rare-earth metals Y, Gd, Tb, Dy, Ho, and Er, and in a single crystal of an equiatomic alloy of Ho and Er. A comparison of the results for Y with the calculations of Loucks shows that the independent-particle model gives a good first approximation to the angular distribution, although correlation effects probably smear out some of the structure. The angular distributions from the heavy rare-earth metals are very similar to that from Y and can be understood qualitatively in terms of the relativistic augmented-plane-wave calculations by Keeton and Loucks. The angular distributions in the c direction in the paramagnetic phases are characterized by a rapid drop at low angles followed by a hump, and these features are associated with rather flat regions of Fermi

  7. Nanometer cavities studied by positron annihilation

    International Nuclear Information System (INIS)

    Positronium (Ps) is trapped in cavities in insulating solids, and the lifetime of ortho Ps is determined by the size of the cavity. The information on the properties of the cavities obtained by use of the standard slow positron beam and the 'normal' positron annihilation techniques is compared for several selected cases. (author)

  8. A positron annihilation study of hydrated DNA

    DEFF Research Database (Denmark)

    Warman, J. M.; Eldrup, Morten Mostgaard

    1986-01-01

    Positron annihilation measurements are reported for hydrated DNA as a function of water content and as a function of temperature (20 to -180 °C) for samples containing 10 and 50% wt of water. The ortho-positronium mean lifetime and its intensity show distinct variations with the degree

  9. A compact positron annihilation lifetime spectrometer

    Institute of Scientific and Technical Information of China (English)

    李道武; 刘军辉; 章志明; 王宝义; 张天保; 魏龙

    2011-01-01

    Using LYSO scintillator coupled on HAMAMATSU R9800 (a fast photomultiplier) to form the small size γ-ray detectors, a compact lifetime spectrometer has been built for the positron annihilation experiments. The system time resolution FWHM=193 ps and the co

  10. Electrochemical and positron annihilation studies of semicarbazones and thiosemicarbazones derived from ferrocene

    International Nuclear Information System (INIS)

    A series of six ferrocene derivatives containing a semicarbazone or thiosemicarbazone side chain was investigated by cyclic voltammetry and positron annihilation lifetime measurements. Both the redox and the electron capture processes took place on the Fe atom. Correlations between the two methods were proposed, taking into account the substituents on the side chain of the compounds, their redox potentials and the probabilities of ortho-positronium (o-Ps) formation. (author)

  11. Electrochemical and positron annihilation studies of semicarbazones and thiosemicarbazones derived from ferrocene

    Energy Technology Data Exchange (ETDEWEB)

    Graudo, Jose Eugenio J.C. [Juiz de Fora Univ., MG (Brazil). Dept. de Quimica; Filgueiras, Carlos A.L. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Inst. de Quimica. Dept. de Quimica Inorganica; Marques-Netto, Antonio [Minas Gerais Univ., Belo Horizonte, MG (Brazil). Dept. de Quimica; Batista, Alzir A. [Sao Carlos Univ., SP (Brazil). Dept. de Quimica

    2000-06-01

    A series of six ferrocene derivatives containing a semicarbazone or thiosemicarbazone side chain was investigated by cyclic voltammetry and positron annihilation lifetime measurements. Both the redox and the electron capture processes took place on the Fe atom. Correlations between the two methods were proposed, taking into account the substituents on the side chain of the compounds, their redox potentials and the probabilities of ortho-positronium (o-Ps) formation. (author)

  12. Correlator Bank Detection of GW chirps. False-Alarm Probability, Template Density and Thresholds: Behind and Beyond the Minimal-Match Issue

    CERN Document Server

    Croce, R P; Longo, M J; Marano, S; Matta, V; Pierro, V; Pinto, I M; Demma, Th.

    2004-01-01

    The general problem of computing the false-alarm rate vs. detection-threshold relationship for a bank of correlators is addressed, in the context of maximum-likelihood detection of gravitational waves, with specific reference to chirps from coalescing binary systems. Accurate (lower-bound) approximants for the cumulative distribution of the whole-bank supremum are deduced from a class of Bonferroni-type inequalities. The asymptotic properties of the cumulative distribution are obtained, in the limit where the number of correlators goes to infinity. The validity of numerical simulations made on small-size banks is extended to banks of any size, via a gaussian-correlation inequality. The result is used to estimate the optimum template density, yielding the best tradeoff between computational cost and detection efficiency, in terms of undetected potentially observable sources at a prescribed false-alarm level, for the simplest case of Newtonian chirps.
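
    The simplest member of the Bonferroni family is the first-order (union) bound, which already gives a lower bound on the cumulative distribution of the whole-bank supremum. The toy comparison below (bank size, correlation and thresholds are illustrative, not taken from the paper) checks it against a Monte Carlo estimate for correlated Gaussian correlator outputs under noise only.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_templates, rho = 50, 0.3                   # toy bank size and common correlation

    # Correlated standard-normal correlator outputs under the noise-only hypothesis.
    cov = rho * np.ones((n_templates, n_templates)) + (1.0 - rho) * np.eye(n_templates)
    x = rng.multivariate_normal(np.zeros(n_templates), cov, size=200_000)
    sup = np.abs(x).max(axis=1)

    for t in (3.0, 3.5, 4.0):
        mc = (sup <= t).mean()                                        # Monte Carlo CDF of the supremum
        bound = max(0.0, 1.0 - 2.0 * n_templates * stats.norm.sf(t))  # first-order Bonferroni lower bound
        print(f"threshold {t}:  MC = {mc:.4f}   lower bound = {bound:.4f}")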

  13. Probability density fittings of corrosion test-data: Implications on C6H15NO3 effectiveness on concrete steel-rebar corrosion

    Indian Academy of Sciences (India)

    Joshua Olusegun Okeniyi; Idemudia Joshua Ambrose; Stanley Okechukwu Okpala; Oluwafemi Michael Omoniyi; Isaac Oluwaseun Oladele; Cleophas Akintoye Loto; Patricia Abimbola Idowu Popoola

    2014-06-01

    In this study, corrosion test-data of steel-rebar in concrete were subjected to fittings of the Normal, Gumbel and Weibull probability distribution functions. This was done to investigate the suitability of the fitted test-data, by these distributions, for modelling the effectiveness of C6H15NO3, triethanolamine (TEA), admixtures on the corrosion of steel-rebar in concrete in NaCl and in H2SO4 test-media. For this, six different concentrations of TEA were admixed in replicates of steel-reinforced concrete samples which were immersed in the saline/marine and the microbial/industrial simulating test-environments for seventy-five days. Distribution fittings of the non-destructive electrochemical measurements were then subjected to the Kolmogorov–Smirnov goodness-of-fit statistics and to analysis-of-variance modelling for studying the compatibility of the test-data with the fittings and for testing significance. Although all fittings of the test-data followed similar trends of significance testing, the corrosion-rate test-data followed the Weibull distribution more closely than the Normal and the Gumbel distributions, thus supporting use of the Weibull fittings for modelling effectiveness. The effectiveness models of rebar corrosion based on these identified 0.083% TEA as having optimal inhibition efficiency, η = 72.17 ± 10.68%, in the NaCl medium, while 0.667% TEA was the only admixture with positive effectiveness, η = 56.45 ± 15.85%, in the H2SO4 medium. These results bear implications on the concentrations of TEA required for effective corrosion protection of concrete steel-rebar in saline/marine and in industrial/microbial environments.
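
    The fitting-and-selection step described here can be reproduced generically with scipy; the values below are synthetic stand-ins for the corrosion-rate measurements, which are not reproduced in the record.

    from scipy import stats

    # Synthetic stand-in for replicate corrosion-rate measurements (illustrative only).
    data = stats.weibull_min(c=1.6, scale=0.12).rvs(size=60, random_state=3)

    candidates = {
        "Normal":  stats.norm,
        "Gumbel":  stats.gumbel_r,
        "Weibull": stats.weibull_min,
    }

    for name, dist in candidates.items():
        params = dist.fit(data)                                 # maximum-likelihood fit
        ks_stat, p_value = stats.kstest(data, dist.name, args=params)
        print(f"{name:8s} KS statistic = {ks_stat:.3f}   p-value = {p_value:.3f}")

    # The candidate with the smallest KS statistic (largest p-value) is retained,
    # mirroring the Weibull choice reported above.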

  14. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look out etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds... probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving...

  15. Neutrino flavor ratios as diagnostic of solar WIMP annihilation

    CERN Document Server

    Lehnert, Ralf

    2007-01-01

    We consider the neutrino (and antineutrino) flavors arriving at Earth for neutrinos produced in the annihilation of weakly interacting massive particles (WIMPs) in the Sun's core. Solar-matter effects on the flavor propagation of the resulting ≳ GeV neutrinos are studied analytically within a density-matrix formalism. Matter effects, including mass-state level-crossings, influence the flavor fluxes considerably. The exposition herein is somewhat pedagogical, in that it starts with adiabatic evolution of single flavors from the Sun's center, with $\theta_{13}$ set to zero, and progresses to fully realistic processing of the flavor ratios expected in WIMP decay, from the Sun's core to the Earth. In the fully realistic calculation, non-adiabatic level-crossing is included, as are possible nonzero values for $\theta_{13}$ and the CP-violating phase $\delta$. Due to resonance enhancement in matter, nonzero values of $\theta_{13}$ even smaller than a degree can noticeably affect flavor propagation. Both normal...

  16. Positron annihilation: the ACAR method measures electron momentum distribution

    International Nuclear Information System (INIS)

    When a positron annihilates with an electron, the energy is dissipated preferentially in the form of antiparallel 0.5 MeV γ-rays, whose angle and Doppler shift correlate with the electron momentum density. The Geneva group has built a system which permits the precise and efficient measurement of the ACAR radiation. In ordinary metals, where independent-particle methods (IPM) apply, there is often satisfactory agreement between measured and calculated two-particle momentum distributions (TPMD). The same is true for the Fermi surfaces which can be constructed from the TPMD. The effect of correlations can be handled as a perturbation. In the case of oxides we have so far found no convincing agreement between theory and experiment. We are working to improve the apparatus, experiments and theory, and hope also to understand our results in high-temperature superconductors (high-Tc SC).

  17. Dark matter annihilation near a naked singularity

    CERN Document Server

    Patil, Mandar

    2011-01-01

    We investigate here dark matter annihilation near a Kerr naked singularity. We show that when dark matter particles collide and annihilate in the vicinity of the singularity, the escape fraction to infinity of the particles produced is much larger, at least 10^2 - 10^3 times the corresponding black hole values. As high energy collisions are generically possible near a naked singularity, this provides an excellent environment for efficient conversion of dark matter into ordinary standard model particles. If the center of the galaxy harbored such a naked singularity, it follows that the observed emergent flux of particles with energy comparable to the mass of the dark matter particles would be much larger than in the black hole case, thus providing an intriguing observational test of the nature of the galactic center.

  18. Vector dark matter annihilation with internal bremsstrahlung

    CERN Document Server

    Bambhaniya, Gulab; Marfatia, Danny; Nayak, Alekha C; Tomar, Gaurav

    2016-01-01

    We consider scenarios in which the annihilation of self-conjugate spin-1 dark matter to a Standard Model fermion-antifermion final state is chirality suppressed, but where this suppression can be lifted by the emission of an additional photon via internal bremsstrahlung. We find that this scenario can only arise if the initial dark matter state is polarized, which can occur in the context of self-interacting dark matter. In particular, this is possible if the dark matter pair forms a bound state that decays to its ground state before the constituents annihilate. We show that the shape of the resulting photon spectrum is the same as for self-conjugate spin-0 and spin-1/2 dark matter, but the normalization is less heavily suppressed in the limit of heavy mediators.

  19. Shocking Signals of Dark Matter Annihilation

    CERN Document Server

    Davis, Jonathan H; Boehm, Celine; Kotera, Kumiko; Norman, Colin

    2015-01-01

    We examine whether charged particles injected by self-annihilating Dark Matter into regions undergoing Diffuse Shock Acceleration (DSA) can be accelerated to high energies. We consider three astrophysical sites where shock acceleration is supposed to occur, namely the Galactic Centre, galaxy clusters and Active Galactic Nuclei (AGN). For the Milky Way, we find that the acceleration of cosmic rays injected by dark matter could lead to a bump in the cosmic ray spectrum provided that the product of the efficiency of the acceleration mechanism and the concentration of DM particles is high enough. Among the various acceleration sources that we consider (namely supernova remnants (SNRs), Fermi bubbles and AGN jets), we find that the Fermi bubbles are a potentially more efficient accelerator than SNRs. However both could in principle accelerate electrons and protons injected by dark matter to very high energies. At the extragalactic level, the acceleration of dark matter annihilation products could be responsible fo...

  20. Baryon production in e+e- annihilation

    International Nuclear Information System (INIS)

    The phenomenology of baryon production in high energy e+e- annihilation is described. Much can be understood in terms of mass effects. Comparisons with the rates for different flavours and spins, with momentum and transverse momentum spectra and with particle correlations are used to confront models. Diquark models give good descriptions, except for the on/off Υ(1s) rates. Areas for experimental and theoretical development are indicated. (author)

  1. Searching for Dark Matter Annihilation in the Smith High-Velocity Cloud

    Science.gov (United States)

    Drlica-Wagner, Alex; Gomez-Vargas, German A.; Hewitt, John W.; Linden, Tim; Tibaldo, Luigi

    2014-01-01

    Recent observations suggest that some high-velocity clouds may be confined by massive dark matter halos. In particular, the proximity and proposed dark matter content of the Smith Cloud make it a tempting target for the indirect detection of dark matter annihilation. We argue that the Smith Cloud may be a better target than some Milky Way dwarf spheroidal satellite galaxies and use gamma-ray observations from the Fermi Large Area Telescope to search for a dark matter annihilation signal. No significant gamma-ray excess is found coincident with the Smith Cloud, and we set strong limits on the dark matter annihilation cross section assuming a spatially extended dark matter profile consistent with dynamical modeling of the Smith Cloud. Notably, these limits exclude the canonical thermal relic cross section (approximately 3 x 10^-26 cm^3 s^-1) for dark matter masses less than or approximately 30 GeV annihilating via the b b̄ or τ+τ− channels for certain assumptions of the dark matter density profile; however, uncertainties in the dark matter content of the Smith Cloud may significantly weaken these constraints.

  2. Searching for dark matter annihilation in the Smith high-velocity cloud

    International Nuclear Information System (INIS)

    Recent observations suggest that some high-velocity clouds may be confined by massive dark matter halos. In particular, the proximity and proposed dark matter content of the Smith Cloud make it a tempting target for the indirect detection of dark matter annihilation. We argue that the Smith Cloud may be a better target than some Milky Way dwarf spheroidal satellite galaxies and use γ-ray observations from the Fermi Large Area Telescope to search for a dark matter annihilation signal. No significant γ-ray excess is found coincident with the Smith Cloud, and we set strong limits on the dark matter annihilation cross section assuming a spatially extended dark matter profile consistent with dynamical modeling of the Smith Cloud. Notably, these limits exclude the canonical thermal relic cross section (∼ 3 × 10^-26 cm^3 s^-1) for dark matter masses ≲ 30 GeV annihilating via the b b̄ or τ+τ− channels for certain assumptions of the dark matter density profile; however, uncertainties in the dark matter content of the Smith Cloud may significantly weaken these constraints.

  3. Multi-messenger constraints and pressure from dark matter annihilation into e--e+ pairs

    International Nuclear Information System (INIS)

    Despite striking evidence for the existence of dark matter from astrophysical observations, dark matter has still escaped any direct or indirect detection until today. A proof of its existence and the revelation of its nature therefore remain among the most intriguing challenges of present-day cosmology and particle physics. The present work investigates the nature of dark matter through indirect signatures of dark matter annihilation into electron-positron pairs in two different ways: pressure from dark matter annihilation, and multi-messenger constraints on the dark matter annihilation cross-section. We focus on dark matter annihilation into electron-positron pairs and adopt a model-independent approach, where all the electrons and positrons are injected with the same initial energy E0 ∝ m_dm c^2. The propagation of these particles is determined by solving the diffusion-loss equation, considering inverse Compton scattering, synchrotron radiation, Coulomb collisions, bremsstrahlung, and ionization. The first part of this work, focusing on pressure from dark matter annihilation, demonstrates that dark matter annihilation into electron-positron pairs may affect the observed rotation curve by a significant amount. The injection rate of this calculation is constrained by INTEGRAL, Fermi, and H.E.S.S. data. The pressure of the relativistic electron-positron gas is computed from the energy spectrum predicted by the diffusion-loss equation. For values of the gas density and magnetic field that are representative of the Milky Way, it is estimated that the pressure gradients are strong enough to balance gravity in the central parts if E0 ... By comparing the predicted rotation curves with observations of dwarf and low surface brightness galaxies, we show that the pressure from dark matter annihilation may improve the agreement between theory and observations in some cases, but it also imposes severe constraints on the model parameters (most notably, the inner slope

  4. Propensity, Probability, and Quantum Theory

    Science.gov (United States)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  5. Positron annihilation spectroscopy in condensed matter

    International Nuclear Information System (INIS)

    The topic of positron annihilation spectroscopy (PAS) is the investigation of all aspects connected with the annihilation of slow positrons. This work deals with the application of PAS to different problems of materials science. The first chapter is an introduction to fundamental aspects of positron annihilation, as far as they are important to the different experimental techniques of PAS. Chapter 2 is concerned with the information obtainable by PAS. The three main experimental techniques of PAS (2γ-angular correlation, positron lifetime and Doppler broadening) are explained and problems in the application of these methods are discussed. Chapter 3 contains experimental results. According to the different fields of application it was subgrouped into: 1. Investigations of crystalline solids. Detection of structural defects in Cu, estimation of defect concentrations, study of the sintering of Cu powders as well as lattice defects in V3Si. 2. Chemical investigations. Structure of mixed solvents, selective solvation of mixed solvents by electrolytes as well as the micellization of sodium dodecylsulphate in aqueous solutions. 3. Investigations of glasses. Influence of heat treatment and production technology on the preorder of X-amorphous silica glass as well as preliminary measurements of pyrocerams. 4. Investigations of metallic glasses. Demonstration of the influence of production technology on parameters measurable by PAS. Chapter 4 contains a summary as well as an outlook of further applications of PAS to surface physics, medicine, biology and astrophysics. (author)

  6. Surfaces of colloidal PbSe nanocrystals probed by thin-film positron annihilation spectroscopy

    Directory of Open Access Journals (Sweden)

    L. Chai

    2013-08-01

    Full Text Available Positron annihilation lifetime spectroscopy and positron-electron momentum density (PEMD) studies on multilayers of PbSe nanocrystals (NCs), supported by transmission electron microscopy, show that positrons are strongly trapped at NC surfaces, where they provide insight into the surface composition and electronic structure of PbSe NCs. Our analysis indicates abundant annihilation of positrons with Se electrons at the NC surfaces and with O electrons of the oleic ligands bound to Pb ad-atoms at the NC surfaces, which demonstrates that positrons can be used as a sensitive probe to investigate the surface physics and chemistry of nanocrystals inside multilayers. Ab initio electronic structure calculations provide detailed insight into the valence and semi-core electron contributions to the positron-electron momentum density of PbSe. Both the lifetime and the PEMD are found to correlate with changes in the particle morphology characteristic of partial ligand removal.

  7. LOVO Electrons: The Special Electrons of Molecules in Positron Annihilation Process

    Science.gov (United States)

    Ma, Xiaoguang; Wang, Lizhi; Yang, Chuanlu

    2014-05-01

    The electrons in the lowest occupied valence orbital (LOVO) of molecules have been found to dominate the gamma-ray spectra in the positron-electron annihilation process. The mechanism of this phenomenon is revealed in the present work for the first time. Theoretical quantitative analyses are applied to all noble gas atoms and to the molecules CH4, O2, C6H6, and C6H14. On average, more than 70% of the LOVO electrons, but less than 30% of the highest occupied molecular orbital (HOMO) electrons, are distributed within the full width at half-maximum (FWHM) region of the momentum spectra. This indicates that the LOVO electrons are at least twice as probable as the HOMO electrons within this region. The predicted positron annihilation spectra are then generally dominated by the innermost LOVO electrons instead of the outermost HOMO electrons under the plane-wave approximation.
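    The FWHM fractions quoted above can be illustrated for idealized Gaussian momentum profiles: the sketch below (a generic illustration, not the authors' quantitative analysis) computes the fraction of a Gaussian orbital profile that falls inside the FWHM window of the total spectrum; narrower profiles concentrate a larger fraction inside the window.

```python
import math

def fraction_within_fwhm(sigma_orbital, fwhm_total):
    """Fraction of a zero-centred Gaussian profile (std = sigma_orbital)
    lying inside |p| <= fwhm_total / 2."""
    half_width = fwhm_total / 2.0
    return math.erf(half_width / (sigma_orbital * math.sqrt(2.0)))

# Two illustrative orbital widths compared against the same total FWHM
# (all numbers are placeholders in atomic units of momentum).
fwhm = 2.0
print(fraction_within_fwhm(0.7, fwhm))  # narrower profile: large fraction
print(fraction_within_fwhm(2.0, fwhm))  # broader profile: small fraction
```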

  8. Structure of water + acetonitrile solutions from acoustic and positron annihilation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Jerie, Kazimierz [Institute of Experimental Physics, University of WrocIaw, WrocIaw (Poland); Baranowski, Andrzej [Institute of Experimental Physics, University of WrocIaw, WrocIaw (Poland); Koziol, Stan [Waters Corp., 34 Maple St., Milford, MA 01757 (United States); Glinski, Jacek [Faculty of Chemistry, University of WrocIaw, WrocIaw (Poland)]. E-mail: glin@wchuwr.chem.uni.wroc.pl; Burakowski, Andrzej [Faculty of Chemistry, University of WrocIaw, WrocIaw (Poland)

    2005-03-14

    We report the results of acoustic and positron annihilation measurements in aqueous solutions of acetonitrile (CH{sub 3}CN). The hydrophobicity of the solute is discussed, as well as the possibility of describing the title system in terms of hydrophobic solvation. A new method of calculating the 'ideal' positronium lifetimes is proposed, based on the mean volume of the cavities (holes) in the liquid structure available for the positronium pseudoatom. The results are almost identical with those obtained from molar volumes using the concept of Levay et al. On the other hand, the same calculations performed using the 'bubble' model of annihilation yield very different results. It seems that either acetonitrile forms clathrate-like hydrates of untypical architecture with water, or it is too weak a hydrophobic agent to form clathrate-like hydrates at all. The former interpretation seems to be more probable.

  9. Structure of Aqueous Solutions of Acetonitrile Investigated by Acoustic and Positron Annihilation Measurements

    Science.gov (United States)

    Jerie, K.; Baranowski, A.; Koziol, S.; Burakowski, A.

    2005-05-01

    We report the results of acoustic and positron annihilation measurements in aqueous solutions of acetonitrile (CH3CN). The hydrophobicity of the solute is discussed, as well as the possibility of describing the title system in terms of hydrophobic solvation. The concept of Levay et al. for calculating the "ideal" positronium lifetimes is applied, based on the mean volume of the cavities (holes) in the liquid structure available for the positronium pseudoatom. The same calculations performed using the Tao model of annihilation yield very different results. It can be concluded that either acetonitrile forms clathrate-like hydrates of untypical architecture with water, or it is too weak a hydrophobic agent to form clathrate-like hydrates at all. The former interpretation seems to be more probable.

  10. Structure of water + acetonitrile solutions from acoustic and positron annihilation measurements

    Science.gov (United States)

    Jerie, Kazimierz; Baranowski, Andrzej; Koziol, Stan; Gliński, Jacek; Burakowski, Andrzej

    2005-03-01

    We report the results of acoustic and positron annihilation measurements in aqueous solutions of acetonitrile (CH3CN). The hydrophobicity of the solute is discussed, as well as the possibility of describing the title system in terms of hydrophobic solvation. A new method of calculating the "ideal" positronium lifetimes is proposed, based on the mean volume of the cavities (holes) in the liquid structure available for the positronium pseudoatom. The results are almost identical with those obtained from molar volumes using the concept of Levay et al. On the other hand, the same calculations performed using the "bubble" model of annihilation yield very different results. It seems that either acetonitrile forms clathrate-like hydrates of untypical architecture with water, or it is too weak a hydrophobic agent to form clathrate-like hydrates at all. The former interpretation seems to be more probable.
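    The cavity-volume approach referred to in the three entries above is commonly implemented with the Tao-Eldrup relation between the ortho-positronium pick-off lifetime and an effective spherical hole radius. The sketch below uses the standard parametrization (electron-layer thickness ΔR ≈ 0.166 nm) and is only an illustration of that relation, not the authors' specific procedure.

```python
import math

DELTA_R_NM = 0.166  # empirical electron-layer thickness in the Tao-Eldrup model

def ops_lifetime_ns(hole_radius_nm):
    """Ortho-positronium pick-off lifetime (ns) in a spherical hole of
    radius R (nm), standard Tao-Eldrup parametrization."""
    R = hole_radius_nm
    R0 = R + DELTA_R_NM
    pickoff = 1.0 - R / R0 + math.sin(2.0 * math.pi * R / R0) / (2.0 * math.pi)
    return 0.5 / pickoff

# Mean cavity volume (nm^3, illustrative) -> equivalent radius -> lifetime.
volume_nm3 = 0.10
radius_nm = (3.0 * volume_nm3 / (4.0 * math.pi)) ** (1.0 / 3.0)
print(ops_lifetime_ns(radius_nm))
```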

  11. Positron distribution contribution to changes in annihilation characteristics across Tc in high-temperature superconductors

    International Nuclear Information System (INIS)

    In this paper we present detailed calculations of the positron distribution in a host of high-temperature superconductors using the electron densities and potentials obtained from self-consistent orthogonalized linear combination of atomic orbitals band-structure calculations. The positron and electron densities obtained from the calculations are used to evaluate the electron-positron overlap function, which reveals that the major contribution to positron annihilation in these materials is from the oxygen atoms. A systematic correlation between the nature of this overlap function within the Cu-O cluster and the experimentally observed temperature dependence of the annihilation characteristics in the superconducting state is established: A decrease in positron annihilation parameters, below Tc, is observed when the overlap is predominantly from the apical oxygen atom, whereas an increase is observed if the overlap is predominantly from the planar oxygen atom. The observed temperature dependence of the positron parameters below Tc in all the high-Tc superconductors is understood in terms of an electron density transfer from the planar oxygen atoms to the apical oxygen atoms. These results are discussed in the light of charge-transfer models of superconductivity in the cuprate superconductors

  12. Weak annihilation cusp inside the dark matter spike about a black hole

    CERN Document Server

    Shapiro, Stuart L

    2016-01-01

    We reinvestigate the effect of annihilations on the distribution of collisionless dark matter (DM) in a spherical density spike around a massive black hole. We first construct a very simple, pedagogic, analytic model for an isotropic phase space distribution function that accounts for annihilation and reproduces the "weak cusp" found by Vasiliev for DM deep within the spike and away from its boundaries. The DM density in the cusp varies as $r^{-1/2}$ for $s$-wave annihilation, where $r$ is the distance from the central black hole, and is not a flat "plateau" profile. We then extend this model by incorporating a loss cone that accounts for the capture of DM particles by the hole. The loss cone is implemented by a boundary condition that removes capture orbits, resulting in an anisotropic distribution function. Finally, we evolve an initial spike distribution function by integrating the Boltzmann equation to show how the weak cusp grows and its density decreases with time. We treat two cases, one for $s$-wave a...

  13. Low mass stellar evolution with WIMP capture and annihilation

    CERN Document Server

    Scott, Pat; Fairbairn, Malcolm

    2007-01-01

    Recent work has indicated that WIMP annihilation in stellar cores has the potential to contribute significantly to a star's total energy production. We report on progress in simulating the effects of WIMP capture and annihilation upon stellar structure and evolution near supermassive black holes, using the new DarkStars code. Preliminary results indicate that low-mass stars are the most influenced by WIMP annihilation, which could have consequences for upcoming observational programs.

  14. Research on Dynamic Depreciation of Medical Equipment Based on the χ2 Distribution Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    邓厚斌; 葛毅; 范璐敏; 刘晓雯; 李盈盈

    2012-01-01

    In order to carry out depreciation accounting of medical equipment reasonably, this paper analyses and compares the advantages and disadvantages of several common depreciation methods. Taking the use efficiency of medical equipment into account, it proposes a static depreciation-rate distribution rule fitted by the χ2 distribution probability density function, introduces the benchmark benefit ratio of funds, and establishes a dynamic depreciation method for medical equipment.
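    As a generic illustration of the idea (not the authors' exact formulation), yearly depreciation fractions can be obtained by integrating a χ2 density over one-year intervals and renormalizing over the assumed useful life; the degrees of freedom and lifetime below are placeholders.

```python
from scipy.stats import chi2

def chi2_depreciation_schedule(useful_life_years, dof):
    """Yearly depreciation fractions proportional to the chi-square
    probability mass in each year, renormalized to sum to one."""
    cdf = [chi2.cdf(t, dof) for t in range(useful_life_years + 1)]
    total = cdf[-1] - cdf[0]
    return [(cdf[i + 1] - cdf[i]) / total for i in range(useful_life_years)]

# Example: 8-year useful life, 6 degrees of freedom (both placeholders).
schedule = chi2_depreciation_schedule(8, 6)
print([round(f, 3) for f in schedule], round(sum(schedule), 6))
```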

  15. Particle-antiparticle asymmetries from annihilations

    CERN Document Server

    Baldes, Iason; Petraki, Kalliopi; Volkas, Raymond R

    2014-01-01

    An extensively studied mechanism to create particle-antiparticle asymmetries is the out-of-equilibrium and CP violating decay of a heavy particle. Here we instead examine how asymmetries can arise purely from 2 → 2 annihilations rather than from the usual 1 → 2 decays and inverse decays. We review the general conditions on the reaction rates that arise from S-matrix unitarity and CPT invariance, and show how these are implemented in the context of a simple toy model. We formulate the Boltzmann equations for this model, and present an example solution.

  16. Apparatus for photon activation positron annihilation analysis

    Science.gov (United States)

    Akers, Douglas W.

    2007-06-12

    Non-destructive testing apparatus according to one embodiment of the invention comprises a photon source. The photon source produces photons having predetermined energies and directs the photons toward a specimen being tested. The photons from the photon source result in the creation of positrons within the specimen being tested. A detector positioned adjacent the specimen being tested detects gamma rays produced by annihilation of positrons with electrons. A data processing system operatively associated with the detector produces output data indicative of a lattice characteristic of the specimen being tested.

  17. Vector dark matter annihilation with internal bremsstrahlung

    OpenAIRE

    Bambhaniya, Gulab; Kumar, Jason; Marfatia, Danny; Nayak, Alekha C.; Tomar, Gaurav

    2016-01-01

    We consider scenarios in which the annihilation of self-conjugate spin-1 dark matter to a Standard Model fermion-antifermion final state is chirality suppressed, but where this suppression can be lifted by the emission of an additional photon via internal bremsstrahlung. We find that this scenario can only arise if the initial dark matter state is polarized, which can occur in the context of self-interacting dark matter. In particular, this is possible if the dark matter pair forms a bound st...

  18. Probability Density Analysis of SINR in Massive MIMO Downlink Using Matched Filter Beamformer

    Institute of Scientific and Technical Information of China (English)

    束锋; 李隽; 顾晨; 王进; 周叶; 徐彦青; 钱玉文

    2015-01-01

    In massive MIMO systems, the matched filter (MF) beamformer is an attractive technique because of its extremely low complexity compared with decomposition-based beamforming techniques such as zero forcing and minimum mean square error. We derive an approximate formula for the probability density function (PDF) of the signal-to-interference-and-noise ratio (SINR) at the user terminal when a large number of antennas and the MF beamformer are used at the base station. This formula is essential for deriving and analyzing system performance measures such as the sum rate and the outage probability. Simulations show that the difference between the PDF given by the derived approximate formula and the purely simulated PDF approaches zero as the number of antennas at the base station becomes large.
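    A simple Monte Carlo sketch of the downlink SINR under matched-filter (conjugate) precoding, against which such an approximate PDF could be checked. It assumes i.i.d. Rayleigh channels and equal power allocation, and is a generic illustration rather than the paper's derivation.

```python
import numpy as np

def mf_sinr_samples(n_ant, n_users, snr_db, n_trials=2000, seed=0):
    """Monte Carlo samples of the SINR of user 0 under matched-filter
    (conjugate) precoding with equal per-user power 1/n_users."""
    rng = np.random.default_rng(seed)
    noise = 10.0 ** (-snr_db / 10.0)
    sinr = np.empty(n_trials)
    for i in range(n_trials):
        # i.i.d. Rayleigh channels: rows = users, columns = BS antennas.
        H = (rng.standard_normal((n_users, n_ant))
             + 1j * rng.standard_normal((n_users, n_ant))) / np.sqrt(2.0)
        W = H.conj().T / np.linalg.norm(H, axis=1)   # unit-norm MF precoder columns
        G = np.abs(H @ W) ** 2 / n_users             # effective gains with power split
        sinr[i] = G[0, 0] / (G[0, 1:].sum() + noise)
    return sinr

samples = mf_sinr_samples(n_ant=64, n_users=8, snr_db=10.0)
pdf, edges = np.histogram(samples, bins=50, density=True)  # empirical PDF
```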

  19. Probability theory and mathematical statistics for engineers

    CERN Document Server

    Pugachev, V S

    1984-01-01

    Probability Theory and Mathematical Statistics for Engineers focuses on the concepts of probability theory and mathematical statistics for finite-dimensional random variables.The publication first underscores the probabilities of events, random variables, and numerical characteristics of random variables. Discussions focus on canonical expansions of random vectors, second-order moments of random vectors, generalization of the density concept, entropy of a distribution, direct evaluation of probabilities, and conditional probabilities. The text then examines projections of random vector

  20. Cosmological dynamics in tomographic probability representation

    OpenAIRE

    Man'ko, V. I.; G. Marmo(Università di Napoli and INFN, Napoli, Italy); Stornaiolo, C.

    2004-01-01

    The probability representation for quantum states of the universe in which the states are described by a fair probability distribution instead of wave function (or density matrix) is developed to consider cosmological dynamics. The evolution of the universe state is described by standard positive transition probability (tomographic transition probability) instead of the complex transition probability amplitude (Feynman path integral) of the standard approach. The latter one is expressed in te...

  1. Dark Matter with multi-annihilation channels and AMS-02 positron excess and antiproton

    CERN Document Server

    Chen, Yu-Heng; Tseng, Po-Yan

    2015-01-01

    AMS-02 provided the unprecedented statistics in the measurement of the positron fraction from cosmic rays. That may offer a unique opportunity to distinguish the positron spectrum coming from various dark matter (DM) annihilation channels, if DM is the source of this positron excess. Therefore, we consider the scenario that the DM can annihilate into leptonic, quark, and massive gauge boson channels simultaneously with floating branching ratios to test this hypothesis. We also study the impacts from MAX, MED, and MIN diffusion models as well as from isothermal, NFW, and Einasto DM density profiles on our results. We found that under this DM annihilation scenario it is difficult to fit the AMS-02 $\\frac{e^+}{e^++e^-}$ data while evading the PAMELA $\\bar{p}/p$ constraint, except for the combination of MED diffusion model with the Einasto density profile, where the DM mass between 450 GeV to 1.2 TeV can satisfy both data sets at 95\\% CL. Finally, we compare to the newest AMS-02 antiproton data.

  2. Search for dark matter annihilation in the Galactic Center with IceCube-79

    Energy Technology Data Exchange (ETDEWEB)

    Aartsen, M.G.; Hill, G.C.; Robertson, S.; Whelan, B.J. [University of Adelaide, School of Chemistry and Physics, Adelaide, SA (Australia); Abraham, K.; Bernhard, A.; Coenders, S.; Gross, A.; Holzapfel, K.; Huber, M.; Jurkovic, M.; Krings, K.; Resconi, E.; Veenkamp, J. [Technische Universitaet Muenchen, Garching (Germany); Ackermann, M.; Berghaus, P.; Bernardini, E.; Bretz, H.P.; Cruz Silva, A.H.; Gluesenkamp, T.; Gora, D.; Jacobi, E.; Kaminsky, B.; Karg, T.; Middell, E.; Mohrmann, L.; Nahnhauer, R.; Schoenwald, A.; Shanidze, R.; Spiering, C.; Stasik, A.; Stoessl, A.; Strotjohann, N.L.; Terliuk, A.; Usner, M.; Yanez, J.P. [DESY, Zeuthen (Germany); Adams, J.; Brown, A.M. [University of Canterbury, Department of Physics and Astronomy, Private Bag 4800, Christchurch (New Zealand); Aguilar, J.A.; Heereman, D.; Meagher, K.; Meures, T.; O' Murchadha, A.; Pinat, E. [Universite Libre de Bruxelles, Science Faculty CP230, Brussels (Belgium); Ahlers, M.; Arguelles, C.; Beiser, E.; BenZvi, S.; Braun, J.; Chirkin, D.; Day, M.; Desiati, P.; Diaz-Velez, J.C.; Fadiran, O.; Fahey, S.; Feintzeig, J.; Ghorbani, K.; Gladstone, L.; Halzen, F.; Hanson, K.; Hoshina, K.; Jero, K.; Karle, A.; Kelley, J.L.; Kheirandish, A.; McNally, F.; Merino, G.; Middlemas, E.; Morse, R.; Richter, S.; Sabbatini, L.; Tobin, M.N.; Tosi, D.; Vandenbroucke, J.; Van Santen, J.; Wandkowsky, N.; Weaver, C.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wille, L. [Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Department of Physics, Madison, WI (United States); Ahrens, M.; Bohm, C.; Dumm, J.P.; Finley, C.; Flis, S.; Hulth, P.O.; Hultqvist, K.; Walck, C.; Wolf, M.; Zoll, M. [Oskar Klein Centre, Stockholm University, Department of Physics, Stockholm (Sweden); Altmann, D.; Classen, L.; Kappes, A.; Tselengidou, M. [Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen Centre for Astroparticle Physics, Erlangen (Germany); Anderson, T.; Arlen, T.C.; Dunkman, M.; Eagan, R.; Groh, J.C.; Huang, F.; Keivani, A.; Lanfranchi, J.L.; Quinnan, M.; Smith, M.W.E.; Stanisha, N.A.; Tesic, G. [Pennsylvania State University, Department of Physics, University Park, PA (United States); Archinger, M.; Baum, V.; Boeser, S.; Eberhardt, B.; Ehrhardt, T.; Koepke, L.; Kroll, G.; Luenemann, J.; Sander, H.G.; Schatto, K.; Wiebe, K. [University of Mainz, Institute of Physics, Mainz (Germany); Auffenberg, J.; Bissok, M.; Blumenthal, J.; Glagla, M.; Gier, D.; Gretskov, P.; Haack, C.; Hansmann, B.; Hellwig, D.; Kemp, J.; Konietz, R.; Koob, A.; Leuermann, M.; Leuner, J.; Paul, L.; Puetz, J.; Raedel, L.; Reimann, R.; Rongen, M.; Schimp, M.; Schoenen, S.; Schukraft, A.; Stahlberg, M.; Vehring, M.; Wallraff, M.; Wichary, C.; Wiebusch, C.H. [RWTH Aachen University, III. Physikalisches Institut, Aachen (Germany); Bai, X. [South Dakota School of Mines and Technology, Physics Department, Rapid City, SD (United States); Barwick, S.W.; Yodh, G. [University of California, Department of Physics and Astronomy, Irvine, CA (United States); Bay, R.; Filimonov, K.; Price, P.B.; Woschnagg, K. [University of California, Department of Physics, Berkeley, CA (United States); Beatty, J.J. [Ohio State University, Department of Physics and Center for Cosmology and Astro-Particle Physics, Columbus, OH (United States); Ohio State University, Department of Astronomy, Columbus, OH (United States); Becker Tjus, J.; Bos, F.; Eichmann, B.; Fedynitch, A.; Kroll, M.; Saba, S.M.; Schoeneberg, S. 
[Ruhr-Universitaet Bochum, Fakultaet fuer Physik and Astronomie, Bochum (Germany); Becker, K.H.; Bindig, D.; Fischer-Wasels, T.; Helbing, K.; Hickford, S.; Hoffmann, R.; Klaes, J.; Kopper, S.; Naumann, U.; Obertacke, A.; Omairat, A.; Posselt, J.; Soldin, D. [University of Wuppertal, Department of Physics, Wuppertal (Germany); Berley, D.; Blaufuss, E.; Cheung, E.; Christy, B.; Felde, J.; Hellauer, R.; Hoffman, K.D.; Huelsnitz, W.; Maunu, R.; Olivas, A.; Redl, P.; Schmidt, T.; Sullivan, G.W.; Wissing, H. [University of Maryland, Department of Physics, College Park, MD (United States); Besson, D.Z. [University of Kansas, Department of Physics and Astronomy, Lawrence, KS (United States); Binder, G.; Gerhardt, L.; Ha, C.; Klein, S.R.; Miarecki, S. [University of California, Department of Physics, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Boersma, D.J.; Botner, O.; Euler, S.; Hallgren, A.; Collaboration: IceCube Collaboration; and others

    2015-10-15

    The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-) neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the self-annihilation cross-section, ⟨σ{sub A}⟩, for WIMP masses ranging from 30 GeV up to 10 TeV, assuming cuspy (NFW) and flat-cored (Burkert) dark matter halo profiles, reaching down to ≅ 4 x 10{sup -24} cm{sup 3}s{sup -1} and ≅ 2.6 x 10{sup -23} cm{sup 3}s{sup -1} for the ν anti-ν channel, respectively. (orig.)

  3. Search for Dark Matter Annihilation in the Galactic Center with IceCube-79

    CERN Document Server

    Aartsen, M G; Ackermann, M; Adams, J; Aguilar, J A; Ahlers, M; Ahrens, M; Altmann, D; Anderson, T; Archinger, M; Arguelles, C; Arlen, T C; Auffenberg, J; Bai, X; Barwick, S W; Baum, V; Bay, R; Beatty, J J; Tjus, J Becker; Becker, K -H; Beiser, E; BenZvi, S; Berghaus, P; Berley, D; Bernardini, E; Bernhard, A; Besson, D Z; Binder, G; Bindig, D; Bissok, M; Blaufuss, E; Blumenthal, J; Boersma, D J; Bohm, C; Börner, M; Bos, F; Bose, D; Böser, S; Botner, O; Braun, J; Brayeur, L; Bretz, H -P; Brown, A M; Buzinsky, N; Casey, J; Casier, M; Cheung, E; Chirkin, D; Christov, A; Christy, B; Clark, K; Classen, L; Coenders, S; Cowen, D F; Silva, A H Cruz; Daughhetee, J; Davis, J C; Day, M; de André, J P A M; De Clercq, C; Dembinski, H; De Ridder, S; Desiati, P; de Vries, K D; de Wasseige, G; de With, M; DeYoung, T; Díaz-Vélez, J C; Dumm, J P; Dunkman, M; Eagan, R; Eberhardt, B; Ehrhardt, T; Eichmann, B; Euler, S; Evenson, P A; Fadiran, O; Fahey, S; Fazely, A R; Fedynitch, A; Feintzeig, J; Felde, J; Filimonov, K; Finley, C; Fischer-Wasels, T; Flis, S; Fuchs, T; Glagla, M; Gaisser, T K; Gaior, R; Gallagher, J; Gerhardt, L; Ghorbani, K; Gier, D; Gladstone, L; Glüsenkamp, T; Goldschmidt, A; Golup, G; Gonzalez, J G; Góra, D; Grant, D; Gretskov, P; Groh, J C; Groß, A; Ha, C; Haack, C; Ismail, A Haj; Hallgren, A; Halzen, F; Hansmann, B; Hanson, K; Hebecker, D; Heereman, D; Helbing, K; Hellauer, R; Hellwig, D; Hickford, S; Hignight, J; Hill, G C; Hoffman, K D; Hoffmann, R; Holzapfe, K; Homeier, A; Hoshina, K; Huang, F; Huber, M; Huelsnitz, W; Hulth, P O; Hultqvist, K; In, S; Ishihara, A; Jacobi, E; Japaridze, G S; Jero, K; Jurkovic, M; Kaminsky, B; Kappes, A; Karg, T; Karle, A; Kauer, M; Keivani, A; Kelley, J L; Kemp, J; Kheirandish, A; Kiryluk, J; Kläs, J; Klein, S R; Kohnen, G; Koirala, R; Kolanoski, H; Konietz, R; Koob, A; Köpke, L; Kopper, C; Kopper, S; Koskinen, D J; Kowalski, M; Krings, K; Kroll, G; Kroll, M; Kunnen, J; Kurahashi, N; Kuwabara, T; Labare, M; Lanfranchi, J L; Larson, M J; Lesiak-Bzdak, M; Leuermann, M; Leuner, J; Lünemann, J; Madsen, J; Maggi, G; Mahn, K B M; Maruyama, R; Mase, K; Matis, H S; Maunu, R; McNally, F; Meagher, K; Medici, M; Meli, A; Menne, T; Merino, G; Meures, T; Miarecki, S; Middell, E; Middlemas, E; Miller, J; Mohrmann, L; Montaruli, T; Morse, R; Nahnhauer, R; Naumann, U; Niederhausen, H; Nowicki, S C; Nygren, D R; Obertacke, A; Olivas, A; Omairat, A; O'Murchadha, A; Palczewski, T; Pandya, H; Paul, L; Pepper, J A; Heros, C Pérez de los; Pfendner, C; Pieloth, D; Pinat, E; Posselt, J; Price, P B; Przybylski, G T; Pütz, J; Quinnan, M; Rädel, L; Rameez, M; Rawlins, K; Redl, P; Reimann, R; Relich, M; Resconi, E; Rhode, W; Richman, M; Richter, S; Riedel, B; Robertson, S; Rongen, M; Rott, C; Ruhe, T; Ruzybayev, B; Ryckbosch, D; Saba, S M; Sabbatini, L; Sander, H -G; Sandrock, A; Sandroos, J; Sarkar, S; Schatto, K; Scheriau, F; Schimp, M; Schmidt, T; Schmitz, M; Schoenen, S; Schöneberg, S; Schönwald, A; Schukraft, A; Schulte, L; Seckel, D; Seunarine, S; Shanidze, R; Smith, M W E; Soldin, D; Spiczak, G M; Spiering, C; Stahlberg, M; Stamatikos, M; Stanev, T; Stanisha, N A; Stasik, A; Stezelberger, T; Stokstad, R G; Stößl, A; Strahler, E A; Ström, R; Strotjohann, N L; Sullivan, G W; Sutherland, M; Taavola, H; Taboada, I; Ter-Antonyan, S; Terliuk, A; Tešić, G; Tilav, S; Toale, P A; Tobin, M N; Tosi, D; Tselengidou, M; Unger, E; Usner, M; Vallecorsa, S; van Eijndhoven, N; Vandenbroucke, J; van Santen, J; Vanheule, S; Veenkamp, J; Vehring, M; Voge, M; Vraeghe, M; Walck, C; Wallraff, M; 
Wandkowsky, N; Weaver, Ch; Wendt, C; Westerhoff, S; Whelan, B J; Whitehorn, N; Wichary, C; Wiebe, K; Wiebusch, C H; Wille, L; Williams, D R; Wissing, H; Wolf, M; Wood, T R; Woschnagg, K; Xu, D L; Xu, X W; Xu, Y; Yanez, J P; Yodh, G; Yoshida, S; Zarzhitsky, P; Zoll, M

    2015-01-01

    The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-) neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the self-annihilation cross-section, $\\langle\\sigma_{\\rm A}\\rangle$, for WIMP ma...

  4. Local electron-electron interaction strength in ferromagnetic nickel determined by spin-polarized positron annihilation

    Science.gov (United States)

    Ceeh, Hubert; Weber, Josef Andreass; Böni, Peter; Leitner, Michael; Benea, Diana; Chioncel, Liviu; Ebert, Hubert; Minár, Jan; Vollhardt, Dieter; Hugenschmidt, Christoph

    2016-02-01

    We employ a positron annihilation technique, the spin-polarized two-dimensional angular correlation of annihilation radiation (2D-ACAR), to measure the spin-difference spectra of ferromagnetic nickel. The experimental data are compared with the theoretical results obtained within a combination of the local spin density approximation (LSDA) and the many-body dynamical mean-field theory (DMFT). We find that the self-energy defining the electronic correlations in Ni leads to anisotropic contributions to the momentum distribution. By direct comparison of the theoretical and experimental results we determine the strength of the local electronic interaction U in ferromagnetic Ni as 2.0 ± 0.1 eV.

  5. Impact of dark matter decays and annihilations on structure formation

    NARCIS (Netherlands)

    Mapelli, M.; Ripamonti, E.

    2007-01-01

    Abstract: We derived the evolution of the energy deposition in the intergalactic medium (IGM) by different decaying (or annihilating) dark matter (DM) candidates. Heavy annihilating DM particles (with mass larger than a few GeV) have no influence on reionization and heating, even if we assume that a

  6. Direct evidence for positron annihilation from shallow traps

    DEFF Research Database (Denmark)

    Linderoth, Søren; Hidalgo, C.

    1987-01-01

    For deformed Ag the temperature dependence of the positron lifetime parameters is followed between 12 and 300 K. Clear direct evidence for positron trapping and annihilation at shallow traps, with a positron binding energy of 9±2 meV and annihilation characteristics very similar to those in the...

  7. CMB constraint on dark matter annihilation after Planck 2015

    Directory of Open Access Journals (Sweden)

    Masahiro Kawasaki

    2016-05-01

    Full Text Available We update the constraint on the dark matter annihilation cross section by using the recent measurements of the CMB anisotropy by the Planck satellite. We fully calculate the cascade of dark matter annihilation products and their effects on ionization, heating and excitation of the hydrogen, hence do not rely on any assumption on the energy fractions that cause these effects.

  8. Nucleon-antinucleon annihilation in chiral soliton model

    International Nuclear Information System (INIS)

    We investigate the annihilation process of nucleons in the chiral soliton model by the path integral method. A soliton-antisoliton pair is shown to decay into mesons at a range of about 1 fm, defined by the S S̄ potential. The contribution of the annihilation channel to the elastic scattering is discussed

  9. Cohomology of projective schemes: From annihilators to vanishing

    OpenAIRE

    Chardin, Marc

    2002-01-01

    We provide bounds on the Castelnuovo-Mumford regularity in terms of "defining equations" by using elements that annihilate some cohomology modules, inspired by works of Miyazaki, Nagel, Schenzel and Vogel. The elements in these annihilators are provided either by liaison or by tight closure theories. Our results hold in any characteristic.

  10. Nucleon-antinucleon annihilation in chiral soliton model

    International Nuclear Information System (INIS)

    We investigate the annihilation process of nucleons in the chiral soliton model by the path integral method. A soliton-antisoliton pair is shown to decay into pions at a range of the order of 1 fm, defined by the S S̄ potential. The contribution of the annihilation channel to elastic scattering is discussed. (author). 14 refs, 1 fig

  11. CMB constraint on dark matter annihilation after Planck 2015

    OpenAIRE

    Masahiro Kawasaki; Kazunori Nakayama; Toyokazu Sekiguchi

    2016-01-01

    We update the constraint on the dark matter annihilation cross section by using the recent measurements of the CMB anisotropy by the Planck satellite. We fully calculate the cascade of dark matter annihilation products and their effects on ionization, heating and excitation of the hydrogen, hence do not rely on any assumption on the energy fractions that cause these effects.

  12. Studies of defects and defect agglomerates by positron annihilation spectroscopy

    DEFF Research Database (Denmark)

    Eldrup, Morten Mostgaard; Singh, B.N.

    1997-01-01

    A brief introduction to positron annihilation spectroscopy (PAS), and in particular to its use for defect studies in metals, is given. Positrons injected into a metal may become trapped in defects such as vacancies, vacancy clusters, voids, bubbles and dislocations and subsequently annihilate from...... advantages of the use of PAS are pointed out. (C) 1997 Elsevier Science B.V....

  13. Neutrino signals from electroweak bremsstrahlung in solar WIMP annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Nicole F.; Brennan, Amelia J.; Jacques, Thomas D., E-mail: n.bell@unimelb.edu.au, E-mail: a.brennan@pgrad.unimelb.edu.au, E-mail: thomas.jacques@asu.edu [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Melbourne, Victoria 3010 (Australia)

    2012-10-01

    Bremsstrahlung of W and Z gauge bosons, or photons, can be an important dark matter annihilation channel. In many popular models in which the annihilation to a pair of light fermions is helicity suppressed, these bremsstrahlung processes can lift the suppression and thus become the dominant annihilation channels. The resulting dark matter annihilation products contain a large, energetic, neutrino component. We consider solar WIMP annihilation in the case where electroweak bremsstrahlung dominates, and calculate the resulting neutrino spectra. The flux consists of primary neutrinos produced in processes such as χχ → ν̄νZ and χχ → ν̄lW, and secondary neutrinos produced via the decays of gauge bosons and charged leptons. After dealing with the neutrino propagation and flavour evolution in the Sun, we consider the prospects for detection in neutrino experiments on Earth. We compare our signal with that for annihilation to W{sup +}W{sup −}, and show that, for a given annihilation rate, the bremsstrahlung annihilation channel produces a larger signal by a factor of a few.

  14. Photoinduced carrier annihilation in silicon pn junction

    Science.gov (United States)

    Sameshima, Toshiyuki; Motoki, Takayuki; Yasuda, Keisuke; Nakamura, Tomohiko; Hasumi, Masahiko; Mizuno, Toshihisa

    2015-08-01

    We report an analysis of the photo-induced minority carrier effective lifetime (τeff) in a p+n junction formed on the top surface of an n-type silicon substrate by ion implantation of boron and phosphorus atoms at the top and bottom surfaces, followed by activation by microwave heating. Bias voltages were applied to the p+ boron-doped surface with the n+ phosphorus-doped surface kept at 0 V. The values of τeff were lower than 1 × 10^-5 s under the reverse-bias condition. On the other hand, τeff markedly increased to 1.4 × 10^-4 s as the forward-bias voltage increased to 0.7 V and then leveled off, when continuous-wave 635 nm light was illuminated at 0.74 mW/cm2 on the p+ surface. The carrier annihilation velocity S_p+ at the p+ surface region was numerically estimated from the experimental τeff. S_p+ ranged from 4000 to 7200 cm/s under the reverse-bias condition when the carrier annihilation velocity S_n+ at the n+ surface region was assumed to be a constant value of 100 cm/s. S_p+ markedly decreased to 265 cm/s as the forward-bias voltage increased to 0.7 V.
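    A commonly used approximation relates the effective lifetime to a bulk lifetime and the two surface annihilation (recombination) velocities, 1/τeff ≈ 1/τbulk + (S_p+ + S_n+)/W, valid when the surfaces are not diffusion-limited. The sketch below inverts this relation for S_p+; the wafer thickness and bulk lifetime are assumed values, not numbers from the paper.

```python
def s_front_from_tau_eff(tau_eff_s, tau_bulk_s, s_back_cm_s, thickness_cm):
    """Invert 1/tau_eff ~ 1/tau_bulk + (S_front + S_back)/W for S_front.
    Only meaningful in the surface-limited (low-S) regime."""
    surface_term = 1.0 / tau_eff_s - 1.0 / tau_bulk_s
    return surface_term * thickness_cm - s_back_cm_s

# Assumed values (not from the paper): 500 um thick wafer, 1 ms bulk lifetime,
# S_n+ fixed to 100 cm/s as in the abstract.
print(s_front_from_tau_eff(1.0e-5, 1.0e-3, 100.0, 0.05))  # reverse bias
print(s_front_from_tau_eff(1.4e-4, 1.0e-3, 100.0, 0.05))  # forward bias, 0.7 V
```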

  15. Searching for Dark Matter Annihilation in M87

    CERN Document Server

    Saxena, Sheetal; Rüger, Michael; Summa, Alexander; Mannheim, Karl

    2011-01-01

    Clusters of galaxies, such as the Virgo cluster, host enormous quantities of dark matter, making them prime targets for efforts in indirect dark matter detection via potential radiative signatures from annihilation of dark matter particles and subsequent radiative losses of annihilation products. However, a careful study of ubiquitous astrophysical backgrounds is mandatory to single out potential evidence for dark matter annihilation. Here, we construct a multiwavelength spectral energy distribution for the central radio galaxy in the Virgo cluster, M87, using a state-of-the-art numerical Synchrotron Self Compton approach. Fitting recent Chandra, Fermi-LAT and Cherenkov observations, we probe different dark matter annihilation scenarios including a full treatment of the inverse Compton losses from electrons and positrons produced in the annihilation. It is shown that such a template can substantially improve upon existing dark matter detection limits.

  16. Constraints on dark matter annihilations from diffuse gamma-ray emission in the Galaxy

    Energy Technology Data Exchange (ETDEWEB)

    Tavakoli, Maryam; Evoli, Carmelo [II. Institut für Theoretische Physik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg (Germany); Cholis, Ilias [Fermi National Accelerator Laboratory, Center for Particle Astrophysics, Batavia, IL 60510 (United States); Ullio, Piero, E-mail: maryam.tavakoli@desy.de, E-mail: cholis@fnal.gov, E-mail: carmelo.evoli@desy.de, E-mail: ullio@sissa.it [SISSA, Via Bonomea 265, 34136 Trieste (Italy)

    2014-01-01

    Recent advances in γ-ray, cosmic-ray, infrared and radio astronomy have allowed us to develop a significantly better understanding of the properties of the galactic medium in the last few years. In this work, using the DRAGON code, which numerically solves the CR propagation equation, and calculating γ-ray emissivities on a 2-dimensional grid enclosing the Galaxy, we study in a self-consistent manner models for the galactic diffuse γ-ray emission. Our models are cross-checked against both the available CR and γ-ray data. We address the extent to which dark matter annihilations in the Galaxy can contribute to the diffuse γ-ray flux towards different directions on the sky. Moreover, we discuss the impact that astrophysical uncertainties of non-DM nature have on the derived γ-ray limits. Such uncertainties are related to the diffusion properties of the Galaxy, the interstellar gas and the interstellar radiation field energy densities. Light ~10 GeV dark matter annihilating dominantly to hadrons is most strongly constrained by γ-ray observations towards the inner parts of the Galaxy and is influenced the most by assumptions about the gas distribution, while TeV-scale DM annihilating dominantly to leptons has its tightest constraints from observations towards the galactic center avoiding the galactic disk plane, with the main astrophysical uncertainty being the radiation field energy density. In addition, we present a method of deriving constraints on the dark matter distribution profile from the diffuse γ-ray spectra. These results depend critically on the assumed mass of the dark matter particles and the type of their final annihilation products.

  17. Asymptotic Expansions of the Probability Density Function and the Distribution Function of the Chi-Square Distribution

    Institute of Scientific and Technical Information of China (English)

    陈刚; 王梦婕

    2014-01-01

    Through a standardizing transformation of the independent variable of the χ2 probability density function with n degrees of freedom, the density can be expanded as √(2n) χ2(x;n) = f(t;n) = [1 + r1(t)/√n + r2(t)/n + r3(t)/(n√n) + r4(t)/n²] φ(t) + o(1/n²), where φ(t) is the density function of the standard normal distribution and ri(t) (1 ≤ i ≤ 4) is a polynomial of degree 3i in t. An approximate formula for computing the χ2 density follows from this expansion. We further establish integral recurrence relations for the power coefficients of φ(t) and obtain the asymptotic expansion of the χ2 distribution function. Finally, the effectiveness of these results in practical applications is verified by numerical calculations.
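    The leading term of the expansion simply says that the standardized χ2 density approaches the standard normal density φ(t); this limit (independent of the higher-order polynomials ri) can be checked numerically with a short sketch:

```python
import numpy as np
from scipy.stats import chi2, norm

def standardized_chi2_pdf(t, n):
    """Density of T = (X - n) / sqrt(2n) when X follows chi-square with n dof."""
    return np.sqrt(2.0 * n) * chi2.pdf(n + np.sqrt(2.0 * n) * t, df=n)

t = np.linspace(-3.0, 3.0, 13)
for n in (10, 100, 1000):
    max_err = np.max(np.abs(standardized_chi2_pdf(t, n) - norm.pdf(t)))
    print(n, max_err)   # deviation shrinks roughly like 1/sqrt(n)
```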

  18. State-selective high-energy excitation of nuclei by resonant positron annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Belov, Nikolay A., E-mail: belov@mpi-hd.mpg.de; Harman, Zoltán

    2015-02-04

    In the annihilation of a positron with a bound atomic electron, the virtual γ photon created may excite the atomic nucleus. We put forward this effect as a spectroscopic tool for an energy-selective excitation of nuclear transitions. This scheme can efficiently populate nuclear levels of arbitrary multipolarities in the MeV regime, including giant resonances and monopole transitions. In certain cases, it may have higher cross sections than the conventionally used Coulomb excitation and it can even occur with high probability when the latter is energetically forbidden.

  19. State-selective high-energy excitation of nuclei by resonant positron annihilation

    Directory of Open Access Journals (Sweden)

    Nikolay A. Belov

    2015-02-01

    Full Text Available In the annihilation of a positron with a bound atomic electron, the virtual γ photon created may excite the atomic nucleus. We put forward this effect as a spectroscopic tool for an energy-selective excitation of nuclear transitions. This scheme can efficiently populate nuclear levels of arbitrary multipolarities in the MeV regime, including giant resonances and monopole transitions. In certain cases, it may have higher cross sections than the conventionally used Coulomb excitation and it can even occur with high probability when the latter is energetically forbidden.

  20. Effects of velocity-dependent dark matter annihilation on the energy spectrum of the extragalactic gamma-ray background

    Science.gov (United States)

    Campbell, Sheldon; Dutta, Bhaskar; Komatsu, Eiichiro

    2010-11-01

    We calculate the effects of velocity-dependent dark matter annihilation cross sections on the intensity of the extragalactic gamma-ray background. Our formalism does not assume a locally thermal distribution of dark matter particles in phase space, and is valid for arbitrary velocity-dependent annihilation. Although the model of the dark matter distribution we use is simple and may not describe nature precisely, it is sufficient for quantifying the effects of velocity-dependent annihilations: different halo models would be expected to produce the same general features. As concrete examples, we calculate the effects of p-wave annihilation (with the v-weighted cross section of σv = a + bv^2) on the mean intensity of extragalactic gamma rays produced in cosmological dark matter halos. This velocity variation makes the shape of the energy spectrum harder, but this change in the shape is too small to see unless b/a ≳ 10^6. While we find no such models in the parameter space of the minimal supersymmetric standard model, we show that it is possible to find b/a ≳ 10^6 in the extension MSSM ⊗ U(1)_{B-L}. However, we find that the most dominant effect of the p-wave annihilation is the suppression of the amplitude of the gamma-ray background. A nonzero b at the dark matter freeze-out epoch requires a smaller value of a in order for the relic density constraint to be satisfied, suppressing the amplitude by a factor as low as 10^-6 for a thermal relic. Nonthermal relics will have weaker amplitude suppression. As another velocity-dependent effect, we calculate the spectrum for s-wave annihilation into fermions enhanced by the attractive Sommerfeld effect. Resonances associated with this effect result in significantly enhanced intensities, with a slightly softer energy spectrum.
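    The freeze-out versus present-day suppression described above can be illustrated with the standard leading-order thermal average, ⟨σv⟩ ≈ a + 6b/x with x = m/T (a textbook expansion, not the paper's full phase-space calculation); typical halo velocities today are v ~ 10^-3 c.

```python
# sigma*v = a + b*v^2, with v the relative velocity in units of c.
CANONICAL = 3e-26  # cm^3/s, roughly the thermal-relic cross section

def sigma_v_thermal(a, b, x):
    """Leading-order thermal average at temperature T = m/x, using <v^2> ~ 6/x."""
    return a + 6.0 * b / x

def sigma_v_halo(a, b, v_today=1e-3):
    """sigma*v at a typical halo velocity today."""
    return a + b * v_today**2

# Fix <sigma v> at freeze-out (x ~ 20) to the canonical value and see how the
# present-day rate is suppressed as b/a grows (purely illustrative numbers).
for b_over_a in (0.0, 1e2, 1e6):
    a = CANONICAL / (1.0 + 6.0 * b_over_a / 20.0)
    b = b_over_a * a
    print(b_over_a, sigma_v_halo(a, b) / CANONICAL)
```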

  1. Holographic Vortex Pair Annihilation in Superfluid Turbulence

    CERN Document Server

    Du, Yiqiang; Tian, Yu; Zhang, Hongbao

    2014-01-01

    We make a first principles investigation of the dynamical evolution of the vortex number in a two-dimensional (2D) turbulent superfluid by holography, through numerically solving its highly non-trivial gravity dual. With randomly placed vortices and antivortices prepared as initial states, we find that the temporal evolution of the vortex number is well fit statistically, from a very early time on, by a two-body decay law characteristic of the relaxation process driven by vortex pair annihilation. In particular, after subtracting the universal offset, the power-law fit indicates that our holographic turbulent superfluid exhibits a decay pattern apparently different from that of the superfluid recently studied experimentally in highly oblate Bose-Einstein condensates.
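    The two-body decay law with an offset mentioned above corresponds to dN/dt = -Γ (N - N∞)^2, i.e. N(t) = N∞ + (N0 - N∞)/(1 + Γ(N0 - N∞)t). The sketch below fits this form to synthetic data only; it is not the holographic simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_body_decay(t, n0, gamma, offset):
    """N(t) solving dN/dt = -gamma * (N - offset)^2 with N(0) = n0."""
    return offset + (n0 - offset) / (1.0 + gamma * (n0 - offset) * t)

# Synthetic vortex-number data with noise (illustrative parameters only).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 50.0, 60)
data = two_body_decay(t, 400.0, 5e-4, 20.0) + rng.normal(0.0, 5.0, t.size)

popt, _ = curve_fit(two_body_decay, t, data, p0=(350.0, 1e-3, 10.0))
print(popt)  # recovered (N0, Gamma, offset)
```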

  2. Simulation of the annihilation emission of galactic positrons

    International Nuclear Information System (INIS)

    Positrons annihilate in the central region of our Galaxy. This has been known since the detection of a strong emission line centered at an energy of 511 keV in the direction of the Galactic center. This gamma-ray line is emitted during the annihilation of positrons with electrons of the interstellar medium. The spectrometer SPI, onboard the INTEGRAL observatory, performed spatial and spectral analyses of the positron annihilation emission. This thesis presents a study of the Galactic positron annihilation emission based on models of the different interactions undergone by positrons in the interstellar medium. The models rely on our present knowledge of the properties of the interstellar medium in the Galactic bulge, where most of the positrons annihilate, and of the physics of positrons (production, propagation and annihilation processes). In order to obtain constraints on the positron sources and the physical characteristics of the annihilation medium, we compared the results of the models to the measurements provided by the SPI spectrometer. (author)

  3. Probability density functions characterizing PSC particle size distribution parameters for NAT and STS derived from in situ measurements between 1989 and 2010 above McMurdo Station, Antarctica, and between 1991-2004 above Kiruna, Sweden

    Science.gov (United States)

    Deshler, Terry

    2016-04-01

    Balloon-borne optical particle counters were used to make in situ size-resolved particle concentration measurements within polar stratospheric clouds (PSCs) over 20 years in the Antarctic and over 10 years in the Arctic. The measurements were made primarily during the late winter in the Antarctic and in the early and mid-winter in the Arctic. Measurements in early and mid-winter were also made during 5 years in the Antarctic. For the analysis, bimodal lognormal size distributions are fitted to 250-meter averages of the particle concentration data. The characteristics of these fits, along with temperature, water and nitric acid vapor mixing ratios, are used to classify the PSC observations as either NAT, STS, ice, or some mixture of these. The vapor mixing ratios are obtained from satellite when possible; otherwise assumptions are made. This classification of the data is used to construct probability density functions for NAT, STS, and ice number concentration, median radius and distribution width for mid- and late-winter clouds in the Antarctic and for early and mid-winter clouds in the Arctic. Additional analysis is focused on characterizing the temperature histories associated with the particle classes and the different time periods. The results from these analyses will be presented, and should be useful for setting bounds on retrievals of PSC properties from remote measurements and for constraining model representations of PSCs.
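    The bimodal lognormal form used for the 250-meter averages can be written as the sum of two lognormal modes in radius. The sketch below defines that distribution with placeholder parameters (not values retrieved from the measurements).

```python
import numpy as np

def bimodal_lognormal(r, n1, r1, s1, n2, r2, s2):
    """dN/dr as a sum of two lognormal modes: total number n_i, median radius
    r_i (micrometres), geometric standard deviation s_i (> 1)."""
    def mode(n, rm, s):
        return (n / (r * np.log(s) * np.sqrt(2.0 * np.pi))
                * np.exp(-0.5 * (np.log(r / rm) / np.log(s)) ** 2))
    return mode(n1, r1, s1) + mode(n2, r2, s2)

r = np.logspace(-2, 1, 400)                      # radius grid, micrometres
dNdr = bimodal_lognormal(r, 10.0, 0.08, 1.6,     # small-particle (STS-like) mode
                         0.05, 1.5, 1.3)         # large-particle (NAT-like) mode
print(np.trapz(dNdr, r))                         # integrates back to ~n1 + n2
```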

  4. Probability 1/e

    Science.gov (United States)

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
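    One classic instance of the 1/e phenomenon is the matching (derangement) problem: the probability that a random permutation has no fixed point tends to 1/e. A quick Monte Carlo check (a generic illustration, not necessarily one of the article's three problems):

```python
import math
import random

def prob_no_fixed_point(n, trials=100_000, seed=0):
    """Estimate P(a random permutation of n items has no fixed point)."""
    rng = random.Random(seed)
    items = list(range(n))
    hits = 0
    for _ in range(trials):
        perm = items[:]
        rng.shuffle(perm)
        if all(perm[i] != i for i in range(n)):
            hits += 1
    return hits / trials

print(prob_no_fixed_point(10), 1.0 / math.e)  # both close to 0.3679
```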

  5. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  6. Current-induced spin polarization on a Pt surface: A new approach using spin-polarized positron annihilation spectroscopy

    International Nuclear Information System (INIS)

    Transversely spin-polarized positrons were injected near Pt and Au surfaces under an applied electric current. The three-photon annihilation of spin-triplet positronium, which was emitted from the surfaces into vacuum, was observed. When the positron spin polarization was perpendicular to the current direction, the maximum asymmetry of the three-photon annihilation intensity was observed upon current reversal for the Pt surfaces, whereas it was significantly reduced for the Au surface. The experimental results suggest that electrons near the Pt surfaces were in-plane and transversely spin-polarized with respect to the direction of the electric current. The maximum electron spin polarization was estimated to be more than 0.01 (1%). - Highlights: • Annihilation probability of positronium emitted from the Pt surface into the vacuum under direct current exhibited asymmetry upon current reversal. • The maximum asymmetry appeared when positron spin polarization and the direct current were perpendicular to each other. • Electrons near the Pt surfaces were in-plane and transversely spin-polarized with respect to the direction of the electric current. • Spin-polarized positronium annihilation provides a unique tool for investigating spin polarization on metal surfaces

  7. Extragalactic Inverse Compton Light from Dark Matter Annihilation and the Pamela Positron Excess

    CERN Document Server

    Profumo, Stefano

    2009-01-01

    We calculate the extragalactic diffuse emission originating from the up-scattering of cosmic microwave photons by energetic electrons and positrons produced in particle dark matter annihilation events at all redshifts and in all halos. We outline the observational constraints on this emission and we study its dependence on both the particle dark matter model (including the particle mass and its dominant annihilation final state) and on assumptions on structure formation and on the density profile of halos. We find that for low-mass dark matter models, data in the X-ray band provide the most stringent constraints, while the gamma-ray energy range probes models featuring large masses and pair-annihilation rates, and a hard spectrum for the injected electrons and positrons. Specifically, we point out that the all-redshift, all-halo inverse Compton emission from many dark matter models that might provide an explanation to the anomalous positron fraction measured by the Pamela payload severely overproduces the obs...

  8. Extragalactic Inverse Compton Light from Dark Matter annihilation and the Pamela positron excess

    Energy Technology Data Exchange (ETDEWEB)

    Profumo, Stefano [Department of Physics, University of California, 1156 High St, Santa Cruz, CA 95064 (United States); Jeltema, Tesla E., E-mail: profumo@scipp.ucsc.edu, E-mail: tesla@ucolick.org [UCO/Lick Observatories, 1156 High St, Santa Cruz, CA 95064 (United States)

    2009-07-01

    We calculate the extragalactic diffuse emission originating from the up-scattering of cosmic microwave photons by energetic electrons and positrons produced in particle dark matter annihilation events at all redshifts and in all halos. We outline the observational constraints on this emission and we study its dependence on both the particle dark matter model (including the particle mass and its dominant annihilation final state) and on assumptions on structure formation and on the density profile of halos. We find that for low-mass dark matter models, data in the X-ray band provide the most stringent constraints, while the gamma-ray energy range probes models featuring large masses and pair-annihilation rates, and a hard spectrum for the injected electrons and positrons. Specifically, we point out that the all-redshift, all-halo inverse Compton emission from many dark matter models that might provide an explanation to the anomalous positron fraction measured by the Pamela payload severely overproduces the observed extragalactic gamma-ray background.

  9. Multi-photon creation and single-photon annihilation of electron-positron pairs

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Huayu

    2011-04-27

    In this thesis we study multi-photon e{sup +}e{sup -} pair production in a trident process, and single-photon e{sup +}e{sup -} pair annihilation in a triple interaction. The pair production is considered in the collision of a relativistic electron with a strong laser beam, and calculated within the theory of laser-dressed quantum electrodynamics. A regularization method is developed systematically for the resonance problem arising in the multi-photon process. Total production rates, positron spectra, and relative contributions of different reaction channels are obtained in various interaction regimes. Our calculation shows good agreement with existing experimental data from SLAC, and adds further insights into the experimental findings. Besides, we study the process in a manifestly nonperturbative domain, whose accessibility to future all-optical experiments based on laser acceleration is shown. In the single-photon e{sup +}e{sup -} pair annihilation, the recoil momentum is absorbed by a spectator particle. Various kinematic configurations of the three incoming particles are examined. Under certain conditions, the emitted photon exhibits distinct angular and polarization distributions which could facilitate the detection of the process. Considering an equilibrium relativistic e{sup +}e{sup -} plasma, it is found that the single-photon process becomes the dominant annihilation channel for plasma temperatures above 3 MeV. Multi-particle correlation effects are therefore essential for the e{sup +}e{sup -} dynamics at very high density. (orig.)

  10. Optical and microstructural characterization of porous silicon using photoluminescence, SEM and positron annihilation spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, C K [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Nahid, F [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Cheng, C C [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Beling, C D [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Fung, S [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Ling, C C [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Djurisic, A B [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Pramanik, C [Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032 (India); Saha, H [Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032 (India); Sarkar, C K [Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032 (India)

    2007-12-05

    We have studied the dependence of porous silicon morphology and porosity on fabrication conditions. N-type (100) silicon wafers with resistivity of 2-5 {omega} cm were electrochemically etched at various current densities and anodization times. Surface morphology and the thickness of the samples were examined by scanning electron microscopy (SEM). Detailed information of the porous silicon layer morphology with variation of preparation conditions was obtained by positron annihilation spectroscopy (PAS): the depth-defect profile and open pore interconnectivity on the sample surface has been studied using a slow positron beam. Coincidence Doppler broadening spectroscopy (CDBS) was used to study the chemical environment of the samples. The presence of silicon micropores with diameter varying from 1.37 to 1.51 nm was determined by positron lifetime spectroscopy (PALS). Visible luminescence from the samples was observed, which is considered to be a combination effect of quantum confinement and the effect of Si = O double bond formation near the SiO{sub 2}/Si interface according to the results from photoluminescence (PL) and positron annihilation spectroscopy measurements. The work shows that the study of the positronium formed when a positron is implanted into the porous surface provides valuable information on the pore distribution and open pore interconnectivity, which suggests that positron annihilation spectroscopy is a useful tool in the porous silicon micropores' characterization.

  11. Multi-photon creation and single-photon annihilation of electron-positron pairs

    International Nuclear Information System (INIS)

    In this thesis we study multi-photon e+e- pair production in a trident process, and single-photon e+e- pair annihilation in a triple interaction. The pair production is considered in the collision of a relativistic electron with a strong laser beam, and calculated within the theory of laser-dressed quantum electrodynamics. A regularization method is developed systematically for the resonance problem arising in the multi-photon process. Total production rates, positron spectra, and relative contributions of different reaction channels are obtained in various interaction regimes. Our calculation shows good agreement with existing experimental data from SLAC, and adds further insights into the experimental findings. Besides, we study the process in a manifestly nonperturbative domain, whose accessibility to future all-optical experiments based on laser acceleration is shown. In the single-photon e+e- pair annihilation, the recoil momentum is absorbed by a spectator particle. Various kinematic configurations of the three incoming particles are examined. Under certain conditions, the emitted photon exhibits distinct angular and polarization distributions which could facilitate the detection of the process. Considering an equilibrium relativistic e+e- plasma, it is found that the single-photon process becomes the dominant annihilation channel for plasma temperatures above 3 MeV. Multi-particle correlation effects are therefore essential for the e+e- dynamics at very high density. (orig.)

  12. On the Sunyaev-Zel'dovich effect from dark matter annihilation or decay in galaxy clusters

    CERN Document Server

    Lavalle, Julien; Barthes, Julien

    2009-01-01

    We revisit the prospects for detecting the Sunyaev-Zel'dovich (SZ) effect induced by dark matter (DM) annihilation or decay. We show that with standard (or even extreme) assumptions for the properties of the DM particles and the DM halo profile, the optical depth associated with the relativistic electrons injected by DM annihilation or decay is much smaller than that associated with the thermal electrons, when averaged over the angular resolution of current and future experiments. For example we find: $\tau_{\rm DM} \sim 10^{-7}-10^{-6}$ for $m_\chi = 1$ GeV and a density profile $\rho\propto r^{-1}$ for a template cluster located at 50 Mpc and observed within an angular resolution of $10"$, compared to $\tau_{\rm th}\sim 10^{-3}-10^{-2}$. This, together with a full spectral analysis, enables us to demonstrate that, for a template cluster with properties close to those of the nearby ones, the SZ effect due to DM annihilation or decay is far below the sensitivity of the Planck satellite. This is at variance with ...

  13. A new scalar mediated WIMPs with pairs of on-shell mediators in annihilations

    CERN Document Server

    Jia, Lian-Bao

    2016-01-01

    In this article, we focus on scalar/vector WIMPs (weakly interacting massive particles) mediated by a new scalar $\phi$. To explain the Galactic center 1 - 3 GeV gamma-ray excess, we consider the case in which a WIMP pair predominantly annihilates into an on-shell $\phi \phi$ pair which mainly decays to $\tau \bar{\tau}$, with WIMP masses in a range of about 14 - 22 GeV. For a mass of $\phi$ slightly below the WIMP mass, WIMP annihilations are phase-space suppressed today, and the thermally averaged annihilation cross section of WIMPs required to account for the GeV gamma-ray excess can be derived. A small scalar mediator-Higgs field mixing is introduced, which remains viable in interpreting the GeV gamma-ray excess. Once the constraints from the dark matter relic density, indirect detection results, collider experiments, thermal equilibrium in the early universe and dark matter direct detection experiments are considered, we find that parameter space remains. The WIMPs may be detectable at th...

  14. Hypernormal Densities

    OpenAIRE

    Giacomini, Raffaella; Gottschling, Andreas; Haefke, Christian; White, Halbert

    2002-01-01

    We derive a new family of probability densities that have the property of closed-form integrability. This flexible family finds a variety of applications, of which we illustrate density forecasting from models of the AR-ARCH class for U.S. inflation. We find that the hypernormal distribution for the model's disturbances leads to better density forecasts than the ones produced under the assumption that the disturbances are Normal or Student's t.

  15. Constraints on dark matter annihilation in clusters of galaxies with the Fermi large area telescope

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, M.; Ajello, M.; Allafort, A.; Bechtol, K.; Blandford, R.D.; Bloom, E.D.; Borgland, A.W.; Bouvier, A.; Buehler, R. [W.W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Baldini, L.; Bellazzini, R.; Bregeon, J. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Ballet, J. [Laboratoire AIM, CEA-IRFU/CNRS/Université Paris Diderot, Service d' Astrophysique, CEA Saclay, 91191 Gif sur Yvette (France); Barbiellini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34127 Trieste (Italy); Bastieri, D.; Buson, S. [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova (Italy); Bonamente, E. [Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, I-06123 Perugia (Italy); Brandt, T.J. [Centre d' Étude Spatiale des Rayonnements, CNRS/UPS, BP 44346, F-30128 Toulouse Cedex 4 (France); Brigida, M. [Dipartimento di Fisica ' ' M. Merlin' ' dell' Università e del Politecnico di Bari, I-70126 Bari (Italy); Bruel, P., E-mail: tesla@ucolick.org, E-mail: profumo@scipp.ucsc.edu [Laboratoire Leprince-Ringuet, École polytechnique, CNRS/IN2P3, Palaiseau (France); and others

    2010-05-01

    Nearby clusters and groups of galaxies are potentially bright sources of high-energy gamma-ray emission resulting from the pair-annihilation of dark matter particles. However, no significant gamma-ray emission has been detected so far from clusters in the first 11 months of observations with the Fermi Large Area Telescope. We interpret this non-detection in terms of constraints on dark matter particle properties. In particular for leptonic annihilation final states and particle masses greater than ∼ 200 GeV, gamma-ray emission from inverse Compton scattering of CMB photons is expected to dominate the dark matter annihilation signal from clusters, and our gamma-ray limits exclude large regions of the parameter space that would give a good fit to the recent anomalous Pamela and Fermi-LAT electron-positron measurements. We also present constraints on the annihilation of more standard dark matter candidates, such as the lightest neutralino of supersymmetric models. The constraints are particularly strong when including the fact that clusters are known to contain substructure at least on galaxy scales, increasing the expected gamma-ray flux by a factor of ∼ 5 over a smooth-halo assumption. We also explore the effect of uncertainties in cluster dark matter density profiles, finding a systematic uncertainty in the constraints of roughly a factor of two, but similar overall conclusions. In this work, we focus on deriving limits on dark matter models; a more general consideration of the Fermi-LAT data on clusters and clusters as gamma-ray sources is forthcoming.

  16. Estimating Small Probabilities for Langevin Dynamics

    OpenAIRE

    Aristoff, David

    2012-01-01

    The problem of estimating small transition probabilities for overdamped Langevin dynamics is considered. A simplification of Girsanov's formula is obtained in which the relationship between the infinitesimal generator of the underlying diffusion and the change of probability measure corresponding to a change in the potential energy is made explicit. From this formula an asymptotic expression for transition probability densities is derived. Separately the problem of estimating the probability ...

  17. Spectral Gamma-ray Signatures of Cosmological Dark Matter Annihilation

    CERN Document Server

    Bergström, L; Ullio, P; Bergstrom, Lars; Edsjo, Joakim; Ullio, Piero

    2001-01-01

    We propose a new signature for weakly interacting massive particle (WIMP) dark matter, a spectral feature in the diffuse extragalactic gamma-ray radiation. This feature, a sudden drop of the gamma-ray intensity at an energy corresponding to the WIMP mass, comes from the asymmetric distortion of the line due to WIMP annihilation into two gamma-rays caused by the cosmological redshift. Unlike other proposed searches for a line signal, this method is not very sensitive to the exact dark matter density distribution in halos and subhalos. The only requirement is that the mass distribution of substructure on small scales follows approximately the Press-Schechter law, and that smaller halos are on the average denser than large halos, which is a generic outcome of N-body simulations of Cold Dark Matter, and which has observational support. The upcoming Gamma-ray Large Area Space Telescope (GLAST) will be eminently suited to search for these spectral features. For numerical examples, we use rates computed for supersym...

  18. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    Energy Technology Data Exchange (ETDEWEB)

    Taasti, Vicki Trier; Knudsen, Helge [Dept. of Physics and Astronomy, Aarhus University (Denmark); Holzscheiter, Michael H. [Dept. of Physics and Astronomy, Aarhus University (Denmark); Dept. of Physics and Astronomy, University of New Mexico (United States); Sobolevsky, Nikolai [Institute for Nuclear Research of the Russian Academy of Sciences (INR), Moscow (Russian Federation); Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Thomsen, Bjarne [Dept. of Physics and Astronomy, Aarhus University (Denmark); Bassler, Niels, E-mail: bassler@phys.au.dk [Dept. of Physics and Astronomy, Aarhus University (Denmark)

    2015-03-15

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment has been applied. Furthermore, the Fermi–Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories that have been developed and that give better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of these new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. The best agreement is achieved by using the most recent antiproton collision cross sections together with the Fermi–Teller Z-law, even though experimental data indicate that the Z-law does not adequately describe annihilation on compounds. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.
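
    As a rough illustration of the Fermi–Teller Z-law discussed above, the sketch below takes the capture probability of a stopped antiproton on each constituent atom of a compound to be proportional to its atomic number Z. This is a minimal illustration of the Z-law itself, not the SHIELD-HIT12A implementation or the improved capture models mentioned in the abstract.

        # Minimal sketch of the Fermi-Teller Z-law: the probability that a stopped
        # negative projectile is captured by a given constituent atom of a compound
        # is taken proportional to its atomic number Z (per atom in the formula).
        # Illustration only; not the SHIELD-HIT12A implementation.

        def z_law_capture_fractions(composition):
            """composition maps atomic number Z -> number of atoms per molecule."""
            total = sum(z * n for z, n in composition.items())
            return {z: z * n / total for z, n in composition.items()}

        # Example: water, H2O (two H with Z = 1, one O with Z = 8)
        print(z_law_capture_fractions({1: 2, 8: 1}))   # {1: 0.2, 8: 0.8}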

  19. Determination of the 3γ fraction from positron annihilation in mesoporous materials for symmetry violation experiment with J-PET scanner

    CERN Document Server

    Jasińska, B; Wiertel, M; Zaleski, R; Alfs, D; Bednarski, T; Białas, P; Czerwiński, E; Dulski, K; Gajos, A; Głowacz, B; Kamińska, D; Kapłon, Ł; Korcyl, G; Kowalski, P; Kozik, T; Krzemień, W; Kubicz, E; Mohammed, M; Niedźwiecki, Sz; Pałka, M; Raczyński, L; Rudy, Z; Rundel, O; Sharma, N G; Silarski, M; Słomski, A; Strzelecki, A; Wieczorek, A; Wiślicki, W; Zgardzińska, B; Zieliński, M; Moskal, P

    2016-01-01

    Various mesoporous materials were investigated to choose the best material for experiments requiring a high yield of long-lived positronium. We found that the fraction of 3γ annihilation determined using γ-ray energy spectra and positron annihilation lifetime (PAL) spectra changed from 20% to 25%. The 3γ fraction and the o-Ps formation probability are found to be largest in the polymer XAD-4. Elemental analysis performed using scanning electron microscopy (SEM) equipped with energy-dispersive X-ray spectroscopy (EDS) shows the high purity of the investigated materials.

  20. Positron-molecule interactions: resonant attachment, annihilation, and bound states

    CERN Document Server

    Gribakin, G F; Surko, C M; 10.1103/RevModPhys.82.2557

    2010-01-01

    This article presents an overview of current understanding of the interaction of low-energy positrons with molecules with emphasis on resonances, positron attachment and annihilation. Annihilation rates measured as a function of positron energy reveal the presence of vibrational Feshbach resonances (VFR) for many polyatomic molecules. These resonances lead to strong enhancement of the annihilation rates. They also provide evidence that positrons bind to many molecular species. A quantitative theory of VFR-mediated attachment to small molecules is presented. It is tested successfully for selected molecules (e.g., methyl halides and methanol) where all modes couple to the positron continuum. Combination and overtone resonances are observed and their role is elucidated. In larger molecules, annihilation rates from VFR far exceed those explicable on the basis of single-mode resonances. These enhancements increase rapidly with the number of vibrational degrees of freedom. While the details are as yet unclear, intr...

  1. Coincidence Doppler Broadening of Positron Annihilation Radiation in Fe

    Science.gov (United States)

    do Nascimento, E.; Vanin, V. R.; Maidana, N. L.; Helene, O.

    2013-06-01

    We measured the Doppler broadening annihilation radiation spectrum in Fe, using 22NaCl as a positron source and two Ge detectors in a coincidence arrangement. The two-dimensional coincidence energy spectrum was fitted using a model function that included positron annihilation with the conduction band and 3d electrons, with 3s and 3p electrons, and in-flight positron annihilation. The detector response functions included backscattering and a combination of Compton, pulse pileup, ballistic deficit and shaping effects. The core-electron annihilation intensity was measured as 16.4(3)%, with almost all the remainder assigned to the less bound electrons. The obtained results are in agreement with published theoretical values.

  2. Aspects of meson spectroscopy with N N annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Dover, C.B.

    1990-04-01

    We focus on the potentialities of nucleon-antinucleon ({bar N}N) annihilation as a means of producing new mesonic states. The case for the existence of quasinuclear {bar N}N bound states is discussed in detail. Strong evidence for a 2{sup ++}(0{sup +}) state of this type has been obtained at LEAR in annihilation from the p-wave (L = 1) {bar N}N system, in support of earlier sightings of this object in L = 0 annihilation at Brookhaven. In the next generation of LEAR experiments, the emphasis shifts to the search for mesons containing dynamical excitations of the gluonic field, namely glueballs and hybrids (Q{bar Q}g). We discuss some features of the masses, decay branching ratios and production mechanisms for these states, and suggest particular {bar N}N annihilation channels which are optimal for their discovery. 59 refs., 15 figs.

  3. Positron scattering and annihilation on noble gas atoms

    CERN Document Server

    Green, D G; Gribakin, G F

    2014-01-01

    Positron scattering and annihilation on noble gas atoms below the positronium formation threshold is studied ab initio using many-body theory methods. The many-body theory provides a near-complete understanding of the positron-noble-gas-atom system at these energies and yields accurate numerical results. It accounts for positron-atom and electron-positron correlations, e.g., polarization of the atom by the incident positron and the non-perturbative process of virtual positronium formation. These correlations have a large effect on the scattering dynamics and result in a strong enhancement of the annihilation rates compared to the independent-particle mean-field description. Computed elastic scattering cross sections are found to be in good agreement with recent experimental results and Kohn variational and convergent close-coupling calculations. The calculated values of the annihilation rate parameter $Z_{\\rm eff}$ (effective number of electrons participating in annihilation) rise steeply along the sequence o...

  4. Neutrinos from WIMP annihilations in the Sun including neutrino oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Blennow, Mattias, E-mail: emb@kth.se [Department of Theoretical Physics, School of Engineering Sciences, Royal Institute of Technology (KTH) - AlbaNova University Center, SE-106 91 Stockholm (Sweden); Edsjoe, Joakim, E-mail: edsjo@physto.se [Department of Physics, Stockholm University - AlbaNova University Center, SE-106 91 Stockholm (Sweden); Ohlsson, Tommy, E-mail: tommy@theophys.kth.se [Department of Theoretical Physics, School of Engineering Sciences, Royal Institute of Technology (KTH) - AlbaNova University Center, SE-106 91 Stockholm (Sweden)

    2011-12-15

    The prospects to detect neutrinos from the Sun arising from dark matter annihilations in the core of the Sun are reviewed. Emphasis is placed on new work investigating the effects of neutrino oscillations on the expected neutrino fluxes.

  5. Baryon production in $e^{+}e^{-}$-annihilation at PETRA

    CERN Document Server

    Bartel, Wulfrin; Dittmann, P; Eichler, R; Felst, R; Haidt, Dieter; Krehbiel, H; Meier, K; Naroska, Beate; O'Neill, L H; Steffen, P; Wenninger, Horst; Zhang, Y; Elsen, E E; Helm, M; Petersen, A; Warming, P; Weber, G; Bethke, Siegfried; Drumm, H; Heintze, J; Heinzelmann, G; Hellenbrand, K H; Heuer, R D; Von Krogh, J; Lennert, P; Kawabata, S; Matsumura, H; Nozaki, T; Olsson, J; Rieseberg, H; Wagner, A; Bell, A; Foster, F; Hughes, G; Wriedt, H; Allison, J; Ball, A H; Bamford, G; Barlow, R; Bowdery, C K; Duerdoth, I P; Hassard, J F; King, B T; Loebinger, F K; MacBeth, A A; McCann, H; Mills, H E; Murphy, P G; Prosper, H B; Stephens, K; Clarke, D; Goddard, M C; Marshall, R; Pearce, G F; Kobayashi, T; Komamiya, S; Koshiba, M; Minowa, M; Nozaki, M; Orito, S; Sato, A; Suda, T; Takeda, H; Totsuka, Y; Watanabe, Y; Yamada, S; Yanagisawa, C

    1981-01-01

    Data on anti p and anti Λ production by e+e- annihilation at CM energies between 30 and 36 GeV are presented. An indication of an angular anticorrelation in events with baryon-antibaryon pairs is seen.

  6. Baryon production in e+e--annihilation at PETRA

    International Nuclear Information System (INIS)

    Data on anti p and anti Λ production by e+e- annihilation at CM energies between 30 and 36 GeV are presented. An indication of an angular anticorrelation in events with baryon-antibaryon pairs is seen. (orig.)

  7. DISCRETE AND CONTINUOUS METHODS FOR MODELING FINANCIAL SERIES YIELDING STOCHASTIC VOLATILITY PROBABILITY DENSITY

    Directory of Open Access Journals (Sweden)

    Carlos Alexánder Grajales Correa

    2007-07-01

    This work considers the daily returns of a financial asset in order to model and compare the probability density of the stochastic volatility of those returns. To this end, ARCH models and their extensions, which are formulated in discrete time, are proposed, as well as an empirical stochastic volatility model developed by Paul Wilmott. For the discrete case, models are shown that estimate the conditional heteroscedastic volatility at a time t, t∈[1,T]. In the continuous case, an Itô diffusion process is associated with the stochastic volatility of the financial series, which makes it possible to discretize the process and simulate it to obtain empirical probability densities of the volatility. Finally, the results obtained with these methodologies are illustrated and compared for the S&P 500 series of the USA, the Index of Prices and Quotations of the Mexican Stock Exchange (IPC) and the IGBC of Colombia.
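
    As a sketch of the continuous-time approach described above, the following discretizes a simple mean-reverting stochastic differential equation for the volatility with the Euler-Maruyama scheme and histograms the simulated values to obtain an empirical probability density. The drift and diffusion coefficients are illustrative assumptions, not Wilmott's empirical model or the ARCH specifications of the paper.

        import numpy as np

        # Euler-Maruyama discretization of an assumed mean-reverting volatility SDE,
        #   d(sigma) = kappa*(theta - sigma) dt + xi*sigma dW,
        # simulated over many paths; the terminal values are histogrammed to give an
        # empirical probability density of the volatility. Coefficients are illustrative.

        rng = np.random.default_rng(0)
        kappa, theta, xi = 2.0, 0.2, 0.3          # reversion speed, long-run vol, vol-of-vol
        dt, n_steps, n_paths = 1.0 / 252, 2520, 5000

        sigma = np.full(n_paths, theta)
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)
            sigma += kappa * (theta - sigma) * dt + xi * sigma * dW
            sigma = np.maximum(sigma, 1e-6)       # keep volatility positive

        density, edges = np.histogram(sigma, bins=50, density=True)   # empirical p.d.f.
        print("mean simulated volatility:", sigma.mean())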

  8. Seismic Analysis of Large-scale Aqueduct Structures Based on the Probability Density Evolution Method

    Institute of Scientific and Technical Information of China (English)

    曾波; 邢彦富; 刘章军

    2014-01-01

    Using the orthogonal expansion method for random processes, the non-stationary seismic acceleration process is represented as a linear combination of standard orthogonal basis functions and standard orthogonal random variables. Then, using the random function approach, the standard orthogonal random variables in the expansion are expressed as orthogonal functions of a basic random variable, so that the original ground-motion process is expressed through a single basic random variable. The orthogonal expansion-random function approach was used to generate 126 representative earthquake samples, each assigned a given probability. These representative samples were combined with the probability density evolution method of stochastic dynamical systems to investigate the random seismic response of large-scale aqueduct structures. Four cases were considered: aqueduct without water, aqueduct with water in the central trough, aqueduct with water in two side troughs, and aqueduct with water in all three troughs; probability information on the seismic responses was obtained for each case. Moreover, using the proposed method, the seismic reliability of the aqueduct structure was efficiently calculated. This approach provides a new and effective means for refined seismic analysis of large-scale aqueduct structures.
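
    The sketch below illustrates the first step described above: representing a zero-mean random process as a finite linear combination of orthonormal basis functions with random coefficients. The cosine basis and the independent standard normal coefficients are illustrative assumptions; the paper's further step of expressing the coefficients through a single basic random variable (the random-function model) is not reproduced here.

        import numpy as np

        # Minimal sketch: represent a zero-mean random process on [0, T] as a finite
        # linear combination of orthonormal basis functions with random coefficients,
        #   a(t) ~= sum_k  X_k * sqrt(2/T) * cos(k*pi*t/T).
        # The X_k are taken as independent standard normals here; the paper instead
        # expresses them as orthogonal functions of a single basic random variable.

        rng = np.random.default_rng(1)
        T, n_points, n_terms, n_samples = 20.0, 400, 50, 126
        t = np.linspace(0.0, T, n_points)
        basis = np.array([np.sqrt(2.0 / T) * np.cos(k * np.pi * t / T)
                          for k in range(1, n_terms + 1)])        # (n_terms, n_points)

        X = rng.normal(size=(n_samples, n_terms))                  # random coefficients
        samples = X @ basis                                        # representative processes
        print(samples.shape)                                       # (126, 400)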

  9. The Effect of Black Holes in Local Dwarf Spheroidal Galaxies on Gamma-Ray Constraints on Dark Matter Annihilation

    CERN Document Server

    Gonzalez-Morales, Alma X; Queiroz, Farinaldo S

    2014-01-01

    The recent evidence for black holes of intermediate mass in dwarf galaxies motivates the assessment of the resulting effect on the host dark matter density profile, and the consequences for the constraints on the plane of the dark matter annihilation cross section versus mass, stemming from the non-observation of gamma rays from local dwarf spheroidals with the Fermi Large Area Telescope. We compute the density profile using three different prescriptions for the black hole mass associated with a given dwarf galaxy, and taking into account the cutoff to the density from dark matter pair-annihilation. We find that the limits on the dark matter annihilation rate from observations of individual dwarfs are enhanced by factors of a few up to $10^6$, depending on the specific galaxy, on the black hole mass prescription, and on the dark matter particle mass. We estimate limits from combined observations of a sample of 15 dwarfs, for a variety of assumptions on the dwarf black hole mass and on the dark matter density ...

  10. Polymerization of epoxy resins studied by positron annihilation

    International Nuclear Information System (INIS)

    The polymerization process of epoxy resins (bisphenol-A dicyanate) was studied using positron-annihilation spectroscopy. The polymerization from monomer to polymer was followed by positron-annihilation lifetime spectroscopy measurements. Resins kept at curing temperatures (120, 150 and 200 °C) changed from powder to solid via a liquid phase. The size of the intermolecular spaces in the solid samples increased with the progress of polymerization. (author)

  11. Effects of Bound States on Dark Matter Annihilation

    OpenAIRE

    An, Haipeng; Wise, Mark B.; Zhang, Yue

    2016-01-01

    We study the impact of bound state formation on dark matter annihilation rates in models where dark matter interacts via a light mediator, the dark photon. We derive the general cross section for radiative capture into all possible bound states, and point out its non-trivial dependence on the dark matter velocity and the dark photon mass. For indirect detection, our result shows that dark matter annihilation inside bound states can play an important role in enhancing signal rates over the rat...

  12. Breit-Wigner Enhancement of Dark Matter Annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Ibe, Masahiro; /SLAC; Murayama, Hitoshi; /Tokyo U., IPMU /UC, Berkeley /LBL, Berkeley; Yanagida, T.T.; /Tokyo U. /Tokyo U., IPMU

    2009-06-19

    We point out that annihilation of dark matter in the galactic halo can be enhanced relative to that in the early universe due to a Breit-Wigner tail, if the dark matter annihilates through a pole just below the threshold. This provides a new explanation for the 'boost factor' suggested by the recent data of the PAMELA, ATIC and PPB-BETS cosmic-ray experiments.

  13. Raman Cooling of Solids through Photonic Density of States Engineering

    CERN Document Server

    Chen, Yin-Chung

    2015-01-01

    The laser cooling of vibrational states of solids has been achieved through photoluminescence in rare-earth elements, optical forces in optomechanics, and the Brillouin scattering light-sound interaction. The net cooling of solids through spontaneous Raman scattering, and laser refrigeration of indirect band gap semiconductors, both remain unsolved challenges. Here, we analytically show that photonic density of states (DoS) engineering can address the two fundamental requirements for achieving spontaneous Raman cooling: suppressing the dominance of Stokes (heating) transitions, and the enhancement of anti-Stokes (cooling) efficiency beyond the natural optical absorption of the material. We develop a general model for the DoS modification to spontaneous Raman scattering probabilities, and elucidate the necessary and minimum condition required for achieving net Raman cooling. With a suitably engineered DoS, we establish the enticing possibility of refrigeration of intrinsic silicon by annihilating phonons from ...

  14. Gaussian mixture particle probability hypothesis density based passive bearings-only multi-target tracking

    Institute of Scientific and Technical Information of China (English)

    张俊根; 姬红兵

    2011-01-01

    When the number of targets is unknown or varies with time, the multi-target state and the measurements are represented as random sets, and the multi-target tracking problem is addressed by recursively calculating the probability hypothesis density (PHD) of the joint distribution. However, the PHD does not admit a closed-form solution for the nonlinear problem that arises in passive bearings-only multi-target tracking. A new Gaussian mixture particle PHD (GMPPHD) filter is presented in this paper. The PHD is approximated by a mixture of Gaussians, which avoids clustering to determine target states, and a quasi-Monte Carlo (QMC) integration method is used to approximate the prediction and update distributions of the target states. Simulation results show the effectiveness of the proposed algorithm.
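
    For reference, the sketch below shows the measurement-update step of the standard linear-Gaussian Gaussian-mixture PHD filter, in which the PHD is a weighted sum of Gaussians and the sum of the updated weights estimates the number of targets. It is a generic sketch with an assumed linear observation model, not the bearings-only quasi-Monte Carlo particle implementation proposed in the paper.

        import numpy as np

        # Measurement update of a linear-Gaussian GM-PHD filter. The predicted PHD is
        # sum_i w_i N(x; m_i, P_i); each measurement z spawns Kalman-updated components
        # whose weights are normalized against the clutter intensity. Illustrative
        # sketch only; pruning/merging of components would normally follow.

        def gmphd_update(weights, means, covs, measurements, H, R, p_detect, clutter_intensity):
            new_w, new_m, new_P = [], [], []
            # missed-detection terms
            for w, m, P in zip(weights, means, covs):
                new_w.append((1.0 - p_detect) * w)
                new_m.append(m)
                new_P.append(P)
            # measurement-updated terms
            for z in measurements:
                comp_w, comp_m, comp_P = [], [], []
                for w, m, P in zip(weights, means, covs):
                    S = H @ P @ H.T + R                       # innovation covariance
                    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
                    resid = z - H @ m
                    norm_const = np.sqrt(np.linalg.det(2.0 * np.pi * S))
                    likelihood = np.exp(-0.5 * resid @ np.linalg.solve(S, resid)) / norm_const
                    comp_w.append(p_detect * w * likelihood)
                    comp_m.append(m + K @ resid)
                    comp_P.append((np.eye(len(m)) - K @ H) @ P)
                norm = clutter_intensity + sum(comp_w)
                new_w += [cw / norm for cw in comp_w]
                new_m += comp_m
                new_P += comp_P
            return new_w, new_m, new_P   # expected target number = sum(new_w)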

  15. The Characterization of the Gamma-Ray Signal from the Central Milky Way: A Compelling Case for Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Daylan, Tansu [Harvard Univ., Cambridge, MA (United States); Finkbeiner, Douglas P. [Harvard-Smithsonian Center, Cambridge, MA (United States); Hooper, Dan [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Linden, Tim [Univ. of Illinois at Chicago, Chicago, IL (United States); Portillo, Stephen K. N. [Harvard-Smithsonian Center, Cambridge, MA (United States); Rodd, Nicholas L. [Massachusetts Institute of Technology, Boston, MA (United States); Slatyer, Tracy R. [Institute for Advanced Study, Princeton, NJ (United States)

    2014-02-26

    Past studies have identified a spatially extended excess of ~1-3 GeV gamma rays from the region surrounding the Galactic Center, consistent with the emission expected from annihilating dark matter. We revisit and scrutinize this signal with the intention of further constraining its characteristics and origin. By applying cuts to the Fermi event parameter CTBCORE, we suppress the tails of the point spread function and generate high resolution gamma-ray maps, enabling us to more easily separate the various gamma-ray components. Within these maps, we find the GeV excess to be robust and highly statistically significant, with a spectrum, angular distribution, and overall normalization that is in good agreement with that predicted by simple annihilating dark matter models. For example, the signal is very well fit by a 31-40 GeV dark matter particle annihilating to b quarks with an annihilation cross section of sigma v = (1.4-2.0) x 10^-26 cm^3/s (normalized to a local dark matter density of 0.3 GeV/cm^3). Furthermore, we confirm that the angular distribution of the excess is approximately spherically symmetric and centered around the dynamical center of the Milky Way (within ~0.05 degrees of Sgr A*), showing no sign of elongation along or perpendicular to the Galactic Plane. The signal is observed to extend to at least 10 degrees from the Galactic Center, disfavoring the possibility that this emission originates from millisecond pulsars.

  16. The Characterization of the Gamma-Ray Signal from the Central Milky Way: A Compelling Case for Annihilating Dark Matter

    CERN Document Server

    Daylan, Tansu; Hooper, Dan; Linden, Tim; Portillo, Stephen K N; Rodd, Nicholas L; Slatyer, Tracy R

    2014-01-01

    Past studies have identified a spatially extended excess of ~1-3 GeV gamma rays from the region surrounding the Galactic Center, consistent with the emission expected from annihilating dark matter. We revisit and scrutinize this signal with the intention of further constraining its characteristics and origin. By applying cuts to the Fermi event parameter CTBCORE, we suppress the tails of the point spread function and generate high resolution gamma-ray maps, enabling us to more easily separate the various gamma-ray components. Within these maps, we find the GeV excess to be robust and highly statistically significant, with a spectrum, angular distribution, and overall normalization that is in good agreement with that predicted by simple annihilating dark matter models. For example, the signal is very well fit by a 31-40 GeV dark matter particle annihilating to b quarks with an annihilation cross section of sigma v = (1.4-2.0) x 10^-26 cm^3/s (normalized to a local dark matter density of 0.3 GeV/cm^3). Furtherm...

  17. Search for Dark Matter Annihilation Signals from the Fornax Galaxy Cluster with H.E.S.S

    CERN Document Server

    Abramowski, A; Aharonian, F; Akhperjanian, A G; Anton, G; Balzer, A; Barnacka, A; de Almeida, U Barres; Becherini, Y; Becker, J; Behera, B; Bernlöhr, K; Birsin, E; Biteau, J; Bochow, A; Boisson, C; Bolmont, J; Bordas, P; Brucker, J; Brun, F; Brun, P; Bulik, T; Büsching, I; Carrigan, S; Casanova, S; Cerruti, M; Chadwick, P M; Charbonnier, A; Chaves, R C G; Cheesebrough, A; Clapson, A C; Coignet, G; Cologna, G; Conrad, J; Dalton, M; Daniel, M K; Davids, I D; Degrange, B; Deil, C; Dickinson, H J; Djannati-Ataï, A; Domainko, W; Drury, L O'C; Dubus, G; Dutson, K; Dyks, J; Dyrda, M; Egberts, K; Eger, P; Espigat, P; Fallon, L; Farnier, C; Fegan, S; Feinstein, F; Fernandes, M V; Fiasson, A; Fontaine, G; Förster, A; Füßling, M; Gallant, Y A; Gast, H; Gérard, L; Gerbig, D; Giebels, B; Glicenstein, J F; Glück, B; Goret, P; Göring, D; Häffner, S; Hague, J D; Hampf, D; Hauser, M; Heinz, S; Heinzelmann, G; Henri, G; Hermann, G; Hinton, J A; Hoffmann, A; Hofmann, W; Hofverberg, P; Holler, M; Horns, D; Jacholkowska, A; de Jager, O C; Jahn, C; Jamrozy, M; Jung, I; Kastendieck, M A; Katarzyński, K; Katz, U; Kaufmann, S; Keogh, D; Khangulyan, D; Khélifi, B; Klochkov, D; Kluźniak, W; Kneiske, T; Komin, Nu; Kosack, K; Kossakowski, R; Laffon, H; Lamanna, G; Lennarz, D; Lohse, T; Lopatin, A; Lu, C -C; Marandon, V; Marcowith, A; Masbou, J; Maurin, D; Maxted, N; Mayer, M; McComb, T J L; Medina, M C; Méhault, J; Moderski, R; Moulin, E; Naumann, C L; Naumann-Godo, M; de Naurois, M; Nedbal, D; Nekrassov, D; Nguyen, N; Nicholas, B; Niemiec, J; Nolan, S J; Ohm, S; Wilhelmi, E de Oña; Opitz, B; Ostrowski, M; Oya, I; Panter, M; Arribas, M Paz; Pedaletti1, G; Pelletier, G; Petrucci, P -O; Pita, S; Pühlhofer, G; Punch, M; Quirrenbach, A; Raue, M; Rayner, S M; Reimer, A; Reimer, O; Renaud, M; Reyes, R de los; Rieger, F; Ripken, J; Rob, L; Rosier-Lees, S; Rowell, G; Rudak, B; Rulten, C B; Ruppel, J; Sahakian, V; Sanchez, D A; Santangelo, A; Schlickeiser, R; Schöck, F M; Schulz, A; Schwanke, U; Schwarzburg, S; Schwemmer, S; Sheidaei, F; Skilton, J L; Sol, H; Spengler, G; Stawarz, Ł; Steenkamp, R; Stegmann, C; Stinzing, F; Stycz, K; Sushch, I; Szostek, A; Tavernet, J -P; Terrier, R; Tluczykont, M; Valerius, K; van Eldik, C; Vasileiadis, G; Venter, C; Vialle, J P; Viana, A; Vincent, P; Völk, H J; Volpe, F; Vorobiov, S; Vorster, M; Wagner, S J; Ward, M; White, R; Wierzcholska, A; Zacharias, M; Zajczyk, A; Zdziarski, A A; Zech, A; Zechlin, H -S

    2012-01-01

    The Fornax galaxy cluster was observed with the High Energy Stereoscopic System (H.E.S.S.) for a total live time of 14.5 hours, searching for very-high-energy (VHE, E>100 GeV) gamma-rays from dark matter (DM) annihilation. No significant signal was found in searches for point-like and extended emissions. Using several models of the DM density distribution, upper limits on the DM velocity-weighted annihilation cross-section as a function of the DM particle mass are derived. Constraints are derived for different DM particle models, such as those arising from Kaluza-Klein and supersymmetric models. Various annihilation final states are considered. Possible enhancements of the DM annihilation gamma-ray flux, due to DM substructures of the DM host halo, or from the Sommerfeld effect, are studied. Additional gamma-ray contributions from internal bremsstrahlung and inverse Compton radiation are also discussed. For a DM particle mass of 1 TeV, the exclusion limits at 95% of confidence level reach values of ~ 10^-23...

  18. The effect of the nuclear Coulomb field on atomic ionization at positron-electron annihilation in β+- decay

    Directory of Open Access Journals (Sweden)

    Fedotkin Sergey

    2015-01-01

    We consider the process in which a positron emitted in β+ decay annihilates with a K-electron of the daughter atom. Part of the energy released in this process is transferred to another K-electron, which leaves the atom. The influence of the Coulomb field on the positron and the ejected electron is taken into account. The probability of this process is calculated for an atom with arbitrary Z. For the Ti nucleus, the Coulomb field substantially increases the probability of the considered process.

  19. Sommerfeld enhancement of DM annihilation: resonance structure, freeze-out and CMB spectral bound

    DEFF Research Database (Denmark)

    Hannestad, Steen; Bülow, Thomas Tram

    2011-01-01

    In the last few years there has been some interest in WIMP Dark Matter models featuring a velocity dependent cross section through the Sommerfeld enhancement mechanism, which is a non-relativistic effect due to massive bosons in the dark sector. In the first part of this article, we find analytic... In the second part of the article we perform a detailed computation of the Dark Matter relic density for models having Sommerfeld enhancement by solving the Boltzmann equation numerically. We calculate the expected distortions of the CMB blackbody spectrum from WIMP annihilations and compare these to the bounds...
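
    As a point of reference for the enhancement mechanism discussed above, the sketch below evaluates the Sommerfeld factor in the Coulomb (massless-mediator) limit, S(v) = (pi*alpha/v)/(1 - exp(-pi*alpha/v)), with v the velocity of each annihilating particle in the centre-of-mass frame (conventions differ). For the massive dark-sector bosons considered in the article the enhancement saturates and develops the resonance structure analysed there; this closed form is only the limiting case and the coupling value is an illustrative assumption.

        import math

        # Sommerfeld enhancement in the Coulomb (massless-mediator) limit:
        #   S(v) = (pi*alpha/v) / (1 - exp(-pi*alpha/v)),
        # with v in units of c. For a massive mediator the enhancement saturates and
        # shows resonances; this is only the limiting behaviour, for illustration.

        def sommerfeld_coulomb(alpha, v):
            x = math.pi * alpha / v
            return x / (1.0 - math.exp(-x))

        for v in (0.3, 1e-1, 1e-2, 1e-3):   # freeze-out vs. present-day halo velocities
            print(f"v = {v:7.3g}   S = {sommerfeld_coulomb(0.01, v):8.2f}")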

  20. Annihilation of antiproton on deuteron at rest

    International Nuclear Information System (INIS)

    The system of Faddeev equations for the amplitudes of the anti pD interaction at rest, accounting for higher partial anti NN waves, is derived. From its solution the total and elastic anti pD cross sections are calculated. Predictions for the missing-mass spectrum in the anti pD annihilation are made. The P-wave anti NN states give a small contribution to the anti pD cross section at rest, the theoretical value of the latter being less than the experimental cross section extrapolated to the threshold. Let us emphasize that the total anti pD cross section, while depending weakly on the radii of the anti NN interactions, is sensitive to the values of the anti NN scattering lengths. Experimental data for anti pD cross sections at rest can be obtained only by an extrapolation procedure. Hence it is very important to investigate anti pD interactions at low but non-zero momenta, where a direct comparison with experiment is possible

  1. Positron Annihilation in Medical Substances of Insulin

    Science.gov (United States)

    Pietrzak, R.; Szatanik, R.

    2005-05-01

    Positron lifetimes were measured in medical substances of insulin (human and animal), differing in their degree of purity and in the duration of their activity in the organism. In all cases the positron lifetime spectrum was decomposed into three components, with the long-lived component ranging from 1.8 to 2.08 ns and its intensity taking values from 18 to 24%. Making use of the Tao-Eldrup model, the average radius of the free volume in which o-Ps annihilated and the degree of filling of that volume were determined. It was found that the value of the long-lived component for human insulin is higher than that for animal insulin. Moreover, the value of this component clearly depends on the manner of purification of the insulin. A correlation was also noticed between the value of this component and the time after which the insulin becomes active in the organism, as well as the total time of its activity.
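
    For orientation, the sketch below inverts the standard Tao-Eldrup relation between the ortho-positronium pick-off lifetime and the radius of a spherical free volume, using the customary electron-layer thickness of 0.166 nm, for the long-lived components quoted above. This is the textbook form of the model, given as an illustration rather than the authors' exact analysis.

        import math

        # Standard Tao-Eldrup relation between the o-Ps pick-off lifetime tau (ns)
        # and the spherical free-volume radius R (nm):
        #   1/tau = 2 * [ 1 - R/(R + dR) + sin(2*pi*R/(R + dR)) / (2*pi) ],  dR = 0.166 nm.
        # Solved for R by simple bisection; illustrative only.

        DELTA_R = 0.166   # nm, empirical electron-layer thickness

        def tau_from_radius(R):
            x = R / (R + DELTA_R)
            return 1.0 / (2.0 * (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi)))

        def radius_from_tau(tau, lo=0.01, hi=2.0, tol=1e-6):
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if tau_from_radius(mid) < tau:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        for tau in (1.8, 2.08):   # long-lived components quoted in the abstract, ns
            print(f"tau = {tau} ns  ->  R ~ {radius_from_tau(tau):.3f} nm")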

  2. On the Annihilation Rate of WIMPs

    CERN Document Server

    Baumgart, Matthew; Vaidya, Varun

    2014-01-01

    We develop a formalism that allows one to systematically calculate the WIMP annihilation rate into gamma rays whose energy far exceeds the weak scale. A factorization theorem is presented which separates the radiative corrections stemming from initial state potential interactions from loops involving the final state. This separation allows us to go beyond the fixed-order calculation, which is polluted by large infrared logarithms. For the case of Majorana WIMPs transforming in the adjoint representation of SU(2), we present the result for the resummed rate at leading double-log accuracy in terms of two initial state partial wave matrix elements and one hard matching coefficient. For a given model, one may calculate the cross section by computing the tree level matching coefficient and determining the value of a local four fermion operator. We find that the effects of resummation can be as large as 100% for a 20 TeV WIMP. The generalization of the formalism to other types of WIMPs is discussed.

  3. Positron annihilation in medical substances of insulin

    International Nuclear Information System (INIS)

    Positron lifetimes were measured in medical substances of insulin (human and animal), differing in their degree of purity and in the duration of their activity in the organism. In all cases the positron lifetime spectrum was decomposed into three components, with the long-lived component ranging from 1.8 to 2.08 ns and its intensity taking values from 18% to 24%. Making use of the Tao-Eldrup model, the average radius of the free volume in which o-Ps annihilated and the degree of filling of that volume were determined. It was found that the value of the long-lived component for human insulin is higher than that for animal insulin. Moreover, the value of this component clearly depends on the manner of purification of the insulin. A correlation was also noticed between the value of this component and the time after which the insulin becomes active in the organism, as well as the total time of its activity. (author)

  4. Antimatter annihilation detection with AEgIS

    CERN Document Server

    Gligorova, Angela

    2015-01-01

    AEgIS (Antimatter Experiment: Gravity, Interferometry, Spectroscopy) is an antimatter experiment based at CERN, whose primary goal is to carry out the first direct measurement of the Earth's gravitational acceleration on antimatter. A precise measurement of antimatter gravity would be the first precision test of the Weak Equivalence Principle for antimatter. The principle of the experiment is based on the formation of antihydrogen through a charge exchange reaction between laser excited (Rydberg) positronium and ultra-cold antiprotons. The antihydrogen atoms will be accelerated by an inhomogeneous electric field (Stark acceleration) to form a pulsed cold beam. The free fall of the antihydrogen due to Earth's gravity will be measured using a moiré deflectometer and a hybrid position detector. This detector is foreseen to consist of an active silicon part, where the annihilation of antihydrogen takes place, followed by an emulsion part coupled to a fiber time-of-flight detector. This overview prese...

  5. Elements of probability theory

    CERN Document Server

    Rumshiskii, L Z

    1965-01-01

    Elements of Probability Theory presents the methods of the theory of probability. This book is divided into seven chapters that discuss the general rule for the multiplication of probabilities, the fundamental properties of the subject matter, and the classical definition of probability. The introductory chapters deal with the functions of random variables; continuous random variables; numerical characteristics of probability distributions; center of the probability distribution of a random variable; definition of the law of large numbers; stability of the sample mean and the method of moments

  6. Evaluating probability forecasts

    CERN Document Server

    Lai, Tze Leung; Shen, David Bo; 10.1214/11-AOS902

    2012-01-01

    Probability forecasts of events are routinely used in climate predictions, in forecasting default probabilities on bank loans or in estimating the probability of a patient's positive response to treatment. Scoring rules have long been used to assess the efficacy of the forecast probabilities after observing the occurrence, or nonoccurrence, of the predicted events. We develop herein a statistical theory for scoring rules and propose an alternative approach to the evaluation of probability forecasts. This approach uses loss functions relating the predicted to the actual probabilities of the events and applies martingale theory to exploit the temporal structure between the forecast and the subsequent occurrence or nonoccurrence of the event.
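
    As a concrete example of the classical scoring rules mentioned above, the sketch below computes the Brier score, the mean squared difference between forecast probabilities and observed binary outcomes. The numbers are illustrative; the martingale-based evaluation proposed in the paper is not reproduced here.

        # The Brier score is a strictly proper scoring rule for probability forecasts
        # of binary events: the mean squared difference between the forecast probability
        # and the observed outcome (1 if the event occurred, else 0). Lower is better.

        def brier_score(forecasts, outcomes):
            return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

        forecasts = [0.9, 0.7, 0.2, 0.5]    # predicted probabilities of the event
        outcomes  = [1,   1,   0,   1]      # what actually happened
        print(brier_score(forecasts, outcomes))   # 0.0975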

  7. Non-Archimedean Probability

    CERN Document Server

    Benci, Vieri; Wenmackers, Sylvia

    2011-01-01

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and zero- and unit-probability events pose no particular epistemological problems. We use a non-Archimedean field as the range of the probability function. As a result, the property of countable additivity in Kolmogorov's axiomatization of probability is replaced by a different type of infinite additivity.

  8. Positron annihilation study of spin transition in metal complex: [Fe(Phen)2(NCS)2] (Paper No. HF-04)

    International Nuclear Information System (INIS)

    In this paper, the use of the Doppler-broadened annihilation lineshape as a technique to probe the spin transition in the classical spin-transition complex Fe(Phen)2(NCS)2 in the solid state is demonstrated. This technique is simple to apply compared with other established techniques such as Mössbauer spectroscopy, susceptibility measurements, ESR, etc. In addition, it provides correlated information on the local electron density and momentum distribution. (author)
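
    For context, the sketch below computes the lineshape (S) parameter commonly extracted from Doppler-broadened annihilation spectra: the fraction of counts in a narrow window centred on 511 keV relative to the whole line. The window width and the toy spectrum are illustrative assumptions, not the analysis performed in the paper.

        import numpy as np

        # The S (shape) parameter for a Doppler-broadened annihilation lineshape:
        # counts in a narrow window centred on 511 keV divided by the total counts in
        # the line. A lower S indicates a broader line (more annihilation with
        # high-momentum core electrons). The window half-width below is illustrative.

        def s_parameter(energies_kev, counts, center=511.0, half_width=0.8):
            energies_kev, counts = np.asarray(energies_kev), np.asarray(counts)
            central = np.abs(energies_kev - center) <= half_width
            return counts[central].sum() / counts.sum()

        # toy spectrum: Gaussian line on the detector energy grid
        e = np.linspace(505.0, 517.0, 241)
        c = np.exp(-0.5 * ((e - 511.0) / 1.2) ** 2)
        print(round(s_parameter(e, c), 3))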

  9. Predicting the neutralino relic density in the MSSM more precisely

    CERN Document Server

    Harz, Julia; Klasen, Michael; Kovařík, Karol; Steppeler, Patrick

    2016-01-01

    Since the dark matter relic density is a powerful observable for constraining models of new physics, recent experimental progress calls for more precise theoretical predictions. On the particle physics side, improvements are to be made in the calculation of the (co)annihilation cross-section of the dark matter particle. We present the project DM@NLO, which aims at calculating the neutralino (co)annihilation cross-section in the MSSM including radiative corrections in QCD. In the present document, we briefly review selected results for different (co)annihilation processes. We then discuss the estimation of the associated theory uncertainty obtained by varying the renormalization scale. Finally, perspectives are discussed.

  10. Probability on real Lie algebras

    CERN Document Server

    Franz, Uwe

    2016-01-01

    This monograph is a progressive introduction to non-commutativity in probability theory, summarizing and synthesizing recent results about classical and quantum stochastic processes on Lie algebras. In the early chapters, focus is placed on concrete examples of the links between algebraic relations and the moments of probability distributions. The subsequent chapters are more advanced and deal with Wigner densities for non-commutative couples of random variables, non-commutative stochastic processes with independent increments (quantum Lévy processes), and the quantum Malliavin calculus. This book will appeal to advanced undergraduate and graduate students interested in the relations between algebra, probability, and quantum theory. It also addresses a more advanced audience by covering other topics related to non-commutativity in stochastic calculus, Lévy processes, and the Malliavin calculus.

  11. Introduction to probability

    CERN Document Server

    Roussas, George G

    2006-01-01

    Roussas's Introduction to Probability features exceptionally clear explanations of the mathematics of probability theory and explores its diverse applications through numerous interesting and motivational examples. It provides a thorough introduction to the subject for professionals and advanced students taking their first course in probability. The content is based on the introductory chapters of Roussas's book, An Intoduction to Probability and Statistical Inference, with additional chapters and revisions. Written by a well-respected author known for great exposition an

  12. Interpretations of probability

    CERN Document Server

    Khrennikov, Andrei

    2009-01-01

    This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.

  13. Non-Archimedean Probability

    NARCIS (Netherlands)

    Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

    2013-01-01

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned prob

  14. Dependent Probability Spaces

    Science.gov (United States)

    Edwards, William F.; Shiflett, Ray C.; Shultz, Harris

    2008-01-01

    The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…

  15. Laboratory-Tutorial activities for teaching probability

    CERN Document Server

    Wittmann, M C; Morgan, J T; Feeley, Roger E.; Morgan, Jeffrey T.; Wittmann, Michael C.

    2006-01-01

    We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called Intuitive Quantum Physics. Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We describe a set of activities used to teach concepts of probability and probability density. Rudimentary knowledge of mechanics is needed for one activity, but otherwise the material requires no additional preparation. Extensions of the activities include relating probability density to potential energy graphs for certain "touchstone" examples. Students have difficulties learning the target concepts, such as comparing the ratio of time in a region to total time in all regions. Instead, they often focus on edge effects, pattern match to previously studied situations, reason about necessary but incomplete macroscopic elements of the system, use the gambler's fallacy, and use expectati...
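
    The target concept mentioned above, comparing the time spent in a region to the total time, can be made concrete for a classical harmonic oscillator: sampling the trajectory uniformly in time and histogramming the positions reproduces the classical probability density P(x) = 1/(pi*sqrt(A^2 - x^2)). The sketch below is an illustration under assumed parameters, not part of the course materials described in the abstract.

        import numpy as np

        # "Time in a region / total time" for a classical harmonic oscillator:
        # sample x(t) = A*sin(w*t) uniformly in time over one period and histogram the
        # positions. The resulting density approaches P(x) = 1/(pi*sqrt(A^2 - x^2)),
        # largest near the turning points where the particle moves slowly.

        A, w = 1.0, 2.0 * np.pi                                   # amplitude, angular frequency
        t = np.linspace(0.0, 1.0, 200_000, endpoint=False)        # one full period
        x = A * np.sin(w * t)

        hist, edges = np.histogram(x, bins=40, range=(-A, A), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        analytic = 1.0 / (np.pi * np.sqrt(A**2 - centers**2))
        print(np.max(np.abs(hist - analytic)[5:-5]))   # small away from the turning points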

  16. Philosophy and probability

    CERN Document Server

    Childers, Timothy

    2013-01-01

    Probability is increasingly important for our understanding of the world. What is probability? How do we model it, and how do we use it? Timothy Childers presents a lively introduction to the foundations of probability and to philosophical issues it raises. He keeps technicalities to a minimum, and assumes no prior knowledge of the subject. He explains the main interpretations of probability-frequentist, propensity, classical, Bayesian, and objective Bayesian-and uses stimulating examples to bring the subject to life. All students of philosophy will benefit from an understanding of probability,

  17. Introduction to probability models

    CERN Document Server

    Ross, Sheldon M

    2006-01-01

    Introduction to Probability Models, Tenth Edition, provides an introduction to elementary probability theory and stochastic processes. There are two approaches to the study of probability theory. One is heuristic and nonrigorous, and attempts to develop in students an intuitive feel for the subject that enables him or her to think probabilistically. The other approach attempts a rigorous development of probability by using the tools of measure theory. The first approach is employed in this text. The book begins by introducing basic concepts of probability theory, such as the random v

  18. Probability and Quantum Paradigms: the Interplay

    Science.gov (United States)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non-Boolean structure and non-positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken: a variant interpretation of wave functions based on photo-detection physics is proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  19. Molecular contingencies: reinforcement probability.

    Science.gov (United States)

    Hale, J M; Shimp, C P

    1975-11-01

    Pigeons obtained food by responding in a discrete-trials two-choice probability-learning experiment involving temporal stimuli. A given response alternative, a left- or right-key peck, had 11 associated reinforcement probabilities within each session. Reinforcement probability for a choice was an increasing or a decreasing function of the time interval immediately preceding the choice. The 11 equiprobable temporal stimuli ranged from 1 to 11 sec in 1-sec classes. Preference tended to deviate from probability matching in the direction of maximizing; i.e., the percentage of choices of the preferred response alternative tended to exceed the probability of reinforcement for that alternative. This result was qualitatively consistent with probability-learning experiments using visual stimuli. The result is consistent with a molecular analysis of operant behavior and poses a difficulty for molar theories holding that local variations in reinforcement probability may safely be disregarded in the analysis of behavior maintained by operant paradigms. PMID:16811883
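
    To make the matching-versus-maximizing distinction above concrete, the sketch below compares the expected reinforcement rate of a strategy that chooses each alternative in proportion to its reinforcement probability with one that always chooses the richer alternative. The reinforcement probabilities are illustrative and do not reproduce the temporal schedule used in the experiment.

        import random

        # Expected payoff of two strategies in a two-choice probability-learning task:
        # "matching" chooses an alternative with the same probability that it is
        # reinforced; "maximizing" always chooses the richer alternative. The numbers
        # are illustrative, not the schedule used in the experiment.

        def simulate(p_reinforce, choose, trials=100_000, seed=0):
            rng = random.Random(seed)
            wins = 0
            for _ in range(trials):
                choice = choose(rng, p_reinforce)
                wins += rng.random() < p_reinforce[choice]
            return wins / trials

        p = (0.7, 0.3)   # reinforcement probabilities for the two keys
        matching   = lambda rng, p: 0 if rng.random() < p[0] / (p[0] + p[1]) else 1
        maximizing = lambda rng, p: 0 if p[0] >= p[1] else 1

        print("matching  :", simulate(p, matching))     # ~ 0.7*0.7 + 0.3*0.3 = 0.58
        print("maximizing:", simulate(p, maximizing))   # ~ 0.70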

  20. Partial wave analyses of antiproton-proton annihilations in flight

    Energy Technology Data Exchange (ETDEWEB)

    Pychy, Julian; Koch, Helmut; Kopf, Bertram; Wiedner, Ulrich [Institut fuer Experimentalphysik I, Ruhr-Universitaet Bochum (Germany)

    2015-07-01

    To investigate important aspects for the upcoming PANDA experiment, partial wave analyses (PWA) of anti pp-annihilation processes are carried out using data from the Crystal Barrel (LEAR) experiment. A coupled channel analysis of the three reactions resulting in the final states K{sup +}K{sup -}π{sup 0}, π{sup 0}π{sup 0}η and π{sup 0}ηη at a beam momentum of 900 MeV/c is currently in progress. Preliminary results on the determination of resonance contributions and of the spin density matrix (SDM) of different light mesons are presented. The elements of the SDM provide important information about the production process. Furthermore, results of the analyses of the channels ωπ{sup 0}, ωπ{sup 0}η and π{sup +}π{sup -}π{sup 0}π{sup 0} are discussed. These studies are focused on the determination of the contributing angular momenta of the anti pp-system as well as of the SDM of the ω meson. Significant spin-alignment effects depending on the production angle are visible here. These results are compared with those for the φ(1020) in the K{sup +}K{sup -}π{sup 0} channel. All analyses have been performed using PAWIAN, a common, object-oriented and easy-to-use PWA software that is being developed at the Ruhr-Universitaet Bochum. This presentation summarizes recent activities of the Crystal Barrel (LEAR) Collaboration.

  1. Contributions to cosmic reionization from dark matter annihilation and decay

    Science.gov (United States)

    Liu, Hongwan; Slatyer, Tracy R.; Zavala, Jesús

    2016-09-01

    Dark matter annihilation or decay could have a significant impact on the ionization and thermal history of the universe. In this paper, we study the potential contribution of dark matter annihilation (s-wave- or p-wave-dominated) or decay to cosmic reionization, via the production of electrons, positrons and photons. We map out the possible perturbations to the ionization and thermal histories of the universe due to dark matter processes, over a broad range of velocity-averaged annihilation cross sections/decay lifetimes and dark matter masses. We have employed recent numerical studies of the efficiency with which annihilation/decay products induce heating and ionization in the intergalactic medium, and in this work extended them down to a redshift of 1+z = 4 for two different reionization scenarios. We also improve on earlier studies by using the results of detailed structure formation models of dark matter haloes and subhaloes that are consistent with up-to-date N-body simulations, with estimates on the uncertainties that originate from the smallest scales. We find that for dark matter models that are consistent with experimental constraints, a contribution of more than 10% to the ionization fraction at reionization is disallowed for all annihilation scenarios. Such a contribution is possible only for decays into electron/positron pairs, for light dark matter with mass m_χ ≲ 100 MeV, and a decay lifetime τ_χ ~ 10^24-10^25 s.

  2. The Isotropic Radio Background and Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Hooper, Dan [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Belikov, Alexander V. [Institut d' Astrophysique (France); Jeltema, Tesla E. [Univ. of California, Santa Cruz, CA (United States); Linden, Tim [Univ. of California, Santa Cruz, CA (United States); Profumo, Stefano [Univ. of California, Santa Cruz, CA (United States); Slatyer, Tracy R. [Princeton Univ., Princeton, NJ (United States)

    2012-11-01

    Observations by ARCADE-2 and other telescopes sensitive to low frequency radiation have revealed the presence of an isotropic radio background with a hard spectral index. The intensity of this observed background is found to exceed the flux predicted from astrophysical sources by a factor of approximately 5-6. In this article, we consider the possibility that annihilating dark matter particles provide the primary contribution to the observed isotropic radio background through the emission of synchrotron radiation from electron and positron annihilation products. For reasonable estimates of the magnetic fields present in clusters and galaxies, we find that dark matter could potentially account for the observed radio excess, but only if it annihilates mostly to electrons and/or muons, and only if it possesses a mass in the range of approximately 5-50 GeV. For such models, the annihilation cross section required to normalize the synchrotron signal to the observed excess is sigma v ~ (0.4-30) x 10^-26 cm^3/s, similar to the value predicted for a simple thermal relic (sigma v ~ 3 x 10^-26 cm^3/s). We find that in any scenario in which dark matter annihilations are responsible for the observed excess radio emission, a significant fraction of the isotropic gamma ray background observed by Fermi must result from dark matter as well.

  3. CMB Constraints On The Thermal WIMP Annihilation Cross Section

    CERN Document Server

    Steigman, Gary

    2015-01-01

    A thermal relic, often referred to as a weakly interacting massive particle (WIMP), is a particle produced during the early evolution of the Universe whose relic abundance (e.g., at present) depends only on its mass and its thermally averaged annihilation cross section (annihilation rate factor) sigma*v_ann. Late-time WIMP annihilation has the potential to affect the cosmic microwave background (CMB) power spectrum. Current observational constraints on the absence of such effects provide bounds on the mass and the annihilation cross section of relic particles that may, but need not, be dark matter candidates. For a WIMP that is a dark matter candidate, the CMB constraint sets an upper bound on the annihilation cross section, leading to a lower bound on its mass that depends on whether or not the WIMP is its own antiparticle. For a self-conjugate WIMP, m_min = 50f GeV, where f is an electromagnetic energy efficiency factor. For a non-self-conjugate WIMP, the minimum mass is a factor of two larger. For a WIMP t...
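
    A minimal sketch of the quoted mass bound, assuming only the scaling stated above (m_min = 50f GeV for a self-conjugate WIMP, twice that otherwise); the example value of f is illustrative, not taken from the paper.

    ```python
    def wimp_min_mass_gev(f, self_conjugate=True):
        """Lower bound on the WIMP mass from the CMB constraint,
        using the scaling quoted in the abstract."""
        m_min = 50.0 * f              # self-conjugate case, in GeV
        return m_min if self_conjugate else 2.0 * m_min

    # Illustrative electromagnetic energy efficiency factor (hypothetical value).
    f = 0.2
    print(wimp_min_mass_gev(f), "GeV (self-conjugate)")
    print(wimp_min_mass_gev(f, self_conjugate=False), "GeV (non-self-conjugate)")
    ```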

  4. The Effects of Dark Matter Annihilation on Cosmic Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Kaurov, Alexander A. [Chicago U., Astron. Astrophys. Ctr.; Hooper, Dan [Chicago U., EFI; Gnedin, Nickolay Y. [Chicago U., KICP

    2015-12-01

    We revisit the possibility of constraining the properties of dark matter (DM) by studying the epoch of cosmic reionization. Previous studies have shown that DM annihilation was unlikely to have provided a large fraction of the photons that ionized the universe, but instead played a subdominant role relative to stars and quasars. The DM, however, begins to efficiently annihilate with the formation of primordial microhalos at $z\\sim100-200$, much earlier than the formation of the first stars. Therefore, if DM annihilation ionized the universe at even the percent level over the interval $z \\sim 20-100$, it can leave a significant imprint on the global optical depth, $\\tau$. Moreover, we show that cosmic microwave background (CMB) polarization data and future 21 cm measurements will enable us to more directly probe the DM contribution to the optical depth. In order to compute the annihilation rate throughout the epoch of reionization, we adopt the latest results from structure formation studies and explore the impact of various free parameters on our results. We show that future measurements could make it possible to place constraints on the dark matter's annihilation cross section that are at a level comparable to those obtained from the observations of dwarf galaxies, cosmic ray measurements, and studies of recombination.

  5. Recent Developments in Applied Probability and Statistics

    CERN Document Server

    Devroye, Luc; Kohler, Michael; Korn, Ralf

    2010-01-01

    This book presents surveys on recent developments in applied probability and statistics. The contributions include topics such as nonparametric regression and density estimation, option pricing, probabilistic methods for multivariate interpolation, robust graphical modelling and stochastic differential equations. Due to its broad coverage of different topics the book offers an excellent overview of recent developments in applied probability and statistics.

  6. Probability of Failure in Random Vibration

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    1988-01-01

    Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out...

  7. Photon from the annihilation process with CGC in the $pA$ collision

    CERN Document Server

    Benic, Sanjin

    2016-01-01

    We discuss the photon production in the $pA$ collision in a framework of the color glass condensate (CGC). We work in a regime where the color density $\\rho_A$ of the nucleus is large enough to justify the CGC treatment, while soft gluons in the proton are dominant over quarks but do not yet belong to the CGC regime. In this semi-CGC regime for the proton, we can still perform a systematic expansion in powers of the color density $\\rho_p$ of the proton. The leading-order contributions to the photon production appear from the Bremsstrahlung and the annihilation processes involving quarks from a gluon sourced by $\\rho_p$. We analytically derive an expression for the annihilation contribution to the photon production rate and numerically find that a thermal exponential form gives the best fit with an effective temperature $\\sim 0.5Q_s$ where $Q_s$ is the saturation momentum of the nucleus.

  8. Gamma-ray constraints on dark-matter annihilation to electroweak gauge and Higgs bosons

    CERN Document Server

    Fedderke, Michael A; Lin, Tongyan; Wang, Lian-Tao

    2014-01-01

    Dark-matter annihilation into electroweak gauge and Higgs bosons results in $\\gamma$-ray emission. We use observational upper limits on the fluxes of both line and continuum $\\gamma$-rays from the Milky Way Galactic Center and from Milky Way dwarf companion galaxies to set exclusion limits on allowed dark-matter masses. (Generally, Galactic Center $\\gamma$-ray line search limits from the Fermi-LAT and the H.E.S.S. experiments are most restrictive.) Our limits apply under the following assumptions: a) the dark matter species is a cold thermal relic with present mass density equal to the measured dark-matter density of the universe; b) dark-matter annihilation to standard-model particles is described in the non-relativistic limit by a single effective operator ${\\cal O} \\propto J_{DM}\\cdot J_{SM}$, where $J_{DM}$ is a standard-model singlet current consisting of dark-matter fields (Dirac fermions or complex scalars), and $J_{SM}$ is a standard-model singlet current consisting of electroweak gauge and Higgs boso...

  9. Dark matter annihilation with s-channel internal Higgsstrahlung

    Science.gov (United States)

    Kumar, Jason; Liao, Jiajun; Marfatia, Danny

    2016-08-01

    We study the scenario of fermionic dark matter that annihilates to standard model fermions through an s-channel axial vector mediator. We point out that the well-known chirality suppression of the annihilation cross section can be alleviated by s-channel internal Higgsstrahlung. The shapes of the cosmic ray spectra are identical to that of t-channel internal Higgsstrahlung in the limit of a heavy mediating particle. Unlike the general case of t-channel bremsstrahlung, s-channel Higgsstrahlung can be the dominant annihilation process even for Dirac dark matter. Since the s-channel mediator can be a standard model singlet, collider searches for the mediator are easily circumvented.

  10. Positron annihilation spectroscopy applied to silicon-based materials

    CERN Document Server

    Taylor, J W

    2000-01-01

    deposition on silicon substrates has been examined. The systematic correlations observed between the nitrogen content of the films and both the fitted Doppler parameters and the positron diffusion lengths are discussed in detail. Profiling measurements of silicon nitride films deposited on silicon substrates and subsequently implanted with silicon ions at a range of fluences were also performed. For higher implantation doses, damage was seen to extend beyond the film layers and into the silicon substrates. Subsequent annealing of two of the samples was seen to have a significant influence on the nature of the films. Positron annihilation spectroscopy, in conjunction with a variable-energy positron beam, has been employed to probe non-destructively the surface and near-surface regions of a selection of technologically important silicon-based samples. By measuring the Doppler broadening of the 511 keV annihilation lineshape, information on the positrons' microenvironment prior to annihilation may be obtained. T...

  11. [Positron annihilation lifetime spectrometry (PALS) and its pharmaceutical applications].

    Science.gov (United States)

    Sebe, István; Szabó, Barnabás; Zelkó, Romána

    2012-01-01

    PALS is one of the most widely used "nuclear probe" techniques for tracking the structural characteristics of materials. The method is based on the matter-energy equivalence principle recognized by Einstein: electrons and positrons, as particle-antiparticle pairs, disappear by mutual annihilation, emitting high-energy gamma radiation, so that a "particle-energy transition" occurs. The properties of the resulting radiation correspond exactly to the relevant properties of the electron and positron preceding the annihilation. Since electrons occur in all types of materials, positron annihilation can take place in any environment; consequently the method can be used for the analysis of every type of material (crystalline and amorphous, organic and inorganic, biotic and abiotic). The present paper provides an overview of the theoretical physical background, the practical realization and evaluation of the method, and its limitations, and summarizes the pharmaceutical applications published in recent years. PMID:22570984

  12. Solvable Aggregation-Migration-Annihilation Processes of a Multispecies System

    Institute of Scientific and Technical Information of China (English)

    KE Jian-Hong; LIN Zhen-Quan; CHEN Xiao-Shuang

    2006-01-01

    An aggregation-migration-annihilation model is proposed for a two-species-group system. In the system, aggregation reactions occur between any two aggregates of the same species, migration reactions between two different species in the same group, and joint annihilation reactions between two species from different groups. The kinetics of the system is then investigated in the framework of mean-field theory. It is found that the scaling solutions of the aggregate size distributions depend crucially on the ratios of the equivalent aggregation rates of the species groups to the annihilation rates. Each species always scales according to a conventional or modified scaling form; moreover, the governing scaling exponents are nonuniversal and depend on the reaction details in most cases.

  13. Consequences of dark matter self-annihilation for galaxy formation

    CERN Document Server

    Natarajan, Priyamvada; Bertone, Gianfranco

    2007-01-01

    Galaxy formation requires a process that continually heats gas and quenches star formation in order to reproduce the observed shape of the luminosity function of bright galaxies. To accomplish this, current models invoke heating from supernovae, and energy injection from active galactic nuclei. However, observations of radio-loud active galactic nuclei suggest that their feedback is likely to not be as efficient as required, signaling the need for additional heating processes. We propose the self-annihilation of weakly interacting massive particles that constitute dark matter as a steady source of heating. In this paper, we explore the circumstances under which this process may provide the required energy input. To do so, dark matter annihilations are incorporated into a galaxy formation model within the Millennium cosmological simulation. Energy input from self-annihilation can compensate for all the required gas cooling and reproduce the observed galaxy luminosity function only for what appear to be extreme...

  14. Generalized creation and annihilation operators via complex nonlinear Riccati equations

    Science.gov (United States)

    Schuch, Dieter; Castaños, Octavio; Rosas-Ortiz, Oscar

    2013-06-01

    Based on Gaussian wave packet solutions of the time-dependent Schrödinger equation, a generalization of the conventional creation and annihilation operators and the corresponding coherent states can be obtained. This generalization includes systems where also the width of the coherent states is time-dependent, as occurs for harmonic oscillators with time-dependent frequency or systems in contact with a dissipative environment. The key point is the replacement of the frequency ω0 that occurs in the usual definition of the creation/annihilation operator by a complex time-dependent function that fulfils a nonlinear Riccati equation. This equation and its solutions depend on the system under consideration and on the (complex) initial conditions. Formal similarities also exist with supersymmetric quantum mechanics. The generalized creation and annihilation operators also allow one to construct exact analytic solutions of the free-motion Schrödinger equation in terms of Hermite polynomials with a time-dependent variable.

  15. Molecular model for annihilation rates in positron complexes

    Energy Technology Data Exchange (ETDEWEB)

    Assafrao, Denise [Laboratorio de Atomos e Moleculas Especiais, Departamento de Fisica, ICEx, Universidade Federal de Minas Gerais, P.O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom); Walters, H.R. James [Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom); Mohallem, Jose R. [Laboratorio de Atomos e Moleculas Especiais, Departamento de Fisica, ICEx, Universidade Federal de Minas Gerais, P.O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom)], E-mail: rachid@fisica.ufmg.br

    2008-02-15

    The molecular approach for positron interaction with atoms is developed further. Potential energy curves for positron motion are obtained. Two procedures accounting for the nonadiabatic effective positron mass are introduced for calculating annihilation rate constants. The first one takes the bound-state energy eigenvalue as an input parameter. The second is a self-contained and self-consistent procedure. The methods are tested with quite different states of the small complexes HPs, e{sup +}He (electronic triplet) and e{sup +}Be (electronic singlet and triplet). For states yielding the positronium cluster, the annihilation rates are quite stable, irrespective of the accuracy in binding energies. For the e{sup +}Be states, annihilation rates are larger and more consistent with qualitative predictions than previously reported ones.

  16. A Critical Reevaluation of Radio Constraints on Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Cholis, Ilias [Fermilab; Hooper, Dan [Fermilab; Linden, Tim [Chicago U., KICP

    2015-04-03

    A number of groups have employed radio observations of the Galactic center to derive stringent constraints on the annihilation cross section of weakly interacting dark matter. In this paper, we show that electron energy losses in this region are likely to be dominated by inverse Compton scattering on the interstellar radiation field, rather than by synchrotron, considerably relaxing the constraints on the dark matter annihilation cross section compared to previous works. Strong convective winds, which are well motivated by recent observations, may also significantly weaken synchrotron constraints. After taking these factors into account, we find that radio constraints on annihilating dark matter are orders of magnitude less stringent than previously reported, and are generally weaker than those derived from current gamma-ray observations.

  17. Applications of positron annihilation to dermatology and skin cancer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Guang; Chen, Hongmin; Chakka, Lakshmi [Department of Chemistry, University of Missouri-Kansas City, Kansas City, MO 64110 (United States); Gadzia, Joseph E. [Dermatology, Department of Internal Medicine, University of Kansas Medical Center, Kansas City, KS 66103 and Kansas Medical Clinic, Topeka, KS 66614 (United States); Jean, Y.C. [Department of Chemistry, University of Missouri-Kansas City, Kansas City, MO 64110 (United States); R and D Center for Membrane Technology, Chung Yuan Christian University, Chung-Li (China)

    2007-07-01

    Positronium annihilation lifetime experiments have been performed to investigate the interaction between skin cancer and positronium in human skin samples. The positronium annihilation lifetime is found to be shorter, and its intensity lower, for samples with basal cell carcinoma and squamous cell carcinoma than for normal skin samples. These results indicate a reduction of free volume at the molecular level in skin with cancer relative to skin without cancer. Positron annihilation spectroscopy may potentially be developed as a new noninvasive and external method for dermatology clinics, early detection of cancer, and nano-PET technology in the future. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  18. Choice Probability Generating Functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice...... probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications....

  19. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2013-01-01

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice...... probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...
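
    A minimal numerical check of the CPGF property stated above, for the simplest ARUM: in the multinomial logit case the CPGF is (up to an additive constant) the log-sum-exp "logsum" of the utilities, and its gradient reproduces the logit choice probabilities. The utility values below are illustrative.

    ```python
    import numpy as np

    # Illustrative systematic utilities of three alternatives.
    u = np.array([1.0, 0.5, -0.2])

    def cpgf(u):
        # CPGF of the multinomial logit ARUM: the log-sum-exp ("logsum") function.
        return np.log(np.sum(np.exp(u)))

    # Numerical gradient of the CPGF via central differences.
    eps = 1e-6
    grad = np.array([(cpgf(u + eps * e) - cpgf(u - eps * e)) / (2 * eps)
                     for e in np.eye(len(u))])

    # Closed-form logit choice probabilities (softmax) for comparison.
    p_logit = np.exp(u) / np.sum(np.exp(u))

    print(grad)     # ≈ [0.52, 0.32, 0.16]
    print(p_logit)  # matches the gradient to numerical precision
    ```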

  20. Qubit persistence probability

    International Nuclear Information System (INIS)

    In this work, I formulate the persistence probability for a qubit device as the probability of measuring its computational degrees of freedom in the unperturbed state without the decoherence arising from environmental interactions. A decoherence time can be obtained from the persistence probability. Drawing on recent work of Garg, and also Palma, Suominen, and Ekert, I apply the persistence probability formalism to a generic single-qubit device coupled to a thermal environment, and also apply it to a trapped-ion quantum register coupled to the ion vibrational modes. (author)

  1. Handbook of probability

    CERN Document Server

    Florescu, Ionut

    2013-01-01

    THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introductio

  2. Probability of satellite collision

    Science.gov (United States)

    Mccarter, J. W.

    1972-01-01

    A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.

  3. Real analysis and probability

    CERN Document Server

    Ash, Robert B; Lukacs, E

    1972-01-01

    Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var

  4. Positron annihilation study of vacancy-type defects in Al single crystal foils with the tweed structures across the surface

    Energy Technology Data Exchange (ETDEWEB)

    Kuznetsov, Pavel, E-mail: kpv@ispms.tsc.ru [National Research Tomsk Polytechnic University, Tomsk, 634050 (Russian Federation); Institute of Strength Physics and Materials Science SB RAS, Tomsk, 634055 (Russian Federation); Cizek, Jakub, E-mail: jcizek@mbox.troja.mff.cuni.cz; Hruska, Petr [Charles University in Prague, Praha, CZ-18000 Czech Republic (Czech Republic); Anwand, Wolfgang [Institut für Strahlenphysik, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, D-01314 Germany (Germany); Bordulev, Yuri; Lider, Andrei; Laptev, Roman [National Research Tomsk Polytechnic University, Tomsk, 634050 (Russian Federation); Mironov, Yuri [Institute of Strength Physics and Materials Science SB RAS, Tomsk, 634055 (Russian Federation)

    2015-10-27

    The vacancy-type defects in aluminum single crystal foils after a series of cyclic tensions were studied using positron annihilation. Two components were identified in the positron lifetime spectra, associated with the annihilation of free positrons and of positrons trapped by dislocations. With increasing number of cycles, the dislocation density first increases, reaching a maximum at N = 10 000 cycles, and then gradually decreases, falling at N = 70 000 cycles to the level typical of the virgin samples. Direct evidence for the formation of a two-phase system “defective near-surface layer/base Al crystal” in aluminum foils under cyclic tension was obtained using a variable-energy positron beam.

  5. Bremsstrahlung signatures of dark matter annihilation in the Sun

    CERN Document Server

    Fukushima, Keita; Kumar, Jason; Marfatia, Danny

    2012-01-01

    The nonrelativistic annihilation of Majorana dark matter in the Sun to a pair of light fermions is chirality-suppressed. Annihilation to 3-body final states $\\ell^+f^-V$, where $V=W,Z,\\gamma$, and $\\ell$ and $f$ are light fermions (that may be the same), becomes dominant since bremsstrahlung relaxes the chirality suppression. We evaluate the neutrino spectra at the source, including spin and helicity dependent effects, and assess the detectability of each significant bremsstrahlung channel at IceCube/DeepCore. We also show how to combine the sensitivities to the dark matter-nucleon scattering cross section in individual channels, since typically several channels contribute in models.

  6. On the Direct Detection of Dark Matter Annihilation

    DEFF Research Database (Denmark)

    Cherry, John F.; Frandsen, Mads T.; Shoemaker, Ian M.

    2015-01-01

    We investigate the direct detection phenomenology of a class of dark matter (DM) models in which DM does not directly interact with nuclei, {but rather} the products of its annihilation do. When these annihilation products are very light compared to the DM mass, the scattering in direct detection...... cross sections has already been reached in a class of models. Moreover, the compatibility of dark matter direct detection experiments can be compared directly in $E_{{\\rm min}}$ space without making assumptions about DM astrophysics, mass, or scattering form factors. Lastly, when DM has direct couplings...

  7. Significant enhancement of neutralino dark matter annihilation from electroweak bremsstrahlung.

    Science.gov (United States)

    Bringmann, Torsten; Calore, Francesca

    2014-02-21

    Indirect searches for the cosmological dark matter have become ever more competitive during the past years. Here, we report the first full calculation of leading electroweak corrections to the annihilation rate of supersymmetric neutralino dark matter. We find that these corrections can be huge, partially due to contributions that have been overlooked so far. Our results imply a significantly enhanced discovery potential of this well motivated dark matter candidate with current and upcoming cosmic ray experiments, in particular for gamma rays and models with somewhat small annihilation rates at the tree level.

  8. Heavy dark matter annihilation from effective field theory.

    Science.gov (United States)

    Ovanesyan, Grigory; Slatyer, Tracy R; Stewart, Iain W

    2015-05-29

    We formulate an effective field theory description for SU(2)_{L} triplet fermionic dark matter by combining nonrelativistic dark matter with gauge bosons in the soft-collinear effective theory. For a given dark matter mass, the annihilation cross section to line photons is obtained with 5% precision by simultaneously including Sommerfeld enhancement and the resummation of electroweak Sudakov logarithms at next-to-leading logarithmic order. Using these results, we present more accurate and precise predictions for the gamma-ray line signal from annihilation, updating both existing constraints and the reach of future experiments.

  9. Annihilation physics of exotic galactic dark matter particles

    Science.gov (United States)

    Stecker, F. W.

    1990-01-01

    Various theoretical arguments make exotic heavy neutral weakly interacting fermions, particularly those predicted by supersymmetry theory, attractive candidates for making up the large amount of unseen gravitating mass in galactic halos. Such particles can annihilate with each other, producing secondary particles of cosmic-ray energies, among which are antiprotons, positrons, neutrinos, and gamma-rays. Spectra and fluxes of these annihilation products can be calculated, partly by making use of positron electron collider data and quantum chromodynamic models of particle production derived therefrom. These spectra may provide detectable signatures of exotic particle remnants of the big bang.

  10. Remote forcing annihilates barrier layer in southeastern Arabian Sea

    Digital Repository Service at National Institute of Oceanography (India)

    Shenoi, S.S.C.; Shankar, D.; Shetye, S.R.

    Time-series measurements... thick barrier layer (BL) exists during March-April owing to a surface layer of low-salinity waters advected earlier during December-January from the Bay of Bengal. The BL is almost annihilated by 7 April owing to upwelling. The relic BL that survives...

  11. Polarization of photons in matter–antimatter annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Moskaliuk, S.S. [Bogolyubov Institute for Theoretical Physics, Metrolohichna Street, 14-b, Kyiv-143, Ukraine, UA-03143 e-mail: mss@bitp.kiev.ua (Ukraine)

    2015-03-10

    In this work we demonstrate the possibility of generating linear polarization of the electromagnetic field (EMF) through quantum effects in the matter-antimatter annihilation process in an anisotropic space of Bianchi type I. We study the dynamics of this process to estimate the degree of polarization of the EMF in the external gravitational field of the anisotropic Bianchi I model. It is established that quantum effects in the matter-antimatter annihilation process in this external gravitational field contribute to the degree of polarization of the EMF in quadrupole harmonics.

  12. Dark matter annihilation with s-channel internal Higgsstrahlung

    OpenAIRE

    Jason Kumar (Univ. of Hawaii); Jiajun Liao; Danny Marfatia

    2016-01-01

    We study the scenario of fermionic dark matter that annihilates to standard model fermions through an s-channel axial vector mediator. We point out that the well-known chirality suppression of the annihilation cross section can be alleviated by s-channel internal Higgsstrahlung. The shapes of the cosmic ray spectra are identical to that of t-channel internal Higgsstrahlung in the limit of a heavy mediating particle. Unlike the general case of t-channel bremsstrahlung, s-channel Higgsstrahlung...

  13. Revisiting Bremsstrahlung emission associated with Light Dark Matter annihilations

    OpenAIRE

    Boehm, C; Uwer, P.

    2006-01-01

    We compute the single bremsstrahlung emission associated with the pair annihilation of spin-0 particles into electrons and positrons, via the t-channel exchange of a heavy fermion. We compare our result with the work of Beacom et al. Unlike what is stated in the literature, we show that the bremsstrahlung cross section is not necessarily given by the tree-level annihilation cross section (for a generalized kinematics) times a factor related to the emission of a soft photon. Such a factoriza...

  14. Positron annihilation studies on SmS and Sm{sub 0.85}Y{sub 0.15}S

    International Nuclear Information System (INIS)

    Angular distribution of annihilation photons has been measured in SmS and Sm{sub 0.85}Y{sub 0.15}S. The distribution curves show that SmS is ionic at NTP and Sm{sub 0.85}Y{sub 0.15}S is metallic with an intermediate valence of 2.73 for the samarium ion. The large density of states due to the f-level appears around 3 mrad in the momentum density distribution. The results are in agreement with the available Mossbauer and lattice constant data. (author)

  15. X-ray photoelectron spectroscopy and positron annihilation spectroscopy analysis of surfactant affected FePt spintronic films

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Chun, E-mail: fengchun@ustb.edu.cn [Department of Materials Physics and Chemistry, University of Science and Technology Beijing, Beijing 100083 (China); Li, Xujing; Liu, Fen; Wang, Qiang [Department of Materials Physics and Chemistry, University of Science and Technology Beijing, Beijing 100083 (China); Yang, Meiyin [Department of Materials Physics and Chemistry, University of Science and Technology Beijing, Beijing 100083 (China); The Center for Micromagnetics and Information Technologies (MINT) and Department of Electrical and Computer Engineering, University of Minnesota, 200 Union St SE, Minneapolis, MN 55455 (United States); Zhao, Chongjun [Department of Materials Physics and Chemistry, University of Science and Technology Beijing, Beijing 100083 (China); Gong, Kui [Centre for the Physics of Materials and Department of Physics, McGill University, Montreal, Quebec, H3A2T8 Canada (Canada); Zhang, Peng; Wang, Bao-Yi; Cao, Xing-Zhong [Key Laboratory of Nuclear Analysis Techniques, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Yu, Guanghua [Department of Materials Physics and Chemistry, University of Science and Technology Beijing, Beijing 100083 (China)

    2014-07-01

    This paper reports the effects of surfactant Bi atomic diffusion on the microstructure evolution, and the resulting manipulation of properties, in FePt spintronic films, based on quantitative X-ray photoelectron spectroscopy and positron annihilation spectroscopy studies. The defect density in the FePt layer, which was tunable by varying the thermal treatment temperature, was found to be remarkably enhanced in correlation with the Bi atomic diffusion behavior. The observed defect density evolution substantially favors Fe(Pt) atomic migration and lowers the energy barrier for the atomic ordering transition, resulting in a great improvement of the hard magnetic properties of the films.

  16. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  17. Probability, Nondeterminism and Concurrency

    DEFF Research Database (Denmark)

    Varacca, Daniele

    Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particula...

  18. Dynamic update with probabilities

    NARCIS (Netherlands)

    J. van Benthem; J. Gerbrandy; B. Kooi

    2009-01-01

    Current dynamic-epistemic logics model different types of information change in multi-agent scenarios. We generalize these logics to a probabilistic setting, obtaining a calculus for multi-agent update with three natural slots: prior probability on states, occurrence probabilities in the relevant pr

  19. Origin and annihilation physics of positrons in the Galaxy

    International Nuclear Information System (INIS)

    A gamma radiation at 511 keV is observed since the early 1970's toward the Galactic bulge region. This emission is the signature of a large number of electron-positron annihilations, the positron being the electron's antiparticle. Unfortunately, the origin of the positrons responsible for this emission is still a mystery. Many positron-source candidates have been suggested but none of them can account for the galactic annihilation emission. The spatial distribution of this emission is indeed very atypical. Since 2002, the SPI spectrometer onboard the INTEGRAL space laboratory revealed an emission strongly concentrated toward the galactic bulge and a weaker emission from the galactic disk. This morphology is unusual because it does not correspond to any of the known galactic astrophysical-object or interstellar-matter distributions. The assumption that positrons annihilate close to their sources (i.e. the spatial distribution of the annihilation emission reflects the spatial distribution of the sources) has consequently been called into question. Recent studies suggest that positrons could propagate far away from their sources before annihilating. This physical aspect could be the key point to solve the riddle of the galactic positron origin. This thesis is devoted to the modelling of the propagation and annihilation of positrons in the Galaxy, in order to compare simulated spatial models of the annihilation emission with recent measurements provided by SPI/INTEGRAL. This method allows to put constraints on the origin of galactic positrons. We therefore developed a propagation Monte-Carlo code of positrons within the Galaxy in which we implemented all the theoretical and observational knowledge about positron physics (sources, transport modes, energy losses, annihilation modes) and the interstellar medium of our Galaxy (interstellar gas distributions, galactic magnetic fields, structures of the gaseous phases). Due to uncertainties in several physical parameters

  20. Time and probability in quantum cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Greensite, J. (San Francisco State Univ., CA (USA). Dept. of Physics and Astronomy)

    1990-10-01

    A time function, an exactly conserved probability measure, and a time-evolution equation (related to the Wheeler-DeWitt equation) are proposed for quantum cosmology. The time-integral of the probability measure is the measure proposed by Hawking and Page. The evolution equation reduces to the Schroedinger equation, and the probability measure to the Born measure, in the WKB approximation. The existence of this 'Schroedinger limit', which involves a cancellation of time-dependencies in the probability density between the WKB prefactor and the integration measure, is a consequence of laplacian factor ordering in the Wheeler-DeWitt equation. (orig.).

  1. Economy, probability and risk

    Directory of Open Access Journals (Sweden)

    Elena Druica

    2007-05-01

    Full Text Available The science of probability has earned a special place because it has tried, through its concepts, to build a bridge between theory and experiment. As a formal notion which by definition does not lead to polemic, probability nevertheless meets a series of difficulties of interpretation whenever it must be applied to particular situations. Usually, the economic literature brings into discussion two interpretations of the concept of probability: the objective interpretation, often found under the name of the frequency or statistical interpretation, and the subjective or personal interpretation. Surprisingly, the third approach is excluded: the logical interpretation. The purpose of the present paper is to study some aspects of the subjective and logical interpretations of probability, as well as their implications in economics.

  2. Exact Probability Distribution versus Entropy

    Directory of Open Access Journals (Sweden)

    Kerstin Andersson

    2014-10-01

    Full Text Available The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
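
    A minimal illustration of the guessing strategy described above (not the paper's approximation machinery): under the optimal strategy the expected number of guesses is obtained by sorting the word probabilities in decreasing order and averaging the guess ranks. The tiny vocabulary below is hypothetical.

    ```python
    import numpy as np

    # Hypothetical word probabilities (must sum to 1); not taken from the paper.
    p = np.array([0.4, 0.25, 0.15, 0.1, 0.06, 0.04])

    # Optimal strategy: guess words in decreasing order of probability.
    order = np.argsort(p)[::-1]
    ranks = np.arange(1, len(p) + 1)             # guess number for each sorted word
    expected_guesses = np.sum(ranks * p[order])  # average number of attempts

    print(expected_guesses)  # ≈ 2.29 for the probabilities above
    ```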

  3. Probability distributions for multimeric systems.

    Science.gov (United States)

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method requires only two assumptions: the copy number of all molecular species may be treated as continuous, and the probability density functions (pdf) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising the sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
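
    A univariate sketch of the moment-matching step described above, under stated assumptions: SciPy's skew-normal and optimizer stand in for the multivariate MSND and the Mathematica optimization package, and the target moments are hypothetical placeholders for the values the master-equation moment equations would supply.

    ```python
    import numpy as np
    from scipy.stats import skewnorm
    from scipy.optimize import minimize

    # Hypothetical target moments (mean, variance, skewness) standing in for the
    # values obtained from the master-equation moment equations.
    target = np.array([120.0, 400.0, 0.6])

    def distance(params):
        a, loc, scale = params
        if scale <= 0:
            return 1e12  # penalize invalid scale parameters
        m, v, s = skewnorm.stats(a, loc=loc, scale=scale, moments="mvs")
        # Sum of squared differences between model and target moments.
        return np.sum((np.array([m, v, s]) - target) ** 2)

    fit = minimize(distance, x0=[1.0, 100.0, 20.0], method="Nelder-Mead")
    print(fit.x)  # fitted skew-normal parameters (shape, location, scale)
    ```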

  4. Spatially resolved positron annihilation spectroscopy on friction stir weld induced defects

    Directory of Open Access Journals (Sweden)

    Karin Hain, Christoph Hugenschmidt, Philip Pikart and Peter Böni

    2010-01-01

    Full Text Available A friction stir welded (FSW) Al alloy sample was investigated by Doppler broadening spectroscopy (DBS) of the positron annihilation line. The spatially resolved defect distribution showed that the material in the joint zone becomes completely annealed during the welding process at the shoulder of the FSW tool, whereas at the tip, annealing is outweighed by the deterioration of the material due to the tool movement. This might be responsible for the increased probability of cracking in the heat-affected zone of friction stir welds. Examination of a material pairing of steel S235 and the Al alloy Silafont36 by coincident Doppler broadening spectroscopy (CDBS) indicates the formation of annealed steel clusters in the Al alloy component of the sample. The clear visibility of Fe in the CDB spectra is explained by very efficient trapping at the interface between the steel clusters and the bulk.

  5. Abstract Models of Probability

    Science.gov (United States)

    Maximov, V. M.

    2001-12-01

    Probability theory presents a mathematical formalization of intuitive ideas of independent events and of probability as a measure of randomness. It is based on axioms 1-5 of A.N. Kolmogorov [1] and their generalizations [2]. Different formalized refinements were proposed for such notions as events, independence, random value, etc. [2,3], whereas the measure of randomness, i.e. numbers from [0,1], remained unchanged. To be precise, we mention some attempts at generalizing probability theory with negative probabilities [4]. From another side, physicists tried to use negative and even complex values of probability to explain some paradoxes in quantum mechanics [5,6,7]. Only recently, the necessity of formalizing quantum mechanics and its foundations [8] led to the construction of p-adic probabilities [9,10,11], which essentially extended our concept of probability and randomness. Therefore, a natural question arises of how to describe algebraic structures whose elements can be used as a measure of randomness. As a consequence, a necessity arises to define the types of randomness corresponding to every such algebraic structure. Possibly, this leads to another concept of randomness with a nature different from the combinatorial-metric conception of Kolmogorov. Apparently, a discrepancy between the real type of randomness underlying some experimental data and the model of randomness used for data processing leads to paradoxes [12]. An algebraic structure whose elements can be used to estimate some randomness will be called a probability set Φ. Naturally, the elements of Φ are the probabilities.

  6. Pseudoscalar boson production in e+e- annihilation

    International Nuclear Information System (INIS)

    A mechanism of pseudoscalar boson production by a heavy quark-antiquark pair via one-photon exchange in e+e- annihilation is studied. The total cross sections of this reaction and the energy distributions of the produced p{sup 0} boson are obtained.

  7. The many faces of brane-flux annihilation

    CERN Document Server

    Gautason, Fridrik Freyr; Van Riet, Thomas

    2015-01-01

    Fluxes can decay via the nucleation of Brown-Teitelboim bubbles, but when the decaying fluxes induce D-brane charges this process must be accompanied by an annihilation of D-branes. This occurs via dynamics inside the bubble wall, as was well described for (anti-)D3 branes annihilating against 3-form fluxes. In this paper we extend this to the other Dp branes with p smaller than seven. Generically there are two decay channels: one for the RR flux and one for the NSNS flux. The RR channel is accompanied by brane annihilation that can be understood from the Dp branes polarising into D(p+2) branes, whereas the NSNS channel corresponds to Dp branes polarising into NS5 branes or KK5 branes. We illustrate this with the decay of antibranes probing local toroidal throat geometries obtained from T-duality of the D6 solution in massive type IIA. We show that anti-Dp branes are metastable against annihilation in these backgrounds, at least at the probe level.

  8. The Isotropic Radio Background and Annihilating Dark Matter

    CERN Document Server

    Hooper, Dan; Jeltema, Tesla E; Linden, Tim; Profumo, Stefano; Slatyer, Tracy R

    2012-01-01

    Observations by ARCADE-2 and other telescopes sensitive to low frequency radiation have revealed the presence of an isotropic radio background with a hard spectral index. The intensity of this observed background is found to exceed the flux predicted from astrophysical sources by a factor of approximately 5-6. In this article, we consider the possibility that annihilating dark matter particles provide the primary contribution to the observed isotropic radio background through the emission of synchrotron radiation from electron and positron annihilation products. For reasonable estimates of the magnetic fields present in clusters and galaxies, we find that dark matter could potentially account for the observed radio excess, but only if it annihilates mostly to electrons and/or muons, and only if it possesses a mass in the range of approximately 5-50 GeV. For such models, the annihilation cross section required to normalize the synchrotron signal to the observed excess is sigma v ~ (0.4-30) x 10^-26 cm^3/s, sim...

  9. On the Direct Detection of Dark Matter Annihilation

    CERN Document Server

    Cherry, John F; Shoemaker, Ian M

    2015-01-01

    We investigate the direct detection phenomenology of a class of dark matter (DM) models in which DM does not directly interact with nuclei, {but rather} the products of its annihilation do. When these annihilation products are very light compared to the DM mass, the scattering in direct detection experiments is controlled by relativistic kinematics. This results in a distinctive recoil spectrum, a non-standard and/or even {\it absent} annual modulation, and the ability to probe DM masses as low as $\sim$10 MeV. We use current LUX data to show that experimental sensitivity to thermal relic annihilation cross sections has already been reached in a class of models. Moreover, the compatibility of dark matter direct detection experiments can be compared directly in $E_{min}$ space without making assumptions about DM astrophysics. Lastly, when DM has direct couplings to nuclei, the limit from annihilation to relativistic particles in the Sun can be stronger than that of conventional non-relativistic direct detect...

  10. Characteristics of the positron annihilation process in the matter

    International Nuclear Information System (INIS)

    In this report, positron annihilation spectroscopy is described as a method for the study of matter. The interactions of positrons at high as well as thermal energies are discussed, and different models of these interactions are presented. Special attention is paid to the interaction of positrons with the crystal lattice and its defects. The influence of positron beam characteristics on the measured values is also discussed.

  11. The SIRI stochastic model with creation and annihilation operators

    OpenAIRE

    Stollenwerk, Nico; Aguiar, Maira

    2008-01-01

    We generalize the well-known formulation of the susceptible-infected-susceptible (SIS) spatial epidemic in terms of creation and annihilation operators to the reinfection model that includes recovered individuals who can be reinfected, the SIRI model, using ladder operators constructed from the Gell-Mann matrices known from quantum chromodynamics.

  12. A simple formula for the thermal pair annihilation line emissivity

    OpenAIRE

    Svensson, Roland; Larsson, Stefan; Poutanen, Juri

    1996-01-01

    We introduce a simple and convenient fitting formula for the thermal annihilation line from pair plasmas in cosmic sources. The fitting formula is accurate to 0.04\\% and is valid at all photon energies and temperatures of interest. The commonly used Gaussian line profile is not a good approximation for broader lines.

  13. Annihilation amplitudes and factorization in B to phi Kstar

    CERN Document Server

    Epele, L N; Szynkman, A

    2003-01-01

    We study the decay $B^\\pm\\to \\phi K^{\\ast\\pm}$, followed by the decay of the outgoing vector mesons into two pseudoscalars. The analysis of angular distributions of the decay products is shown to provide useful information about the annihilation contributions and possible tests of factorization.

  14. Positron Annihilation in a Rubber Modified Epoxy Resin

    DEFF Research Database (Denmark)

    Mogensen, O. E.; Jacobsen, F. M.; Pethrick, R. A.

    1979-01-01

    Positron annihilation data is reported on a rubber-modified epoxy resin. Studies of the temperature dependence of the o-positronium lifetime indicated the existence of three distinct regions; the associated transition temperatures by comparison with dilatometric data can be ascribed respectively...

  15. Nuclear excitation in positron-K-electron annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Kaliman, Z.; Pisk, K.; Logan, B.A.

    1987-05-01

    We have calculated the cross section for nuclear excitation during positron-K-electron annihilation. The calculations allow for the effect of the nuclear Coulomb field and for relativistic effects. The results are compared to earlier predictions which were derived using the Born approximation, and to renormalized Born approximation predictions. Our calculated cross sections are well below the available experimental values.

  16. Detection of positron-atom bound states through resonant annihilation

    CERN Document Server

    Dzuba, V A; Gribakin, G F

    2010-01-01

    A method is proposed for detecting positron-atom bound states by observing Feshbach resonances in positron annihilation at electron volt energies. The method is applicable to a range of open-shell transition metal atoms which are likely to bind the positron: Si, Fe, Co, Ni, Ge, Tc, Ru, Rh, Sn, Sb, Ta, W, Os, Ir, and Pt.

  17. The Effects of Dark Matter Annihilation on Cosmic Reionization

    CERN Document Server

    Kaurov, Alexander A; Gnedin, Nickolay Y

    2015-01-01

    We revisit the possibility of constraining the properties of dark matter (DM) by studying the epoch of cosmic reionization. Previous studies have shown that DM annihilation was unlikely to have provided a large fraction of the photons that ionized the universe, but instead played a subdominant role relative to stars and quasars. The DM, however, begins to efficiently annihilate with the formation of primordial microhalos at $z\\sim100-200$, much earlier than the formation of the first stars. Therefore, if DM annihilation ionized the universe at even the percent level over the interval $z \\sim 20-100$, it can leave a significant imprint on the global optical depth, $\\tau$. Moreover, we show that cosmic microwave background (CMB) polarization data and future 21 cm measurements will enable us to more directly probe the DM contribution to the optical depth. In order to compute the annihilation rate throughout the epoch of reionization, we adopt the latest results from structure formation studies and explore the im...

  18. Constraints on Dark Matter annihilation from M87

    International Nuclear Information System (INIS)

    Clusters of galaxies and their central cD galaxies are prime targets for observing indirect signatures of dark matter annihilation owing to their huge mass concentration. The main challenge is to discriminate between high-energy emission of different origins, for example the emission from active galactic nuclei as a result of accretion of mass by the supermassive black hole at the centre of the host galaxy and the emission due to dark matter annihilation. In addition to prompt gamma rays, dark matter annihilation products can include energetic electrons and positrons which inverse Compton scatter with the cosmic microwave background or with starlight photon fields to produce potentially detectable signals going from the soft to the hard X-ray energy band. In order to constrain the dark matter annihilation emission component, a state-of-the-art radiation code for the M87 jet emission and a generic description of the prompt and secondary inverse-Compton gamma rays due to generic weakly interacting dark matter particles are employed and possibilities for identifying the signatures of dark matter in the multi-wavelength spectrum of M87 are investigated.

  19. Positron annihilation spectra and core-electron enhancement factors

    CERN Document Server

    Green, D G

    2014-01-01

    $\\gamma$-spectra for positron annihilation with core and valence electrons in the noble gases are calculated using many-body theory (MBT). We show that proper inclusion of core annihilation is crucial to accurately describe the measured spectra [Phys. Rev. Lett. 79, 39 (1997)]. We use the MBT to calculate `exact' enhancement factors $\\bar{\\gamma}_{n\\ell}$ for annihilation on individual $n\\ell$ subshells. They parameterize the important effects of (non-local) short-range electron-positron correlations, including the non-perturbative process of virtual positronium formation. We show that they follow a simple and physically motivated scaling with the subshell ionization energy $I_{n\\ell}$: $\\bar{\\gamma}_{n\\ell}=1+\\sqrt{A/I_{n\\ell}}+(B/I_{n\\ell})^{\\beta}$, where $A$, $B$ and $\\beta$ are positive constants. We suggest that this formula can be used with relatively simple independent-particle-approximation calculations to determine accurate core-annihilation spectra for atoms across the periodic table and in condens...
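
    A small numerical sketch of the scaling formula quoted above for the subshell enhancement factors. The constants A, B and β are stated only to be positive, so the values below are hypothetical, and the subshell ionization energies are approximate literature values used purely for illustration.

    ```python
    import math

    # Hypothetical positive constants of the scaling formula (in eV).
    A, B, beta = 10.0, 40.0, 2.0

    # Approximate ionization energies I_nl (eV) of a few argon subshells.
    I_nl = {"Ar 3p": 15.8, "Ar 3s": 29.2, "Ar 2p": 249.0}

    for shell, I in I_nl.items():
        # gamma_bar = 1 + sqrt(A/I) + (B/I)**beta, as quoted in the abstract.
        gamma_bar = 1.0 + math.sqrt(A / I) + (B / I) ** beta
        print(f"{shell}: enhancement factor ≈ {gamma_bar:.2f}")
    ```

    With these illustrative inputs the enhancement decreases from the valence 3p subshell toward the more tightly bound 2p core subshell, consistent with the inverse dependence on the ionization energy.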

  20. QED at fifth order in e+e- annihilation

    International Nuclear Information System (INIS)

    We calculate the cross-section for the production of e+e-γγγ in e+e- annihilation. The results are in agreement with the first observation of such a process by the ASP experiment at SLAC. The calculation also provides another example of the power of 'spinor techniques' in calculating Feynman amplitudes. (author)

  1. Probability and Measure

    CERN Document Server

    Billingsley, Patrick

    2012-01-01

    Praise for the Third Edition "It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this

  2. The concept of probability

    International Nuclear Information System (INIS)

    The concept of probability is now, and always has been, central to the debate on the interpretation of quantum mechanics. Furthermore, probability permeates all of science, as well as our every day life. The papers included in this volume, written by leading proponents of the ideas expressed, embrace a broad spectrum of thought and results: mathematical, physical epistemological, and experimental, both specific and general. The contributions are arranged in parts under the following headings: Following Schroedinger's thoughts; Probability and quantum mechanics; Aspects of the arguments on nonlocality; Bell's theorem and EPR correlations; Real or Gedanken experiments and their interpretation; Questions about irreversibility and stochasticity; and Epistemology, interpretation and culture. (author). refs.; figs.; tabs

  3. $\\overline{p}p$ annihilation into four charged pions at rest and in flight

    CERN Document Server

    Salvini, P; Filippini, V; Fontana, A; Montagna, P; Panzarasa, A; Rotondi, A; Bargiotti, M; Bertin, A; Bruschi, M; Capponi, M; Carbone, A; De Castro, S; Fabbri, Franco Luigi; Faccioli, P; Galli, D; Giacobbe, B; Grimaldi, F; Marconi, U; Massa, I; Piccinini, M; Cesari, N S; Spighi, R; Vecchi, S; Villa, M; Vitale, A; Zoccoli, A; Bianconi, A; Corradini, M; Donzella, A; Lodi-Rizzini, E; Venturelli, L; Zenoni, A; Cicalò, C; De Falco, A; Masoni, A; Puddu, G; Serci, S; Usai, G L; Gorchakov, O E; Prakhov, S N; Rozhdestvensky, A M; Sapozhnikov, M G; Poli, M; Gianotti, P; Guaraldo, C; Lanaro, A; Lucherini, V; Petrascu, C; Panzieri, D; Balestra, F; Bussa, M P; Busso, L; Cerello, P; Denisov, O; Ferrero, L; Grasso, A; Maggiora, A; Tosello, F; Botta, E; Bressani, Tullio; Calvo, D; De Mori, F; Feliciello, A; Filippi, A; Marcello, S; Agnello, M; Iazzi, F

    2004-01-01

The spin-parity analysis of the data on the p̄p → 2π⁺2π⁻ annihilation reaction at rest, in liquid and in gaseous hydrogen at 3 bar pressure, and in flight at a p̄ momentum of ≈ 50 MeV/c, collected by the Obelix spectrometer at the LEAR complex of CERN, is presented. The relative branching ratios (a₁(1260) → σπ)/(a₁(1260) → ρπ) = 0.06 ± 0.05 and (π(1300) → σπ)/(π(1300) → ρπ) = 2.2 ± 0.4 are obtained. It is also shown that the inclusion of the exotic meson π₁(1400), J^PC = 1⁻⁺, with mass M = 1.384 ± 0.028 GeV/c² and width Γ = 0.378 ± 0.058 GeV/c², in its decay to ρπ, improves the fit; some implications of these results are briefly discussed. The relative S- and P-wave annihilation percentages into four charged pions at the two target densities are obtained. (53 refs).

  4. Constraints on dark matter annihilations from diffuse gamma-ray emission in the Galaxy

    CERN Document Server

    Tavakoli, Maryam; Evoli, Carmelo; Ullio, Piero

    2014-01-01

Recent advances in gamma-ray, cosmic-ray, infrared and radio astronomy have allowed us to develop a significantly better understanding of the properties of the galactic medium in the last few years. In this work, using the DRAGON code, which numerically solves the CR propagation equation and calculates gamma-ray emissivities on a 2-dimensional grid enclosing the Galaxy, we study models for the galactic diffuse gamma-ray emission in a self-consistent manner. Our models are cross-checked against both the available CR and gamma-ray data. We address the extent to which dark matter annihilations in the Galaxy can contribute to the diffuse gamma-ray flux towards different directions on the sky. Moreover, we discuss the impact that astrophysical uncertainties of non-DM nature have on the derived gamma-ray limits. Such uncertainties are related to the diffusion properties of the Galaxy, the interstellar gas and the interstellar radiation field energy densities. Light ~10 GeV dark matter annihilating dominantly to hadrons is more s...

  5. Detection of SUSY Signals in Stau Neutralino Co-annihilation Region at the LHC

    CERN Document Server

    Arnowitt, R; Dutta, B; Kamon, T; Korev, N; Simeon, P; Toback, D; Wagner, P

    2007-01-01

We study the prospects of detecting the signal in the stau neutralino co-annihilation region at the LHC using tau leptons. The co-annihilation signal is characterized by a stau-neutralino mass difference (dM) of 5-15 GeV, required for consistency with the WMAP measurement of the cold dark matter relic density as well as with all other experimental bounds within the minimal supergravity model. Focusing on tau's from neutralino_2 --> tau stau --> tau tau neutralino_1 decays in gluino and squark production, we consider inclusive MET+jet+3tau production, with two tau's above a high E_T threshold and a third tau above a lower threshold. Two observables, the number of opposite-signed tau pairs minus the number of like-signed tau pairs and the peak position of the di-tau invariant mass distribution, allow for the simultaneous determination of dM and M_gluino. For dM = 9 GeV and M_gluino = 850 GeV with 30 fb^-1 of data, we can measure dM to 15% and M_gluino to 6%.

  6. Detection of SUSY Signals in Stau Neutralino Co-Annihilation Region at the LHC

    Science.gov (United States)

    Arnowitt, R.; Aurisano, A.; Dutta, B.; Kamon, T.; Kolev, N.; Simeon, P.; Toback, D.; Wagner, P.

    2007-04-01

We study the prospects of detecting the signal in the stau neutralino co-annihilation region at the LHC using tau leptons. The co-annihilation signal is characterized by a stau-neutralino mass difference (dM) of 5-15 GeV, required for consistency with the WMAP measurement of the cold dark matter relic density as well as with all other experimental bounds within the minimal supergravity model. Focusing on tau's from neutralino_2 --> tau stau --> tau tau neutralino_1 decays in gluino and squark production, we consider inclusive MET+jet+3tau production, with two tau's above a high E_T threshold and a third tau above a lower threshold. Two observables, the number of opposite-signed tau pairs minus the number of like-signed tau pairs and the peak position of the di-tau invariant mass distribution, allow for the simultaneous determination of dM and M_gluino. For dM = 9 GeV and M_gluino = 850 GeV with 30 fb^-1 of data, we can measure dM to 15% and M_gluino to 6%.
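
    The first of the two observables above, the opposite-sign minus like-sign (OS-LS) tau-pair count, reduces to simple combinatorics per event. The sketch below illustrates it on invented lists of reconstructed tau charges; it is a toy, not a simulation of the signal.

```python
# Toy illustration of the first observable above: the number of
# opposite-sign (OS) tau pairs minus the number of like-sign (LS) tau pairs.
# The 'events' below are invented placeholders (lists of reconstructed tau
# charges), not simulated SUSY events.
from itertools import combinations

def os_minus_ls(tau_charges):
    """Count OS pairs minus LS pairs among the taus of one event."""
    os_pairs = ls_pairs = 0
    for q1, q2 in combinations(tau_charges, 2):
        if q1 * q2 < 0:
            os_pairs += 1
        else:
            ls_pairs += 1
    return os_pairs - ls_pairs

events = [[+1, -1, -1], [+1, +1, -1], [+1, -1, +1, -1]]   # hypothetical tau charges per event
total = sum(os_minus_ls(ev) for ev in events)
print("OS - LS summed over events:", total)
```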

  7. Positron annihilation lifetime in Fe-Rh alloys deformed by high-speed compression

    Energy Technology Data Exchange (ETDEWEB)

    Fukuzumi, M. [Graduate School of Engineering, Osaka Prefecture Univ., Sakai (Japan); Hori, F.; Oshima, R. [Research Inst. for Advanced Science and Technology, Osaka Prefecture Univ., Sakai (Japan); Komatsu, M.; Kiritani, M. [Research Center for Ultra-High-Speed Plastic Deformation, Hiroshima Inst. of Tech., Saekiku (Japan)

    2001-07-01

In order to examine the role of structural vacancies in the stress-induced phase transitions of B2-type FeRh alloys, Fe-40, 45 and 50 at%Rh specimens were deformed at room temperature with a high-speed compression machine and were studied by X-ray diffractometry (XRD) and positron annihilation measurements. It was found from the positron lifetime results that vacancies or vacancy clusters were introduced into the alloy by deformation. The longer lifetime (τ2) components were changed with the deformation momenta and Rh concentrations. In the case of Fe-50 at%Rh, they were 188 ps and 254 ps after deformation with small or large momenta, respectively. Taking the X-ray results into consideration, it is concluded that an atom movement mechanism forming large vacancy clusters is associated with the B2-A1 transition. The short lifetime (τ1) of the alloy is accounted for by bulk annihilation in the transformed phases and a high density of dislocations. (orig.)

  8. Dark matter annihilation radiation in hydrodynamic simulations of Milky Way haloes

    CERN Document Server

    Schaller, Matthieu; Theuns, Tom; Calore, Francesca; Bertone, Gianfranco; Bozorgnia, Nassim; Crain, Robert A; Fattahi, Azadeh; Navarro, Julio F; Sawala, Till; Schaye, Joop

    2015-01-01

    We obtain predictions for the properties of cold dark matter annihilation radiation using high resolution hydrodynamic zoom-in cosmological simulations of Milky Way-like galaxies carried out as part of the "Evolution and Assembly of GaLaxies and their Environments" (EAGLE) programme. Galactic halos in the simulation have significantly different properties from those assumed by the "standard halo model" often used in dark matter detection studies. The formation of the galaxy causes a contraction of the dark matter halo, whose density profile develops a steeper slope than the Navarro-Frenk-White profile between $r\\approx1.5~\\rm{kpc}$ and $r\\approx10~\\rm{kpc}$, and a flatter slope at smaller radii. The inner regions of the halos are almost perfectly spherical (axis ratios $b/a > 0.96$ within $r=500~\\rm{pc}$) and there is no offset larger than $45~\\rm{pc}$ between the centre of the stellar distribution and the centre of the dark halo. The morphology of the predicted dark matter annihilation radiation signal is in...

  9. Search for photon-linelike signatures from dark matter annihilations with H.E.S.S.

    Science.gov (United States)

    Abramowski, A; Acero, F; Aharonian, F; Akhperjanian, A G; Anton, G; Balenderan, S; Balzer, A; Barnacka, A; Becherini, Y; Becker Tjus, J; Bernlöhr, K; Birsin, E; Biteau, J; Bochow, A; Boisson, C; Bolmont, J; Bordas, P; Brucker, J; Brun, F; Brun, P; Bulik, T; Carrigan, S; Casanova, S; Cerruti, M; Chadwick, P M; Chaves, R C G; Cheesebrough, A; Colafrancesco, S; Cologna, G; Conrad, J; Couturier, C; Dalton, M; Daniel, M K; Davids, I D; Degrange, B; Deil, C; deWilt, P; Dickinson, H J; Djannati-Ataï, A; Domainko, W; Drury, L O'C; Dubus, G; Dutson, K; Dyks, J; Dyrda, M; Egberts, K; Eger, P; Espigat, P; Fallon, L; Farnier, C; Fegan, S; Feinstein, F; Fernandes, M V; Fernandez, D; Fiasson, A; Fontaine, G; Förster, A; Füßling, M; Gajdus, M; Gallant, Y A; Garrigoux, T; Gast, H; Giebels, B; Glicenstein, J F; Glück, B; Göring, D; Grondin, M-H; Häffner, S; Hague, J D; Hahn, J; Hampf, D; Harris, J; Heinz, S; Heinzelmann, G; Henri, G; Hermann, G; Hillert, A; Hinton, J A; Hofmann, W; Hofverberg, P; Holler, M; Horns, D; Jacholkowska, A; Jahn, C; Jamrozy, M; Jung, I; Kastendieck, M A; Katarzyński, K; Katz, U; Kaufmann, S; Khélifi, B; Klepser, S; Klochkov, D; Kluźniak, W; Kneiske, T; Komin, Nu; Kosack, K; Kossakowski, R; Krayzel, F; Krüger, P P; Laffon, H; Lamanna, G; Lefaucheur, J; Lemoine-Goumard, M; Lenain, J-P; Lennarz, D; Lohse, T; Lopatin, A; Lu, C-C; Marandon, V; Marcowith, A; Masbou, J; Maurin, G; Maxted, N; Mayer, M; McComb, T J L; Medina, M C; Méhault, J; Menzler, U; Moderski, R; Mohamed, M; Moulin, E; Naumann, C L; Naumann-Godo, M; de Naurois, M; Nedbal, D; Nekrassov, D; Nguyen, N; Niemiec, J; Nolan, S J; Ohm, S; de Oña Wilhelmi, E; Opitz, B; Ostrowski, M; Oya, I; Panter, M; Parsons, R D; Paz Arribas, M; Pekeur, N W; Pelletier, G; Perez, J; Petrucci, P-O; Peyaud, B; Pita, S; Pühlhofer, G; Punch, M; Quirrenbach, A; Raue, M; Reimer, A; Reimer, O; Renaud, M; de Los Reyes, R; Rieger, F; Ripken, J; Rob, L; Rosier-Lees, S; Rowell, G; Rudak, B; Rulten, C B; Sahakian, V; Sanchez, D A; Santangelo, A; Schlickeiser, R; Schulz, A; Schwanke, U; Schwarzburg, S; Schwemmer, S; Sheidaei, F; Skilton, J L; Sol, H; Spengler, G; Stawarz, L; Steenkamp, R; Stegmann, C; Stinzing, F; Stycz, K; Sushch, I; Szostek, A; Tavernet, J-P; Terrier, R; Tluczykont, M; Trichard, C; Valerius, K; van Eldik, C; Vasileiadis, G; Venter, C; Viana, A; Vincent, P; Völk, H J; Volpe, F; Vorobiov, S; Vorster, M; Wagner, S J; Ward, M; White, R; Wierzcholska, A; Wouters, D; Zacharias, M; Zajczyk, A; Zdziarski, A A; Zech, A; Zechlin, H-S

    2013-01-25

Gamma-ray line signatures can be expected in the very-high-energy (E_γ > 100 GeV) domain due to self-annihilation or decay of dark matter (DM) particles in space. Such a signal would be readily distinguishable from astrophysical γ-ray sources that in most cases produce continuous spectra that span over several orders of magnitude in energy. Using data collected with the H.E.S.S. γ-ray instrument, upper limits on linelike emission are obtained in the energy range between ∼ 500 GeV and ∼ 25 TeV for the central part of the Milky Way halo and for extragalactic observations, complementing recent limits obtained with the Fermi-LAT instrument at lower energies. No statistically significant signal could be found. For monochromatic γ-ray line emission, flux limits of (2 × 10⁻⁷ - 2 × 10⁻⁵) m⁻² s⁻¹ sr⁻¹ and (1 × 10⁻⁸ - 2 × 10⁻⁶) m⁻² s⁻¹ sr⁻¹ are obtained for the central part of the Milky Way halo and extragalactic observations, respectively. For a DM particle mass of 1 TeV, limits on the velocity-averaged DM annihilation cross section ⟨σv⟩(χχ → γγ) reach ∼ 10⁻²⁷ cm³ s⁻¹, based on the Einasto parametrization of the Galactic DM halo density profile.

  10. Cosmological and astrophysical signatures of dark matter annihilations into pseudo-Goldstone bosons

    International Nuclear Information System (INIS)

We investigate a model where the dark matter particle is a chiral fermion field charged under a global U(1) symmetry which is assumed to be spontaneously broken, leading to a pseudo-Goldstone boson (PGB). We argue that dark matter annihilation into PGBs determines the dark matter relic abundance. We also note that experimental searches for PGBs allow either for a very long-lived PGB, with a lifetime much longer than the age of the Universe, or for a relatively short-lived PGB, with a lifetime shorter than one minute. Hence, two different scenarios arise, producing very different signatures. In the long-lived PGB scenario, the PGB might contribute significantly to the radiation energy density of the Universe. In the short-lived PGB scenario, on the other hand, since the decay length is shorter than one parsec, the s-wave annihilation into a PGB and a CP-even dark scalar in the Galactic center might lead to an intense box feature in the gamma-ray energy spectrum, provided the PGB decay branching ratio into two photons is sizable. We also analyze the constraints on these two scenarios from thermal production, the Higgs invisible decay width and direct dark matter searches

  11. Probabilities in physics

    CERN Document Server

    Hartmann, Stephan

    2011-01-01

    Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...

  12. Stochastic Programming with Probability

    CERN Document Server

    Andrieu, Laetitia; Vázquez-Abad, Felisa

    2007-01-01

    In this work we study optimization problems subject to a failure constraint. This constraint is expressed in terms of a condition that causes failure, representing a physical or technical breakdown. We formulate the problem in terms of a probability constraint, where the level of "confidence" is a modelling parameter and has the interpretation that the probability of failure should not exceed that level. Application of the stochastic Arrow-Hurwicz algorithm poses two difficulties: one is structural and arises from the lack of convexity of the probability constraint, and the other is the estimation of the gradient of the probability constraint. We develop two gradient estimators with decreasing bias via a convolution method and a finite difference technique, respectively, and we provide a full analysis of convergence of the algorithms. Convergence results are used to tune the parameters of the numerical algorithms in order to achieve best convergence rates, and numerical results are included via an example of ...
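
    The two difficulties mentioned above can be made concrete with a minimal Monte Carlo sketch: the failure probability P[g(x, ξ) > 0] is estimated by sampling, and its gradient by a central finite difference. The failure function and noise model below are invented for illustration and do not reproduce the convolution-based estimator of the paper.

```python
# Minimal sketch of the two ingredients discussed above, under simplifying
# assumptions: the failure probability P[g(x, xi) > 0] is estimated by Monte
# Carlo, and its gradient in x by a central finite difference (biased for
# finite step size).  The failure function g and the noise model are invented
# for illustration; the paper's convolution-based estimator is not reproduced.
import numpy as np

rng = np.random.default_rng(0)

def g(x, xi):
    """Hypothetical failure function: failure occurs when g > 0."""
    return xi - x          # e.g. a random load xi exceeding capacity x

def failure_prob(x, n=200_000):
    xi = rng.normal(loc=1.0, scale=0.5, size=n)   # assumed load distribution
    return np.mean(g(x, xi) > 0.0)

def failure_prob_grad(x, h=0.05, n=200_000):
    """Central finite-difference estimate of dP/dx."""
    return (failure_prob(x + h, n) - failure_prob(x - h, n)) / (2.0 * h)

x = 1.8
print("P(failure) at x = %.1f : %.4f" % (x, failure_prob(x)))
print("dP/dx (finite diff.)   : %.4f" % failure_prob_grad(x))
```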

  13. Probability an introduction

    CERN Document Server

    Grimmett, Geoffrey

    2014-01-01

    Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...

  14. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure-theoretical considerations, contributions to theoretical statistics an...

  15. Probability and Statistical Inference

    OpenAIRE

    Prosper, Harrison B.

    2006-01-01

    These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

  16. Probability in physics

    CERN Document Server

    Hemmo, Meir

    2012-01-01

What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton's laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world's foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive.

  17. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.;

    either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake “calibrating adjustments” to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments that...

  18. Probability with Roulette

    Science.gov (United States)

    Marshall, Jennings B.

    2007-01-01

    This article describes how roulette can be used to teach basic concepts of probability. Various bets are used to illustrate the computation of expected value. A betting system shows variations in patterns that often appear in random events.
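
    The expected-value computations the article describes are short enough to show directly. The sketch below assumes the standard American double-zero wheel (38 pockets, straight-up pays 35:1, red pays 1:1); these conventions are an assumption here, not quoted from the article.

```python
# Expected value of two standard bets on an assumed American (double-zero)
# roulette wheel: 38 equally likely pockets, a straight-up bet pays 35:1,
# a red bet pays 1:1 with 18 red pockets.  These payout conventions are an
# assumption for illustration, not quoted from the article.
from fractions import Fraction

POCKETS = 38

def expected_value(win_pockets, payout):
    """Expected gain per unit staked for a bet covering win_pockets pockets."""
    p_win = Fraction(win_pockets, POCKETS)
    return p_win * payout - (1 - p_win) * 1

print("straight-up bet:", float(expected_value(1, 35)))   # about -0.0526
print("red bet:        ", float(expected_value(18, 1)))   # about -0.0526
```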

  19. Positron and positronium annihilation in silica-based thin films studied by a pulsed positron beam

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, R. E-mail: r-suzuki@aist.go.jp; Ohdaira, T.; Kobayashi, Y.; Ito, K.; Shioya, Y.; Ishimaru, T

    2003-11-01

Positron and positronium annihilation in silica-based thin films has been investigated by means of measurement techniques with a monoenergetic pulsed positron beam. The age-momentum correlation study revealed that positron annihilation in thermally grown SiO₂ is basically the same as that in bulk amorphous SiO₂, while o-Ps in the PECVD-grown SiCOH film predominantly annihilates with electrons of C and H at the microvoid surfaces. We also discuss time-dependent three-gamma annihilation in porous low-k films by two-dimensional positron annihilation lifetime spectroscopy.

  20. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  1. Asteroidal collision probabilities

    Science.gov (United States)

    Bottke, William F., Jr.; Greenberg, Richard

    1993-01-01

    Several past calculations of collision probabilities between pairs of bodies on independent orbits have yielded inconsistent results. We review the methodologies and identify their various problems. Greenberg's (1982) collision probability formalism (now with a corrected symmetry assumption) is equivalent to Wetherill's (1967) approach, except that it includes a way to avoid singularities near apsides. That method shows that the procedure by Namiki and Binzel (1991) was accurate for those cases where singularities did not arise.

  2. Quantum computing and probability.

    Science.gov (United States)

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  3. Launch Collision Probability

    Science.gov (United States)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
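
    The closed-form solution itself is not reproduced in the abstract, but the underlying computation, in its commonly used short-encounter form, is the mass of a 2-D Gaussian relative-position error inside a circle whose radius is the combined size of the two objects. The sketch below evaluates that integral by Monte Carlo with placeholder covariance, miss distance and hard-body radius.

```python
# Sketch of the short-encounter collision-probability computation: the
# combined position uncertainty is modelled as a 2-D Gaussian in the
# encounter plane, and the collision probability is that Gaussian's mass
# inside a circle of "hard-body" radius (sum of the object radii).
# Covariance, miss distance and radii below are placeholders, not mission
# values, and this is not the report's closed-form solution.
import numpy as np

def collision_probability(miss, cov, hard_body_radius, n=2_000_000, seed=1):
    """Monte Carlo integral of the 2-D Gaussian over the hard-body circle."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean=miss, cov=cov, size=n)
    inside = np.sum(samples[:, 0] ** 2 + samples[:, 1] ** 2 <= hard_body_radius ** 2)
    return inside / n

miss = np.array([200.0, 50.0])                # nominal miss-distance components [m]
cov = np.array([[120.0 ** 2, 0.0],            # combined covariance in the encounter plane [m^2]
                [0.0, 60.0 ** 2]])
print("P(collision) ~", collision_probability(miss, cov, hard_body_radius=15.0))
```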

  4. Neutrino flavor ratios as diagnostic of solar WIMP annihilation

    Science.gov (United States)

    Lehnert, Ralf; Weiler, Thomas J.

    2008-06-01

    We consider the neutrino (and antineutrino) flavors arriving at the Earth for neutrinos produced in the annihilation of weakly interacting massive particles (WIMPs) in the sun’s core. Solar-matter effects on the flavor propagation of the resulting ≳GeV neutrinos are studied analytically within a density-matrix formalism. Matter effects, including mass-state level crossings, influence the flavor fluxes considerably. The exposition herein is somewhat pedagogical, in that it starts with adiabatic evolution of single flavors from the sun’s center, with θ13 set to zero, and progresses to fully realistic processing of the flavor ratios expected in WIMP decay, from the sun’s core to the Earth. In the fully realistic calculation, nonadiabatic level crossing is included, as are possible nonzero values for θ13 and the CP-violating phase δ. Because of resonance enhancement in matter, nonzero values of θ13 even smaller than a degree can noticeably affect flavor propagation. Both normal and inverted neutrino-mass hierarchies are considered. Our main conclusion is that measuring flavor ratios (in addition to energy spectra) of ≳GeV solar neutrinos can provide discrimination between WIMP models. In particular, we demonstrate the flavor differences at the Earth for neutrinos from the two main classes of WIMP final states, namely W+W- and 95%bb¯+5%τ+τ-. Conversely, if WIMP properties were to be learned from production in future accelerators, then the flavor ratios of ≳GeV solar neutrinos might be useful for inferring θ13 and the mass hierarchy. From the full calculations, we find (and prove) some general features: a flavor-democratic flux produced at the sun’s core arrives at the Earth still flavor democratic; for maximal θ32 but arbitrary θ21 and θ13, the replacement δ→π-δ leaves the νe flavor spectra unaltered but interchanges νμ and ντ spectra at the Earth; and, only for neutrinos in the inverted hierarchy and antineutrinos in the normal

  5. Integrated statistical modelling of spatial landslide probability

    Science.gov (United States)

    Mergili, M.; Chu, H.-J.

    2015-09-01

Statistical methods are commonly employed to estimate spatial probabilities of landslide release at the catchment or regional scale. Travel distances and impact areas are often computed by means of conceptual mass point models. The present work introduces a fully automated procedure extending and combining both concepts to compute an integrated spatial landslide probability: (i) the landslide inventory is subset into release and deposition zones. (ii) We employ a simple statistical approach to estimate the pixel-based landslide release probability. (iii) We use the cumulative probability density function of the angle of reach of the observed landslide pixels to assign an impact probability to each pixel. (iv) We introduce the zonal probability, i.e. the spatial probability that at least one landslide pixel occurs within a zone of defined size. We quantify this relationship by a set of empirical curves. (v) The integrated spatial landslide probability is defined as the maximum of the release probability and the product of the impact probability and the zonal release probability relevant for each pixel. We demonstrate the approach with a 637 km² study area in southern Taiwan, using an inventory of 1399 landslides triggered by the typhoon Morakot in 2009. We observe that (i) the average integrated spatial landslide probability over the entire study area corresponds reasonably well to the fraction of the observed landslide area; (ii) the model performs moderately well in predicting the observed spatial landslide distribution; (iii) the size of the release zone (or any other zone of spatial aggregation) influences the integrated spatial landslide probability to a much higher degree than the pixel-based release probability; (iv) removing the largest landslides from the analysis leads to an enhanced model performance.
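
    The combination rule of step (v) is compact enough to state in code. The sketch below applies it to two invented pixels; the probability values are purely illustrative.

```python
# Toy illustration of step (v) above: the integrated spatial landslide
# probability of a pixel is the maximum of its release probability and the
# product of its impact probability with the zonal release probability.
# The per-pixel values are invented for illustration.
def integrated_probability(p_release, p_impact, p_zonal_release):
    return max(p_release, p_impact * p_zonal_release)

pixels = [
    {"p_release": 0.08, "p_impact": 0.60, "p_zonal_release": 0.30},
    {"p_release": 0.02, "p_impact": 0.90, "p_zonal_release": 0.50},
]
for i, px in enumerate(pixels):
    print(f"pixel {i}: integrated probability = {integrated_probability(**px):.3f}")
```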

  6. Spatial correlation properties and phase singularity annihilation of Gaussian Schell-model beams in the focal region

    Institute of Scientific and Technical Information of China (English)

    Liu Pu-Sheng; Pan Liu-Zhan; Lü Bai-Da

    2008-01-01

By using the generalized Debye diffraction integral, this paper studies the spatial correlation properties and phase singularity annihilation of apertured Gaussian Schell-model (GSM) beams in the focal region. It is shown that the width of the spectral degree of coherence can be larger than, less than or equal to the corresponding width of the spectral density, which depends not only on the scalar coherence length of the beams, but also on the truncation parameter. With a gradual increase of the truncation parameter, a pair of phase singularities of the spectral degree of coherence in the focal plane approaches each other, resulting in subwavelength structures. Finally, the annihilation of pairs of phase singularities takes place at a certain value of the truncation parameter. With increasing scalar coherence length, the annihilation occurs at a larger truncation parameter. However, the creation process of phase singularities outside the focal plane is not found for GSM beams.

  7. Does the gamma-ray signal from the central Milky Way indicate Sommerfeld enhancement of dark matter annihilation?

    Science.gov (United States)

    Chan, Man-Ho

    2016-10-01

    Recently, some studies showed that the GeV gamma-ray excess signal from the central Milky Way can be explained by the annihilation of ∼ 40 GeV dark matter through the bb¯ channel. Based on the morphology of the gamma-ray flux, the best-fit inner slope of the dark matter density profile is γ = 1.26. However, recent analyses of the Milky Way dark matter profile favor γ = 0.6 – 0.8. In this article, we show that the GeV gamma-ray excess can also be explained by the Sommerfeld-enhanced dark matter annihilation through the bb¯ channel with γ = 0.85 – 1.05. We constrain the parameters of the Sommerfeld-enhanced annihilation by using data from Fermi-LAT. We also show that the predicted gamma-ray fluxes emitted from dwarf galaxies generally satisfy recent upper limits on gamma-ray fluxes detected by Fermi-LAT.
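
    The record does not state the enhancement parametrization it constrains; purely for orientation, the sketch below evaluates the textbook s-wave Coulomb (massless-mediator) Sommerfeld factor S(v) = (πα/v)/(1 - exp(-πα/v)) at a few illustrative velocities and an assumed coupling.

```python
# Sketch of a Sommerfeld enhancement factor.  The record does not state the
# parametrization it fits, so this uses the textbook s-wave Coulomb
# (massless-mediator) form S(v) = (pi*alpha/v) / (1 - exp(-pi*alpha/v))
# purely for orientation; the coupling and velocities are illustrative.
import math

def sommerfeld_coulomb(v, alpha=0.01):
    """Coulomb-limit s-wave enhancement; v is the relative velocity in units of c."""
    x = math.pi * alpha / v
    return x / (1.0 - math.exp(-x))

for v in (1e-1, 1e-2, 1e-3):   # e.g. freeze-out-like vs. halo-like velocities
    print(f"v = {v:g} c  ->  S = {sommerfeld_coulomb(v):.2f}")
```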

  8. Does the gamma-ray signal from the central Milky Way indicate Sommerfeld enhancement of dark matter annihilation?

    Science.gov (United States)

    Chan, Man-Ho

    2016-10-01

    Recently, some studies showed that the GeV gamma-ray excess signal from the central Milky Way can be explained by the annihilation of ˜ 40 GeV dark matter through the bb¯ channel. Based on the morphology of the gamma-ray flux, the best-fit inner slope of the dark matter density profile is γ = 1.26. However, recent analyses of the Milky Way dark matter profile favor γ = 0.6 – 0.8. In this article, we show that the GeV gamma-ray excess can also be explained by the Sommerfeld-enhanced dark matter annihilation through the bb¯ channel with γ = 0.85 – 1.05. We constrain the parameters of the Sommerfeld-enhanced annihilation by using data from Fermi-LAT. We also show that the predicted gamma-ray fluxes emitted from dwarf galaxies generally satisfy recent upper limits on gamma-ray fluxes detected by Fermi-LAT.

  9. Constraint on dark matter annihilation with dark star formation using Fermi extragalactic diffuse gamma-ray background data

    CERN Document Server

    Yuan, Qiang; Zhang, Bing; Chen, Xuelei

    2011-01-01

It has been proposed that during the formation of the first generation stars there might be a "dark star" phase in which the power of the star comes from dark matter annihilation. The adiabatic contraction process that forms the dark star would at the same time result in a highly concentrated density profile of the host halo, which may give enhanced indirect detection signals of dark matter. In this work we investigate the extragalactic $\\gamma$-ray background from dark matter annihilation within such a dark star formation scenario, and employ the isotropic $\\gamma$-ray data from Fermi-LAT to constrain the model parameters of dark matter. The results suffer from large uncertainties in both the formation rate of the first generation stars and the subsequent evolution of the host halos of the dark stars. We find that, in the most optimistic case for $\\gamma$-ray production via dark matter annihilation, the expected extragalactic $\\gamma$-ray flux will be enhanced by 1-2 orders of magnitude. In such a case, the an...

  10. Does the gamma-ray signal from the central Milky Way indicate Sommerfeld enhancement of dark matter annihilation?

    CERN Document Server

    Chan, Man Ho

    2016-01-01

    Recently, Daylan et al. (2014) show that the GeV gamma-ray excess signal from the central Milky Way can be explained by the annihilation of $\\sim 40$ GeV dark matter through $b\\bar{b}$ channel. Based on the morphology of the gamma-ray flux, the best-fit inner slope of the dark matter density profile is $\\gamma=1.26$. However, recent analyses of Milky Way dark matter profile favor $\\gamma=0.6-0.8$. In this article, we show that the GeV gamma-ray excess can also be explained by the Sommerfeld-enhanced dark matter annihilation through $b\\bar{b}$ channel with $\\gamma=0.85-1.05$. We constrain the parameters of the Sommerfeld-enhanced annihilation by using the data from Fermi-LAT. We also show that the predicted gamma-ray fluxes emitted from dwarf galaxies generally satisfy the recent upper limits of gamma-ray fluxes detected by Fermi-LAT.

  11. Conserved Densities of the Black-Scholes Equation

    Institute of Scientific and Technical Information of China (English)

    QIN Mao-Chang; MEI Feng-Xiang; SHANG Mei

    2005-01-01

A class of new conserved densities of the Black-Scholes equation is constructed by using the multiplier that is derived from the result of divergence expression annihilation under the full Euler operator. The method does not depend on the symmetries of the Black-Scholes equation. These conserved densities can be expressed by solutions of the classical heat equation.

  12. The characterization of the gamma-ray signal from the central Milky Way: A case for annihilating dark matter

    Science.gov (United States)

    Daylan, Tansu; Finkbeiner, Douglas P.; Hooper, Dan; Linden, Tim; Portillo, Stephen K. N.; Rodd, Nicholas L.; Slatyer, Tracy R.

    2016-06-01

Past studies have identified a spatially extended excess of ∼1-3 GeV gamma rays from the region surrounding the Galactic Center, consistent with the emission expected from annihilating dark matter. We revisit and scrutinize this signal with the intention of further constraining its characteristics and origin. By applying cuts to the Fermi event parameter CTBCORE, we suppress the tails of the point spread function and generate high resolution gamma-ray maps, enabling us to more easily separate the various gamma-ray components. Within these maps, we find the GeV excess to be robust and highly statistically significant, with a spectrum, angular distribution, and overall normalization that is in good agreement with that predicted by simple annihilating dark matter models. For example, the signal is very well fit by a 36-51 GeV dark matter particle annihilating to bb̄ with an annihilation cross section of σv = (1-3) × 10⁻²⁶ cm³/s (normalized to a local dark matter density of 0.4 GeV/cm³). Furthermore, we confirm that the angular distribution of the excess is approximately spherically symmetric and centered around the dynamical center of the Milky Way (within ∼0.05° of Sgr A*), showing no sign of elongation along the Galactic Plane. The signal is observed to extend to at least ≃10° from the Galactic Center, which together with its other morphological traits disfavors the possibility that this emission originates from previously known or modeled pulsar populations.
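
    The best-fit numbers quoted above can be turned into a local annihilation rate per unit volume with the standard relation dN/(dV dt) = ρ²⟨σv⟩/(2 m_χ²) for self-conjugate dark matter. The sketch below is a back-of-envelope plug-in exercise, not part of the cited analysis.

```python
# Back-of-envelope use of the best-fit numbers quoted above: the local
# annihilation rate per unit volume for a self-conjugate WIMP is
#   dN/(dV dt) = rho^2 * <sigma v> / (2 * m_chi^2).
# The factor 1/2 assumes self-conjugate dark matter; this is a plug-in
# exercise for illustration, not part of the cited analysis.
RHO_LOCAL = 0.4        # local DM density [GeV / cm^3], as quoted above
M_CHI = 45.0           # DM mass [GeV], inside the quoted 36-51 GeV window
SIGMA_V = 2.0e-26      # annihilation cross section [cm^3 / s], inside (1-3)e-26

n_chi = RHO_LOCAL / M_CHI                        # number density [cm^-3]
rate_density = 0.5 * n_chi ** 2 * SIGMA_V        # annihilations [cm^-3 s^-1]
print(f"n_chi ~ {n_chi:.2e} cm^-3, local rate ~ {rate_density:.2e} cm^-3 s^-1")
```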

  13. Experimental Probability in Elementary School

    Science.gov (United States)

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  14. The pleasures of probability

    CERN Document Server

    Isaac, Richard

    1995-01-01

The ideas of probability are all around us. Lotteries, casino gambling, the almost non-stop polling which seems to mold public policy more and more: these are a few of the areas where principles of probability impinge in a direct way on the lives and fortunes of the general public. At a more removed level there is modern science which uses probability and its offshoots like statistics and the theory of random processes to build mathematical descriptions of the real world. In fact, twentieth-century physics, in embracing quantum mechanics, has a world view that is at its core probabilistic in nature, contrary to the deterministic one of classical physics. In addition to all this muscular evidence of the importance of probability ideas it should also be said that probability can be lots of fun. It is a subject where you can start thinking about amusing, interesting, and often difficult problems with very little mathematical background. In this book, I wanted to introduce a reader with at least a fairl...

  15. POSITRON ANNIHILATION AND CONDUCTIVITY MEASUREMENTS ON POLYANILINE

    Institute of Scientific and Technical Information of China (English)

彭治林; 刘皓; et al.

    1994-01-01

The positron lifetime spectra and electrical conductivities have been measured for polyaniline as a function of protonation level ([H+] from 10^-7 to 10^0.8 mol/L). We observed that (1) the short lifetime τ1, which is related to the electron density in the bulk, decreased with the protonation level; (2) the intermediate lifetime τ2 ≈ 360 ps remained almost constant, whereas its intensity I2 increased with increasing protonation level, which is related to the conductivity of the material. These results are discussed in terms of a conducting-island model.

  16. Probabilities from Envariance

    CERN Document Server

    Zurek, W H

    2004-01-01

I show how probabilities arise in quantum physics by exploring implications of {\\it environment-assisted invariance} or {\\it envariance}, a recently discovered symmetry exhibited by entangled quantum systems. Envariance of perfectly entangled states can be used to rigorously justify complete ignorance of the observer about the outcome of any measurement on either of the members of the entangled pair. Envariance leads to Born's rule, $p_k \\propto |\\psi_k|^2$. Probabilities derived in this manner are an objective reflection of the underlying state of the system -- they reflect experimentally verifiable symmetries, and not just a subjective ``state of knowledge'' of the observer. The envariance-based approach is compared with, and found superior to, the key pre-quantum definitions of probability, including the {\\it standard definition} based on the `principle of indifference' due to Laplace, and the {\\it relative frequency approach} advocated by von Mises. Implications of envariance for the interpretation of quantu...
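
    Whatever its derivation, the Born rule itself, p_k ∝ |ψ_k|², is a one-line computation. The sketch below applies it to an arbitrary made-up state vector; it illustrates only the rule, not the envariance argument.

```python
# The Born rule quoted above, p_k proportional to |psi_k|^2, applied to an
# arbitrary (made-up) state vector; normalization makes the probabilities
# sum to one.  This illustrates only the rule itself, not the envariance
# derivation discussed in the record.
import numpy as np

psi = np.array([1.0 + 1.0j, 2.0, 0.5j])       # unnormalized example amplitudes
probs = np.abs(psi) ** 2
probs /= probs.sum()
print("p_k =", np.round(probs, 4), " sum =", probs.sum())
```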

  17. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2010-01-01

This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models...
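
    A concrete special case of a CPGF is the log-sum-exp function of the multinomial logit model, whose gradient is the familiar softmax of the utilities. The sketch below checks that numerically for made-up utilities; it illustrates only this special case, not the paper's general construction.

```python
# A concrete special case of the CPGF idea described above: for the
# multinomial logit model the generating function is the log-sum-exp of the
# utilities, and its gradient gives the softmax choice probabilities.
# This illustrates only that special case, not the paper's general results.
import numpy as np

def logit_cpgf(u):
    return np.log(np.sum(np.exp(u)))

def choice_probabilities(u, eps=1e-6):
    """Numerical gradient of the CPGF; should match the analytic softmax."""
    grad = np.zeros_like(u)
    for i in range(len(u)):
        du = np.zeros_like(u)
        du[i] = eps
        grad[i] = (logit_cpgf(u + du) - logit_cpgf(u - du)) / (2 * eps)
    return grad

u = np.array([1.0, 0.5, -0.2])                    # hypothetical utilities
softmax = np.exp(u) / np.exp(u).sum()
print("gradient of CPGF:", np.round(choice_probabilities(u), 4))
print("analytic softmax:", np.round(softmax, 4))
```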

  18. The Pauli equation for probability distributions

    International Nuclear Information System (INIS)

    The tomographic-probability distribution for a measurable coordinate and spin projection is introduced to describe quantum states as an alternative to the density matrix. An analogue of the Pauli equation for the spin-1/2 particle is obtained for such a probability distribution instead of the usual equation for the wavefunction. Examples of the tomographic description of Landau levels and coherent states of a charged particle moving in a constant magnetic field are presented. (author)

  19. The Pauli equation for probability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Mancini, S. [INFM, Dipartimento di Fisica, Universita di Milano, Milan (Italy). E-mail: Stefano.Mancini@mi.infn.it; Man' ko, O.V. [P.N. Lebedev Physical Institute, Moscow (Russian Federation). E-mail: Olga.Manko@sci.lebedev.ru; Man' ko, V.I. [INFM, Dipartimento di Matematica e Fisica, Universita di Camerino, Camerino (Italy). E-mail: Vladimir.Manko@sci.lebedev.ru; Tombesi, P. [INFM, Dipartimento di Matematica e Fisica, Universita di Camerino, Camerino (Italy). E-mail: Paolo.Tombesi@campus.unicam.it

    2001-04-27

    The tomographic-probability distribution for a measurable coordinate and spin projection is introduced to describe quantum states as an alternative to the density matrix. An analogue of the Pauli equation for the spin-1/2 particle is obtained for such a probability distribution instead of the usual equation for the wavefunction. Examples of the tomographic description of Landau levels and coherent states of a charged particle moving in a constant magnetic field are presented. (author)

  20. Introduction to imprecise probabilities

    CERN Document Server

    Augustin, Thomas; de Cooman, Gert; Troffaes, Matthias C M

    2014-01-01

    In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, includin

  1. Negative Probabilities and Contextuality

    CERN Document Server

    de Barros, J Acacio; Oas, Gary

    2015-01-01

    There has been a growing interest, both in physics and psychology, in understanding contextuality in experimentally observed quantities. Different approaches have been proposed to deal with contextual systems, and a promising one is contextuality-by-default, put forth by Dzhafarov and Kujala. The goal of this paper is to present a tutorial on a different approach: negative probabilities. We do so by presenting the overall theory of negative probabilities in a way that is consistent with contextuality-by-default and by examining with this theory some simple examples where contextuality appears, both in physics and psychology.

  2. Classic Problems of Probability

    CERN Document Server

    Gorroochurn, Prakash

    2012-01-01

    "A great book, one that I will certainly add to my personal library."—Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexin

  3. Singlet-triplet annihilation limits exciton yield in poly(3-hexylthiophene)

    CERN Document Server

    Steiner, Florian; Lupton, John M

    2014-01-01

    Control of chain length and morphology in combination with single-molecule spectroscopy techniques provide a comprehensive photophysical picture of excited-state losses in the prototypical conjugated polymer poly(3-hexylthiophene) (P3HT). A universal self-quenching mechanism is revealed, based on singlet-triplet exciton annihilation, which accounts for the dramatic loss in fluorescence quantum yield of a single P3HT chain between its solution (unfolded) and bulk-like (folded) state. Triplet excitons fundamentally limit the fluorescence of organic photovoltaic materials, which impacts on the conversion of singlet excitons to separated charge carriers, decreasing the efficiency of energy harvesting at high excitation densities. Interexcitonic interactions are so effective that a single P3HT chain of >100 kDa weight behaves like a two-level system, exhibiting perfect photon-antibunching.

  4. Positron annihilation study of microvoids in centrifugally atomized 304 stainless steel

    Science.gov (United States)

    Kim, J. Y.; Byrne, J. G.

    1993-03-01

Positron trapping in microvoids was studied by positron-lifetime and positron Doppler line-shape measurements of centrifugally atomized 304 stainless-steel powder, which was hot-isostatically-press consolidated. This material contained a concentration of several times 10²³/m³ of 1.5-nm-diam microvoids. Positron annihilation was strongly influenced by the microvoids in that a very long lifetime component τ3 of about 600 ps resulted. The intensity of the τ3 component decreased with decreasing number density of 1.5 nm microvoids. The Doppler peak shape was found to be much more strongly influenced by microvoids than by any other defects such as precipitates or grain boundaries. In particular microvoids produced significant narrowing of the Doppler distribution shape.

  5. A simple powerful description of meson and baryon flavor formation in e+e- annihilations

    International Nuclear Information System (INIS)

A simple ansatz, using a fragmentation function that is derived from area-law and relativistic-string considerations as the production density of hadrons, describes a broad range of e+e- annihilation data without the need for many quark-level parameters. This approach inherently provides a semi-absolute baryon-to-meson normalization structure. The model emphasizes intermediate meson (popcorn) production between baryon and antibaryon as a new degree of freedom for baryon production. This ansatz is implemented using a modified version of the Lund JETSET Monte Carlo program, utilizing its parton shower treatment and hadron decay tables. With only ∼5 significant parameters and choices, quite good agreement has been found with a wide range of data at 10, 29 and 91 GeV. (author) 5 figs.; 1 tab
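
    The abstract does not spell out the production density it uses; as a point of orientation, the sketch below evaluates the Lund symmetric fragmentation function f(z) ∝ (1/z)(1-z)^a exp(-b m_T²/z), which follows from the same area-law and string arguments, with typical (not fitted) parameter values.

```python
# The abstract does not spell out the production density it uses; as a point
# of orientation, this evaluates the Lund symmetric fragmentation function,
#   f(z) ~ (1/z) * (1 - z)**a * exp(-b * mT**2 / z),
# which comes from the same area-law / relativistic-string arguments.  The
# parameter values are typical textbook choices, not the paper's fit.
import numpy as np

def lund_f(z, m_t, a=0.68, b=0.98):
    """Unnormalized Lund symmetric fragmentation function (m_t in GeV, b in GeV^-2)."""
    z = np.asarray(z, dtype=float)
    return (1.0 / z) * (1.0 - z) ** a * np.exp(-b * m_t ** 2 / z)

z = np.linspace(0.05, 0.95, 10)
vals = lund_f(z, m_t=0.7)                 # e.g. a rho-like transverse mass
vals /= np.trapz(vals, z)                 # normalize on the sampled grid
for zi, fi in zip(z, vals):
    print(f"z = {zi:.2f}  f(z) = {fi:.3f}")
```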

  6. Self-annihilation of inversion domains by high energy defects in III-Nitrides

    Energy Technology Data Exchange (ETDEWEB)

    Koukoula, T.; Kioseoglou, J., E-mail: sifisl@auth.gr; Kehagias, Th.; Komninou, Ph. [Department of Physics, Aristotle University of Thessaloniki, GR-54124 Thessaloniki (Greece); Ajagunna, A. O.; Georgakilas, A. [Microelectronics Research Group, IESL, FORTH, P.O. Box 1385, GR-71110 Heraklion, Crete, Greece and Department of Physics, University of Crete, P.O. Box 2208, GR-71003 Heraklion, Crete (Greece)

    2014-04-07

Low-defect density InN films were grown on Si(111) by molecular beam epitaxy over an ∼1 μm thick GaN/AlN buffer/nucleation layer. Electron microscopy observations revealed the presence of inverse polarity domains propagating across the GaN layer and terminating at the sharp GaN/InN (0001̄) interface, whereas no inversion domains were detected in InN. The systematic annihilation of GaN inversion domains at the GaN/InN interface is explained in terms of indium incorporation on the Ga-terminated inversion domains forming a metal-bonded In-Ga bilayer, a structural instability known as the basal inversion domain boundary, during the initial stages of InN growth on GaN.

  7. Positron annihilation study of Fe-ion irradiated reactor pressure vessel model alloys

    Science.gov (United States)

    Chen, L.; Li, Z. C.; Schut, H.; Sekimura, N.

    2016-01-01

The degradation of reactor pressure vessel steels under irradiation, which results from the hardening and embrittlement caused by a high number density of nanometer-scale damage, is of increasingly crucial concern for safe nuclear power plant operation and possible reactor lifetime prolongation. In this paper, the radiation damage in model alloys with increasing chemical complexity (Fe, Fe-Cu, Fe-Cu-Si, Fe-Cu-Ni and Fe-Cu-Ni-Mn) has been studied by positron annihilation Doppler broadening spectroscopy after 1.5 MeV Fe-ion implantation at room temperature or at high temperature (290 °C). It is found that the room temperature irradiation generally leads to the formation of vacancy-type defects in the Fe matrix. The high temperature irradiation exhibits an additional annealing effect for the radiation damage. Besides the Cu-rich clusters observed by the positron probe, the results show formation of vacancy-Mn complexes for implantation at low temperatures.

  8. Characterization of defect accumulation in neutron-irradiated Mo by positron annihilation spectroscopy

    DEFF Research Database (Denmark)

    Eldrup, Morten Mostgaard; Li, Meimei; Snead, L.L.;

    2008-01-01

Positron annihilation lifetime spectroscopy measurements were performed on neutron-irradiated low carbon arc cast Mo. Irradiation took place in the high flux isotope reactor, Oak Ridge National Laboratory, at a temperature of 80 ± 10 °C. Neutron fluences ranged from 2 × 10²¹ to 8 × 10²⁴ n/m² (E > 0.1 MeV), corresponding to displacement damage levels in the range from 7.2 × 10⁻⁵ to 2.8 × 10⁻¹ displacements per atom (dpa). A high density of submicroscopic cavities was observed in the neutron-irradiated Mo and their size distributions were estimated. Cavities were detected even...

  9. Wigner Function and Phase Probability Distribution of q-Analogueof Squeezed One-Photon State

    Institute of Scientific and Technical Information of China (English)

    FANG Jian-Shu; MENG Xiang-Guo; ZHANG Xiang-Ping; WANG Ji-Suo; LIANG Bao-Long

    2008-01-01

In this paper, in terms of the technique of integration within an ordered product (IWOP) of operators and the properties of the inverses of q-deformed annihilation and creation operators, a normalizable q-analogue of the squeezed one-photon state, which is quite different from the one introduced by Song and Fan [Int. J. Theor. Phys. 41 (2002) 695], is constructed. Moreover, the Wigner function and phase probability distribution of the q-analogue of the squeezed one-photon state are examined.

  10. Probabilities and energies to obtain the counting efficiency of electron-capture nuclides, KLMN model

    International Nuclear Information System (INIS)

    An intelligent computer program has been developed to obtain the mathematical formulae to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of the M-electron-capture in the counting efficiency when the atomic number of the nuclide is high

  11. DM rate at NLO and the impact of SUSY-QCD-corrections to (co-)annihilation-processes on neutralino dark matter

    International Nuclear Information System (INIS)

A powerful method to constrain the parameter space of theories beyond the Standard Model is to compare the predicted dark matter relic density with cosmological precision measurements, in particular with WMAP and the upcoming Planck data. On the particle physics side, the main uncertainty on the relic density arises from the (co-)annihilation cross sections of the dark matter particle. After a motivation for including higher-order corrections in the prediction of the relic density, the 'DM rate at NLO' project will be presented, a software package that allows for the computation of the neutralino (co-)annihilation cross sections including SUSY-QCD corrections at the one-loop level and for the evaluation of their effect on the relic density through a link to the public codes MicrOMEGAs and DarkSUSY. Recent results on the impact of SUSY-QCD corrections on the neutralino (co-)annihilation cross section, as well as further ongoing projects in the context of the 'DM rate at NLO' project, are discussed.

  12. Epistemology and Probability

    CERN Document Server

    Plotnitsky, Arkady

    2010-01-01

    Offers an exploration of the relationships between epistemology and probability in the work of Niels Bohr, Werner Heisenberg, and Erwin Schrodinger; in quantum mechanics; and in modern physics. This book considers the implications of these relationships and of quantum theory for our understanding of the nature of thinking and knowledge in general

  13. Logic and probability

    OpenAIRE

    Quznetsov, G. A.

    2003-01-01

    The propositional logic is generalized on the real numbers field. The logical analog of the Bernoulli independent tests scheme is constructed. The variant of the nonstandard analysis is adopted for the definition of the logical function, which has all properties of the classical probability function. The logical analog of the Large Number Law is deduced from properties of this function.

  14. Counterexamples in probability

    CERN Document Server

    Stoyanov, Jordan M

    2013-01-01

    While most mathematical examples illustrate the truth of a statement, counterexamples demonstrate a statement's falsity. Enjoyable topics of study, counterexamples are valuable tools for teaching and learning. The definitive book on the subject in regards to probability, this third edition features the author's revisions and corrections plus a substantial new appendix.

  15. Logic, Truth and Probability

    OpenAIRE

    Quznetsov, Gunn

    1998-01-01

    The propositional logic is generalized on the real numbers field. The logical analog of the Bernoulli independent tests scheme is constructed. The variant of the nonstandard analysis is adopted for the definition of the logical function, which has all properties of the classical probability function. The logical analog of the Large Number Law is deduced from properties of this function.

  16. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.;

    2014-01-01

    that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...

  17. Transition probabilities for atoms

    International Nuclear Information System (INIS)

    Current status of advanced theoretical methods for transition probabilities for atoms and ions is discussed. An experiment on the f values of the resonance transitions of the Kr and Xe isoelectronic sequences is suggested as a test for the theoretical methods

  18. On thermal corrections to near-threshold co-annihilation

    CERN Document Server

    Kim, Seyong

    2016-01-01

    We consider non-relativistic "dark" particles interacting through gauge boson exchange. At finite temperature, gauge exchange is modified in many ways: virtual corrections lead to Debye screening; real corrections amount to frequent scatterings of the heavy particles on light plasma constituents; mixing angles change. In a certain temperature and energy range, these effects are of order unity. Taking them into account in a resummed form, we estimate the near-threshold spectrum of kinetically equilibrated co-annihilating TeV scale particles. Weakly bound states are shown to "melt" below freeze-out, whereas with attractive strong interactions, relevant e.g. for gluinos, bound states boost the co-annihilation rate by a factor 4...80 with respect to the Sommerfeld estimate, thereby perhaps helping to avoid overclosure of the universe. Modestly non-degenerate dark sector masses and a way to combine the contributions of channels with different gauge and spin structures are also discussed.

  19. Antiprotons from dark matter annihilation in the Galaxy: astrophysical uncertainties

    CERN Document Server

    Evoli, Carmelo; Grasso, Dario; Maccione, Luca; Ullio, Piero

    2011-01-01

    Recent years have seen steady progress in WIMP dark matter (DM) searches, with hints of possible signals suggested by both direct and indirect detection experiments. Antiprotons can play a key role in validating those interpretations, since they are copiously produced by WIMP annihilations in the Galactic halo, and the secondary antiproton background produced by cosmic ray (CR) interactions is predicted with fair accuracy and matches the observed spectrum very well. Using the publicly available numerical DRAGON code, we reconsider antiprotons as a tool to constrain DM models, discussing its power and limitations. We provide updated constraints on a wide class of annihilating DM models by comparing our predictions against the most up-to-date antiproton measurements, also taking into account the latest spectral information on the p, He and other CR nuclei fluxes. In doing so, we carefully probe the uncertainties associated with both secondary and DM-originated antiprotons, by using a variety of distinctively different as...

  20. Positron Annihilation Studies of Mesoporous Silica MCM-41

    International Nuclear Information System (INIS)

    Positron annihilation has been used to study the mesoporous silica MCM-41. Lifetime spectra of evacuated MCM-41 indicate a significant contribution from 3γ annihilation events with τ4 = 116 ns and I4 = 24.5 %. This is supported by measurements of the full energy distribution, where MCM-41 shows enhanced counts in the low energy region (below 511 keV) relative to a pure 2γ sample. MCM-41 was also studied under air and oxygen atmospheres. The presence of atmosphere has a significant effect on both the lifetime and Doppler patterns, with both the lifetime data (τ4 and I4) and the 3γ-fraction decreasing with increasing oxygen concentration. This is indicative of paramagnetic quenching of o-Ps by oxygen.

  1. Dark matter annihilation bound from the diffuse gamma ray flux

    Energy Technology Data Exchange (ETDEWEB)

    Kachelriess, M.; /Norwegian U. Sci. Tech.; Serpico, P.D.; /Fermilab

    2007-07-01

    An upper limit on the total annihilation rate of dark matter (DM) has been recently derived from the observed atmospheric neutrino background. It is a very conservative upper bound based on the sole hypothesis that the DM annihilation products are the least detectable final states in the Standard Model (SM), neutrinos. Any other decay channel into SM particles would lead to stronger constraints. We show that comparable bounds are obtained for DM masses around the TeV scale by observations of the diffuse gamma ray flux by EGRET, because electroweak bremsstrahlung leads to non-negligible electromagnetic branching ratios, even if DM particles only couple to neutrinos at tree level. A better mapping and the partial resolution of the diffuse gamma-ray background into astrophysical sources by the GLAST satellite will improve this bound in the near future.

  2. Positron annihilation studies of some charge transfer molecular complexes

    CERN Document Server

    El-Sayed, A; Boraei, A A A

    2000-01-01

    Positron annihilation lifetimes were measured for some solid charge transfer (CT) molecular complexes of quinoline compounds (2,6-dimethylquinoline, 6-methoxyquinoline, quinoline, 6-methylquinoline, 3-bromoquinoline and 2-chloro-4-methylquinoline) as electron donor and picric acid as an electron acceptor. The infrared spectra (IR) of the solid complexes clearly indicated the formation of the hydrogen-bonding CT-complexes. The annihilation spectra were analyzed into two lifetime components using PATFIT program. The values of the average and bulk lifetimes divide the complexes into two groups according to the non-bonding ionization potential of the donor (electron donating power) and the molecular weight of the complexes. Also, it is found that the ionization potential of the donors and molecular weight of the complexes have a conspicuous effect on the average and bulk lifetime values. The bulk lifetime values of the complexes are consistent with the formation of stable hydrogen-bonding CT-complexes as inferred...
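
    A minimal sketch of the kind of two-component decomposition performed here is given below; it is not the PATFIT code, the instrument resolution function is ignored, and the spectrum is synthetic, so all numbers are placeholders.

      import numpy as np
      from scipy.optimize import curve_fit

      def lifetime_model(t, N0, I1, tau1, tau2, B):
          """Two-component positron lifetime spectrum: intensities I1 and 1-I1,
          lifetimes tau1, tau2 (ns), flat background B."""
          I2 = 1.0 - I1
          return N0 * (I1 / tau1 * np.exp(-t / tau1) + I2 / tau2 * np.exp(-t / tau2)) + B

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 5.0, 250)                               # ns
      true = dict(N0=5.0e4, I1=0.7, tau1=0.20, tau2=0.45, B=20.0)  # synthetic "truth"
      counts = rng.poisson(lifetime_model(t, **true))              # simulated spectrum

      p0 = [4.0e4, 0.5, 0.15, 0.50, 10.0]
      bounds = ([0.0, 0.0, 0.01, 0.01, 0.0], [np.inf, 1.0, 5.0, 5.0, np.inf])
      popt, _ = curve_fit(lifetime_model, t, counts, p0=p0, bounds=bounds,
                          sigma=np.sqrt(counts + 1.0), absolute_sigma=True)
      N0, I1, tau1, tau2, B = popt
      tau_avg = I1 * tau1 + (1.0 - I1) * tau2                      # intensity-weighted average lifetime
      print(f"tau1 = {tau1:.3f} ns (I1 = {I1:.2f}), tau2 = {tau2:.3f} ns, <tau> = {tau_avg:.3f} ns")

    The intensity-weighted sum in the last lines is the usual definition of the average lifetime mentioned above; the bulk lifetime requires a different combination of the fitted components and is not reproduced here.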

  3. Strong CMB Constraint On P-Wave Annihilating Dark Matter

    CERN Document Server

    An, Haipeng; Zhang, Yue

    2016-01-01

    We consider a dark sector consisting of dark matter that is a Dirac fermion and a scalar mediator. This model has been extensively studied in the past. If the scalar couples to the dark matter in a parity conserving manner then dark matter annihilation to two mediators is dominated by the P-wave channel and hence is suppressed at very low momentum. The indirect detection constraint from the anisotropy of the Cosmic Microwave Background is usually thought to be absent in the model because of this suppression. In this letter we show that dark matter annihilation to bound states occurs through the S-wave and hence there is a constraint on the parameter space of the model from the Cosmic Microwave Background.
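
    The suppression invoked here can be made explicit with the standard low-velocity partial-wave expansion of the annihilation cross section (a generic expansion, not a result specific to this model):

        \sigma v \;=\; a \;+\; b\,v^{2} \;+\; \mathcal{O}(v^{4}),

    where the S-wave coefficient a is velocity independent while the P-wave part enters as b v^2. Typical relative velocities drop from a few tenths of c at freeze-out to values many orders of magnitude smaller around recombination, so purely P-wave annihilation leaves essentially no imprint on the CMB, whereas capture into bound states proceeds through the S-wave and is not suppressed, which is the origin of the constraint described above.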

  4. Rapid thermal co-annihilation through bound states

    CERN Document Server

    Kim, Seyong

    2016-01-01

    The co-annihilation rate of heavy particles close to thermal equilibrium, which plays a role in many classic dark matter scenarios, can be "simulated" in QCD by considering the pair annihilation rate of a heavy quark and antiquark at a temperature of a few hundred MeV. We show that the so-called Sommerfeld factors, parameterizing the rate, can be defined and measured non-perturbatively within the NRQCD framework. Lattice measurements indicate a modest suppression in the octet channel, in reasonable agreement with perturbation theory, and a large enhancement in the singlet channel, much above the perturbative prediction. We suggest that the additional enhancement originates from bound state formation and subsequent decay, omitted in previous estimates of thermal Sommerfeld factors, which were based on Boltzmann equations governing single-particle phase space distributions.
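
    For comparison, the perturbative benchmark is, for a Coulombic potential V(r) = -alpha_eff/r, the classic Sommerfeld factor (standard textbook form, not the non-perturbative lattice definition used in this work):

        S(v_{\mathrm{rel}}) \;=\; \frac{X}{1 - e^{-X}}, \qquad X \;=\; \frac{2\pi\,\alpha_{\mathrm{eff}}}{v_{\mathrm{rel}}},

    which enhances the rate in an attractive channel (alpha_eff > 0) and suppresses it in a repulsive one (alpha_eff < 0, giving X < 0 and S < 1). For a heavy quark-antiquark pair the colour-singlet channel is attractive with alpha_eff = C_F alpha_s = (4/3) alpha_s, while the octet channel is weakly repulsive with alpha_eff = (C_F - C_A/2) alpha_s = -(1/6) alpha_s, which is why a singlet enhancement and only a modest octet suppression are expected already at this level.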

  5. Annihilation Mechanism of Dilepton Emission from Finite Fireball

    CERN Document Server

    Anchishkin, D V; Naryshkin, R; Ruuskanen, P V

    2004-01-01

    Medium-induced modifications of the pion and quark-antiquark annihilation mechanisms of dilepton production during relativistic heavy-ion collisions are considered. Due to the dense hadron environment, the pions produced during a collision are effectively confined in a finite volume, in which they live for a finite time that scales as the lifetime of the fireball. Our results indicate that, due to the space-time finiteness of the pion system, which generates the corresponding quantum randomization, the dilepton rates are finite in the low invariant-mass region M < 2m_π. It is found that the spatial finiteness of the quark wave functions and the finite lifetime of excited states produce the same effect for quark-antiquark annihilation into dileptons. The breaking of detailed energy-momentum conservation due to the broken translation invariance is discussed.

  6. Tests of models for parton fragmentation in e+e- annihilation

    International Nuclear Information System (INIS)

    We examine the distribution of particles in the three jet events of e+e- annihilation. The data was collected with the PEP-4/Time Projection Chamber detector at 29 GeV center-of-mass energy at PEP. The experimental distributions are compared to the predictions of several fragmentation models which describe the transition of quarks and gluons into hadrons. In particular, our study emphasizes the three fragmentation models which are currently in widest use: the Lund string model, the Webber cluster model and the independent fragmentation model. These three models each possess different Lorentz frame structures for the distribution of hadron sources relative to the overall event c.m. in three jet events. The Lund string and independent fragmentation models are tuned to describe global event properties of our multihadronic annihilation event sample. This tuned Lund string model provides a good description of the distribution of particles between jet axes in three jet events, while the independent fragmentation model does not. We verify that the failure of the independent fragmentation model is not a consequence of parameter tuning or of model variant. The Webber cluster model, which is untuned, does not describe the absolute particle densities between jets but correctly predicts the ratios of those densities, which are less sensitive to the tuning. These results provide evidence that the sources of hadrons are boosted with respect to the overall center-of-mass in three jet events, with components of motion normal to the jet axes. The distribution of particles close to jet axes provides additional support for this conclusion. 94 refs

  7. On Probability Domains

    Science.gov (United States)

    Frič, Roman; Papčo, Martin

    2010-12-01

    Motivated by IF-probability theory (intuitionistic fuzzy), we study n-component probability domains in which each event represents a body of competing components and the range of a state represents a simplex S_n of n-tuples of possible rewards, the sum of the rewards being a number from [0,1]. For n = 1 we get fuzzy events, for example a bold algebra, and the corresponding fuzzy probability theory can be developed within the category ID of D-posets (equivalently, effect algebras) of fuzzy sets and sequentially continuous D-homomorphisms. For n = 2 we get IF-events, i.e., pairs (μ, ν) of fuzzy sets μ, ν ∈ [0,1]^X such that μ(x) + ν(x) ≤ 1 for all x ∈ X, but we order our pairs (events) coordinatewise. Hence the structure of IF-events (where (μ_1, ν_1) ≤ (μ_2, ν_2) whenever μ_1 ≤ μ_2 and ν_2 ≤ ν_1) is different and, consequently, the resulting IF-probability theory models a different principle. The category ID is cogenerated by I = [0,1] (objects of ID are subobjects of powers I^X), has nice properties, and basic probabilistic notions and constructions are categorical; for example, states are morphisms. We introduce the category S_nD cogenerated by S_n = {(x_1, x_2, ..., x_n) ∈ I^n : Σ_{i=1}^n x_i ≤ 1}, carrying the coordinatewise partial order, difference, and sequential convergence, and we show how basic probability notions can be defined within S_nD.

  8. Unified treatment of hadronic annihilation and protonium formation in slow collisions of antiprotons with hydrogen atoms

    Science.gov (United States)

    Sakimoto, Kazuhiro

    2013-07-01

    Antiproton (p¯) collisions with hydrogen atoms, resulting in the hadronic process of particle-antiparticle annihilation and the atomic process of protonium (p¯p) formation (or p¯ capture), are investigated theoretically. As the collision energy decreases, the collision time required for the p¯ capture becomes necessarily longer. Then, there is the possibility that the p¯-p annihilation occurs significantly before the p¯ capture process completes. In such a case, one can no longer consider the annihilation decay separately from the p¯ capture process. The present study develops a rigorous unified quantum-mechanical treatment of the annihilation and p¯ capture processes. For this purpose, an R-matrix approach for atomic collisions is extended to have complex-valued R-matrix elements allowing for the hadronic annihilation. Detailed calculations are carried out at low collision energies ranging from 10^-8 to 10^-1 eV, and the annihilation and the p¯ capture (total and product-state selected) cross sections are reported. Consideration is given to the difference between the direct annihilation occurring during the collision and the annihilation of p¯p occurring after the p¯ capture. The present annihilation process is also compared with the annihilation in two-body p¯+p collisions.

  9. Antiproton-neon annihilation at 57 MeV/c

    CERN Document Server

    Bianconi, A; Bussa, M P; Lodi-Rizzini, E; Venturelli, L; Zenoni, A; Pontecorvo, G B; Guaraldo, C; Balestra, F; Busso, L; Colantoni, M L; Ferrero, A; Ferrero, L; Grasso, A; Maggiora, A; Maggiora, M G; Piragino, G; Tosello, F

    2000-01-01

    The p̄Ne annihilation cross section is measured for the first time in the momentum interval 53-63 MeV/c. About 9000 pictures collected by the Streamer Chamber Collaboration (PS179) at LEAR-CERN have been scanned. Four events are found, corresponding to σ_ann = 2210 ± 1105 mb. The result is compared to the set of measurements presently available in the region of low p̄ momentum. (18 refs).
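
    The quoted 50% uncertainty is what pure Poisson counting statistics gives for four events (a consistency check added for orientation, not part of the original record):

        \frac{\delta\sigma_{\mathrm{ann}}}{\sigma_{\mathrm{ann}}} \;\simeq\; \frac{1}{\sqrt{N}} \;=\; \frac{1}{\sqrt{4}} \;=\; 0.5, \qquad 0.5 \times 2210\ \mathrm{mb} \;\approx\; 1105\ \mathrm{mb}.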

  10. Role of combination vibrations in resonant positron annihilation

    OpenAIRE

    Young, J A; Gribakin, Gleb; Lee, C. M. R.; Surko, C.M.

    2008-01-01

    Positrons can attach to molecules via vibrational Feshbach resonances, leading to very large annihilation rates. The predictions of a recent theory for this process are validated for deuterated methyl halides where all modes are dipole coupled to the incident positron. Data and analysis are presented for methanol and ethylene, demonstrating the importance of combination and overtone resonances and the ability of the theory to account for these features. The mechanism for these resonances and ...

  11. The three-jet rate in \\(e^{+}e^{-}\\) annihilation

    CERN Document Server

    Lovett-Turner, C N

    1994-01-01

    Progress has been made on the calculation of R_3, the three-jet rate in e+e- annihilation, in the k_⊥ (Durham) scheme. Using the coherent branching formalism, an explicit expression for R_3 is calculated, in which leading and next-to-leading large logarithms (LL and NLL) are resummed to all orders in QCD perturbation theory. In addition to exponentials, an error function is involved.

  12. Comments on charm production in electron positron annihilation

    International Nuclear Information System (INIS)

    The circumstances are discussed in which the production of D D̄ : D D̄* : D* D̄* in electron-positron annihilation is expected to be in the ratio 1:4:7 suggested by counting the available spin states. The physical significance of the requisite assumptions is discussed. The importance of taking into account the finite detector acceptance is stressed, and tests for a possible ³D₁ component in the D* are proposed. (author)

  13. Dark Matter Annihilations in the Large Magellanic Cloud

    OpenAIRE

    Gondolo, P.

    1993-01-01

    The flat rotation curve obtained for the outer star clusters of the Large Magellanic Cloud is suggestive of an LMC dark matter halo. From the composite HI and star cluster rotation curve, I estimate the parameters of an isothermal dark matter halo added to a 'maximum disk.' I then examine the possibility of detecting high energy gamma-rays from non-baryonic dark matter annihilations in the central region of the Large Magellanic Cloud.

  14. Precision Measurements in Electron-Positron Annihilation: Theory and Experiment

    CERN Document Server

    Chetyrkin, Konstantin

    2016-01-01

    Theory results on precision measurements in electron-positron annihilation at low and high energies are collected. These cover pure QCD calculations as well as mixed electroweak and QCD results, involving light and heavy quarks. The impact of QCD corrections on the W-boson mass is discussed and, last but not least, the status and the perspectives for the Higgs boson decay rate into b b̄, c c̄ and into two gluons.

  15. Gamma Rays from Top-Mediated Dark Matter Annihilations

    OpenAIRE

    Jackson, C.B.(Department of Physics, University of Texas at Arlington, Arlington, TX 76019, USA); Servant, Géraldine; Shaughnessy, Gabe; Tim M.P. Tait; Taoso, Marco

    2013-01-01

    Lines in the energy spectrum of gamma rays are a fascinating experimental signal, often considered "smoking gun" evidence of dark matter annihilation. The current generation of gamma-ray observatories is closing in on parameter space of great interest in the context of dark matter that is a thermal relic. We consider theories in which the dark matter's primary connection to the Standard Model is via the top quark, realizing strong gamma-ray lines consistent with a therma...

  16. Generating X-ray lines from annihilating dark matter

    CERN Document Server

    Dudas, Emilian; Mambrini, Yann

    2014-01-01

    We propose different scenarios in which keV-scale dark matter annihilates to produce a monochromatic signal. The process is generated through the exchange of a light scalar of mass of order 300 keV - 50 MeV coupling to photons through loops or higher-dimensional operators. For natural values of the couplings and scales, the model can generate a gamma-ray line which can fit the recently identified 3.5 keV X-ray line.

  17. Complete light annihilation in an ultrathin layer of gold nanoparticles.

    Science.gov (United States)

    Svedendahl, Mikael; Johansson, Peter; Käll, Mikael

    2013-07-10

    We experimentally demonstrate that an incident light beam can be completely annihilated in a single layer of randomly distributed, widely spaced gold nanoparticle antennas. Under certain conditions, each antenna dissipates more than 10 times the number of photons that enter its geometric cross-sectional area. The underlying physics can be understood in terms of a critical coupling to localized plasmons in the nanoparticles or, equivalently, in terms of destructive optical Fano interference and so-called coherent absorption. PMID:23806090

  18. Study of plasma sprayed copper alloy using positron annihilation

    International Nuclear Information System (INIS)

    A positron annihilation technique has been employed to study the microdefects of a copper alloy sprayed on a steel substrate by plasma, after being compressed to different thicknesses. The positron lifetime in the alloy varies with the amount of compression: it is found to decrease with increasing compression deformation. In contrast, the positron lifetime of the normal alloy increases after deformation.

  19. Experimental study of jets in electron-positron-annihilation

    International Nuclear Information System (INIS)

    Data on hadron production by e+e- annihilation at c.m. energies between 30 GeV and 36 GeV are presented and compared with two models, both based on first-order QCD but using different schemes for the fragmentation of quarks and gluons into hadrons. In one model the fragmentation proceeds along the parton momenta, in the other along the colour-anticolour axes. The data are reproduced better by fragmentation along the colour axes. (orig.)

  20. Does the gamma-ray signal from the central Milky Way indicate Sommerfeld enhancement of dark matter annihilation?

    OpenAIRE

    Chan, Man Ho

    2016-01-01

    Recently, Daylan et al. (2014) showed that the GeV gamma-ray excess signal from the central Milky Way can be explained by the annihilation of ~40 GeV dark matter through the b b̄ channel. Based on the morphology of the gamma-ray flux, the best-fit inner slope of the dark matter density profile is γ = 1.26. However, recent analyses of the Milky Way dark matter profile favor γ = 0.6-0.8. In this article, we show that the GeV gamma-ray excess can also be explained by the Sommerfeld-e...

  1. Compton-backscattered annihilation radiation from the Galactic Center region

    Science.gov (United States)

    Smith, D. M.; Lin, R. P.; Feffer, P.; Slassi, S.; Hurley, K.; Matteson, J.; Bowman, H. B.; Pelling, R. M.; Briggs, M.; Gruber, D.

    1993-01-01

    On 1989 May 22, the High Energy X-ray and Gamma-ray Observatory for Nuclear Emissions, a balloon-borne high-resolution germanium spectrometer with an 18-deg FOV, observed the Galactic Center (GC) from 25 to 2500 keV. The GC photon spectrum is obtained from the count spectrum by a model-independent method which accounts for the effects of passive material in the instrument and scattering in the atmosphere. Besides a positron annihilation line with a flux of (10.0 ± 2.4) × 10^-4 photons cm^-2 s^-1 and a full width at half-maximum (FWHM) of (2.9 +1.0/-1.1) keV, the spectrum shows a peak centered at (163.7 ± 3.4) keV with a flux of (1.55 ± 0.47) × 10^-3 photons cm^-2 s^-1 and a FWHM of (24.4 ± 9.2) keV. The energy range 450-507 keV shows no positronium continuum associated with the annihilation line, with a 2-sigma upper limit of 0.90 on the positronium fraction. The 164 keV feature is interpreted as Compton backscatter of broadened and redshifted annihilation radiation, possibly from the source 1E 1740.7-2942.
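
    The backscatter interpretation can be checked against the Compton formula (standard kinematics; the ~456 keV figure below is a back-of-the-envelope number, not a value from the original analysis):

        E' \;=\; \frac{E}{1 + (E/m_e c^2)(1 - \cos\theta)} \;\xrightarrow{\ \theta = 180^{\circ}\ }\; \frac{E}{1 + 2E/m_e c^2},

    so an unshifted 511 keV line backscatters to 511/3 ≈ 170 keV, while a backscatter peak at 163.7 keV corresponds to an incident line energy E = E'/(1 - 2E'/m_e c^2) ≈ 456 keV, i.e. annihilation radiation redshifted well below 511 keV, consistent with the interpretation given above.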

  2. Dark matter annihilation via Higgs and gamma-ray channels

    Science.gov (United States)

    Chan, Man Ho

    2016-09-01

    Recent studies show that the GeV gamma-ray excess signal from the Milky Way center can be best explained by ~40 GeV dark matter annihilating via the b b̄ channel. However, the recent observations of the nearby Milky Way dwarf spheroidal satellite galaxies by Fermi-LAT and the radio observations of the Milky Way center and the M31 galaxy tend to rule out this proposal. In this article, we discuss the possibility of the dark matter interpretation of the GeV gamma-ray excess by proposing 130 GeV dark matter annihilating via both Higgs and gamma-ray channels. Recent analyses show that dark matter annihilating via the Higgs channel can satisfactorily explain the Milky Way GeV gamma-ray excess observed. We show that this model can satisfy the upper limits of the gamma-ray constraint of the Milky Way dwarf spheroidal satellite galaxies and the constraint from the radio observations of the M31 galaxy.

  3. CALET's sensitivity to Dark Matter annihilation in the galactic halo

    Science.gov (United States)

    Motz, H.; Asaoka, Y.; Torii, S.; Bhattacharyya, S.

    2015-12-01

    CALET (Calorimetric Electron Telescope), installed on the ISS in August 2015, directly measures the electron+positron cosmic-ray flux up to 20 TeV. With its proton rejection capability of 1:10^5 and an aperture of 1200 cm^2 sr, it will provide good statistics even well above one TeV, while also featuring an energy resolution of 2%, which allows it to detect fine structures in the spectrum. Such structures may originate from Dark Matter annihilation or decay, making indirect Dark Matter search one of CALET's main science objectives, among others such as the identification of signatures from nearby supernova remnants, the study of the heavy nuclei spectra and gamma-ray astronomy. The latest results from AMS-02 on the positron fraction and the total electron+positron flux can be fitted with a parametrization including a single pulsar as an extra power-law source with exponential cut-off, which emits an equal amount of electrons and positrons. This single-pulsar scenario for the positron excess is extrapolated into the TeV region and the expected CALET data for this case are simulated. Based on this prediction for CALET data, the sensitivity of CALET to Dark Matter annihilation in the galactic halo has been calculated. It is shown that CALET could significantly improve the limits compared to current data, especially for those Dark Matter candidates that feature a large fraction of annihilation directly into e+ + e-, such as the LKP (Lightest Kaluza-Klein particle).

  4. Antiproton constraints on dark matter annihilations from internal electroweak bremsstrahlung

    International Nuclear Information System (INIS)

    If the dark matter particle is a Majorana fermion, annihilations into two fermions and one gauge boson could have, for some choices of the parameters of the model, a non-negligible cross-section. Using a toy model of leptophilic dark matter, we calculate the constraints on the annihilation cross-section into two electrons and one weak gauge boson from the PAMELA measurements of the cosmic antiproton-to-proton flux ratio. Furthermore, we calculate the maximal astrophysical boost factor allowed in the Milky Way under the assumption that the leptophilic dark matter particle is the dominant component of dark matter in our Universe. These constraints constitute very conservative estimates on the boost factor for more realistic models where the dark matter particle also couples to quarks and weak gauge bosons, such as the lightest neutralino which we also analyze for some concrete benchmark points. The limits on the astrophysical boost factors presented here could be used to evaluate the prospects to detect a gamma-ray signal from dark matter annihilations at currently operating IACTs as well as in the projected CTA

  5. Deduction and Validation of an Eulerian-Eulerian Model for Turbulent Dilute Two-Phase Flows by Means of the Phase Indicator Function - Disperse Elements' Probability Density Function

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A statistical formalism overcoming some conceptual and practical difficulties arising in existing two-phase flow (2PHF) mathematical modelling has been applied to propose a model for dilute 2PHF turbulent flows. Phase interaction terms with a clear physical meaning enter the equations, and the formalism provides some guidelines for the avoidance of closure assumptions or the rational approximation of these terms. Continuous-phase averaged continuity, momentum, turbulent kinetic energy and turbulence dissipation rate equations have been rigorously and systematically obtained in a single step. These equations display a structure similar to that for single-phase flows. It is also assumed that the dispersed-phase dynamics is well described by a probability density function (pdf) equation, and Eulerian continuity, momentum and fluctuating kinetic energy equations for the dispersed phase are deduced. An extension of the standard k-ε turbulence model for the continuous phase is used. A gradient transport model is adopted for the dispersed-phase fluctuating fluxes of momentum and kinetic energy at the non-colliding, large-inertia limit. This model is then used to predict the behaviour of three axisymmetric turbulent jets of air laden with solid particles varying in size and concentration. Qualitative and quantitative numerical predictions compare reasonably well with the three different sets of experimental results, studying the influence of particle size, loading ratio and flow confinement velocity.
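
    For reference, the single-phase baseline that the cited extension starts from is the standard k-ε model, written here in its usual textbook form with the standard constants (the two-phase interaction terms derived in the paper are not reproduced); D/Dt denotes the mean-flow material derivative and P_k the production of turbulent kinetic energy:

        \nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad
        \frac{Dk}{Dt} = \frac{\partial}{\partial x_j}\!\left[\Big(\nu + \frac{\nu_t}{\sigma_k}\Big)\frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon, \qquad
        \frac{D\varepsilon}{Dt} = \frac{\partial}{\partial x_j}\!\left[\Big(\nu + \frac{\nu_t}{\sigma_\varepsilon}\Big)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{\varepsilon 1}\frac{\varepsilon}{k}P_k - C_{\varepsilon 2}\frac{\varepsilon^2}{k},

    with C_mu = 0.09, C_eps1 = 1.44, C_eps2 = 1.92, sigma_k = 1.0 and sigma_eps = 1.3.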

  6. Transition Probability (Fidelity) and Its Relatives

    OpenAIRE

    Uhlmann, Armin

    2011-01-01

    Transition Probability (fidelity) for pairs of density operators can be defined as "functor" in the hierarchy of "all" quantum systems and also within any quantum system. The introduction of "amplitudes" for density operators allows for a more intuitive treatment of these quantities, also pointing to a natural parallel transport. The latter is governed by a remarkable gauge theory with strong relations to the Riemann-Bures metric.
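
    The quantity in question is the transition probability (fidelity) of a pair of density operators, quoted here in its standard form for convenience:

        F(\rho, \sigma) \;=\; \Big(\mathrm{Tr}\,\sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}}\Big)^{2},

    which is symmetric in its arguments and reduces to |⟨ψ|φ⟩|² for pure states. The "amplitudes" of the abstract are, roughly, operators W with ρ = W W†; the freedom in choosing W is the gauge degree of freedom that underlies the Bures geometry mentioned above.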

  7. Search for a Dark Matter annihilation signal from the Galactic Center halo with H.E.S.S

    CERN Document Server

    Abramowski, A; Aharonian, F; Akhperjanian, A G; Anton, G; Barnacka, A; de Almeida, U Barres; Bazer-Bachi, A R; Becherini, Y; Becker, J; Behera, B; Bernlöhr, K; Bochow, A; Boisson, C; Bolmont, J; Bordas, P; Borrel, V; Brucker, J; Brun, F; Brun, P; Bulik, T; Büsching, I; Carrigan, S; Casanova, S; Cerruti, M; Chadwick, P M; Charbonnier, A; Chaves, R C G; Cheesebrough, A; Chounet, L -M; Clapson, A C; Coignet, G; Conrad, J; Dalton, M; Daniel, M K; Davids, I D; Degrange, B; Deil, C; Dickinson, H J; Djannati-Ataï, A; Domainko, W; Drury, L O'C; Dubois, F; Dubus, G; Dyks, J; Dyrda, M; Egberts, K; Eger, P; Espigat, P; Fallon, L; Farnier, C; Fegan, S; Feinstein, F; Fernandes, M V; Fiasson, A; Fontaine, G; Förster, A; Füßling, M; Gallant, Y A; Gast, H; Gérard, L; Gerbig, D; Giebels, B; Glicenstein, J F; Glück, B; Goret, P; Göring, D; Hague, J D; Hampf, D; Hauser, M; Heinz, S; Heinzelmann, G; Henri, G; Hermann, G; Hinton, J A; Hoffmann, A; Hofmann, W; Hofverberg, P; Horns, D; Jacholkowska, A; de Jager, O C; Jahn, C; Jamrozy, M; Jung, I; Kastendieck, M A; Katarzynski, K; Katz, U; Kaufmann, S; Keogh, D; Kerschhaggl, M; Khangulyan, D; Khélifi, B; Klochkov, D; Kluźniak, W; Kneiske, T; Komin, Nu; Kosack, K; Kossakowski, R; Laffon, H; Lamanna, G; Lennarz, D; Lohse, T; Lopatin, A; Lu, C -C; Marandon, V; Marcowith, A; Masbou, J; Maurin, D; Maxted, N; McComb, T J L; Medina, M C; Méhault, J; Moderski, R; Moulin, E; Naumann, C L; Naumann-Godo, M; de Naurois, M; Nedbal, D; Nekrassov, D; Nguyen, N; Nicholas, B; Niemiec, J; Nolan, S J; Ohm, S; Olive, J-F; Wilhelmi, E de Oña; Opitz, B; Ostrowski, M; Panter, M; Arribas, M Paz; Pedaletti, G; Pelletier, G; Petrucci, P -O; Pita, S; Pühlhofer, G; Punch, M; Quirrenbach, A; Raue, M; Rayner, S M; Reimer, A; Reimer, O; Renaud, M; Reyes, R de los; Rieger, F; Ripken, J; Rob, L; Rosier-Lees, S; Rowell, G; Rudak, B; Rulten, C B; Ruppel, J; Ryde, F; Sahakian, V; Santangelo, A; Schlickeiser, R; Schöck, F M; Schönwald, A; Schwanke, U; Schwarzburg, S; Schwemmer, S; Shalchi, A; Sikora, M; Skilton, J L; Sol, H; Spengler, G; Stawarz, Ł; Steenkamp, R; Stegmann, C; Stinzing, F; Sushch, I; Szostek, A; Tavernet, J -P; Terrier, R; Tibolla, O; Tluczykont, M; Valerius, K; van Eldik, C; Vasileiadis, G; Venter, C; Vialle, J P; Viana, A; Vincent, P; Vivier, M; Völk, H J; Volpe, F; Vorobiov, S; Vorster, M; Wagner, S J; Ward, M; Wierzcholska, A; Zajczyk, A; Zdziarski, A A; Zech, A; Zechlin, H -S

    2011-01-01

    A search for a very-high-energy (VHE; >= 100 GeV) gamma-ray signal from self-annihilating particle Dark Matter (DM) is performed towards a region of projected distance r ~ 45-150 pc from the Galactic Center. The background-subtracted gamma-ray spectrum measured with the High Energy Stereoscopic System (H.E.S.S.) gamma-ray instrument in the energy range between 300 GeV and 30 TeV shows no hint of a residual gamma-ray flux. Assuming conventional Navarro-Frenk-White (NFW) and Einasto density profiles, limits are derived on the velocity-weighted annihilation cross section ⟨σv⟩ as a function of the DM particle mass. These are among the best reported so far for this energy range. In particular, for a DM particle mass of ~1 TeV, values of ⟨σv⟩ above 3 × 10^-25 cm^3 s^-1 are excluded for the Einasto density profile. The limits derived here differ much less for the chosen density profile parametrizations, as opposed to limits from gamma-ray observations of dwarf galaxies or the very center of the Milky Way, where the d...
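
    A minimal sketch of how the assumed density profile enters such limits, through the astrophysical line-of-sight integral of ρ², is given below; the profile parameters are illustrative placeholders, not the values used in the H.E.S.S. analysis.

      import numpy as np
      from scipy.integrate import quad

      R_SUN = 8.5     # kpc, assumed Sun-Galactic Centre distance
      RHO_S = 0.08    # GeV/cm^3, illustrative normalisation
      R_S   = 20.0    # kpc, illustrative scale radius
      ALPHA = 0.17    # Einasto shape parameter (illustrative)

      def rho_nfw(r):
          x = r / R_S
          return RHO_S / (x * (1.0 + x) ** 2)

      def rho_einasto(r):
          return RHO_S * np.exp(-2.0 / ALPHA * ((r / R_S) ** ALPHA - 1.0))

      def j_factor(psi, rho, l_max=100.0):
          """Integral of rho^2 along the line of sight at angle psi (rad) from the
          Galactic Centre, in GeV^2 cm^-6 kpc (solid-angle integration omitted)."""
          def integrand(l):
              r = np.sqrt(R_SUN**2 + l**2 - 2.0 * R_SUN * l * np.cos(psi))
              return rho(r) ** 2
          value, _ = quad(integrand, 0.0, l_max)
          return value

      psi = np.radians(0.5)   # ~75 pc projected distance for R_SUN = 8.5 kpc
      print("NFW    :", j_factor(psi, rho_nfw))
      print("Einasto:", j_factor(psi, rho_einasto))

    The excluded cross section scales inversely with this integral, which is one way to read the statement that the limits depend only mildly on the chosen profile parametrization in the 45-150 pc region considered here.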

  8. Contributions to quantum probability

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Tobias

    2010-06-25

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that, generally, none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a

  9. Paradoxes in probability theory

    CERN Document Server

    Eckhardt, William

    2013-01-01

    Paradoxes provide a vehicle for exposing misinterpretations and misapplications of accepted principles. This book discusses seven paradoxes surrounding probability theory. Some remain the focus of controversy; others have allegedly been solved; however, the accepted solutions are demonstrably incorrect. Each paradox is shown to rest on one or more fallacies. Instead of the esoteric, idiosyncratic, and untested methods that have been brought to bear on these problems, the book invokes uncontroversial probability principles, acceptable both to frequentists and subjectivists. The philosophical disputation inspired by these paradoxes is shown to be misguided and unnecessary; for instance, startling claims concerning human destiny and the nature of reality are directly related to fallacious reasoning in a betting paradox, and a problem analyzed in philosophy journals is resolved by means of a computer program.

  10. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  11. Contributions to quantum probability

    International Nuclear Information System (INIS)

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that, generally, none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a finite set can occur as the outcome

  12. Fractal probability laws.

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2008-06-01

    We explore six classes of fractal probability laws defined on the positive half-line: Weibull, Fréchet, Lévy, hyper Pareto, hyper beta, and hyper shot noise. Each of these classes admits a unique statistical power-law structure, and is uniquely associated with a certain operation of renormalization. All six classes turn out to be one-dimensional projections of underlying Poisson processes which, in turn, are the unique fixed points of Poissonian renormalizations. The first three classes correspond to linear Poissonian renormalizations and are intimately related to extreme value theory (Weibull, Fréchet) and to the central limit theorem (Lévy). The other three classes correspond to nonlinear Poissonian renormalizations. Pareto's law--commonly perceived as the "universal fractal probability distribution"--is merely a special case of the hyper Pareto class.
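
    For orientation, the first of these classes have familiar closed forms on the positive half-line (standard textbook expressions; the Lévy law and the "hyper" classes of the paper have no comparably elementary form and are omitted):

        \mathrm{Weibull:}\ \Pr(X > x) = e^{-(x/\lambda)^{k}}, \qquad
        \mathrm{Fr\acute{e}chet:}\ \Pr(X \le x) = e^{-(x/s)^{-\alpha}}, \qquad
        \mathrm{Pareto:}\ \Pr(X > x) = (x_m/x)^{\alpha}\ \ (x \ge x_m),

    each carrying the power-law ingredient x^k or x^(-alpha) on which the Poissonian renormalization arguments act.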

  13. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  14. Angles as probabilities

    CERN Document Server

    Feldman, David V

    2008-01-01

    We use a probabilistic interpretation of solid angles to generalize the well-known fact that the inner angles of a triangle sum to 180 degrees. For the 3-dimensional case, we show that the sum of the solid inner vertex angles of a tetrahedron T, divided by 2π, gives the probability that an orthogonal projection of T onto a random 2-plane is a triangle. More generally, it is shown that the sum of the (solid) inner vertex angles of an n-simplex S, normalized by the area of the unit (n-1)-hemisphere, gives the probability that an orthogonal projection of S onto a random hyperplane is an (n-1)-simplex. Applications to more general polytopes are treated briefly, as is the related Perles-Shephard proof of the classical Gram-Euler relations.
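
    The tetrahedron statement lends itself to a quick numerical check; the sketch below computes the solid vertex angles with the Van Oosterom-Strackee formula and compares their sum over 2π with the observed frequency of triangular shadows under random orthogonal projection (a Monte Carlo illustration, not part of the original paper).

      import numpy as np
      from scipy.spatial import ConvexHull

      def solid_angle(vertex, others):
          """Solid angle subtended at `vertex` by the triangle formed by the other
          three vertices (Van Oosterom-Strackee formula)."""
          a, b, c = (o - vertex for o in others)
          na, nb, nc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
          numer = abs(np.dot(a, np.cross(b, c)))
          denom = na * nb * nc + np.dot(a, b) * nc + np.dot(a, c) * nb + np.dot(b, c) * na
          return 2.0 * np.arctan2(numer, denom)

      rng = np.random.default_rng(1)
      T = rng.normal(size=(4, 3))                        # a random tetrahedron

      angle_sum = sum(solid_angle(T[i], np.delete(T, i, axis=0)) for i in range(4))
      predicted = angle_sum / (2.0 * np.pi)              # claimed probability of a triangular shadow

      n_trials, triangles = 20000, 0
      for _ in range(n_trials):
          Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # first two columns span a random 2-plane
          shadow = T @ Q[:, :2]                          # orthogonal projection of the four vertices
          if len(ConvexHull(shadow).vertices) == 3:      # triangle iff one vertex lands inside the other three
              triangles += 1

      print(f"solid-angle prediction: {predicted:.4f}, Monte Carlo estimate: {triangles / n_trials:.4f}")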

  15. Waste Package Misload Probability

    Energy Technology Data Exchange (ETDEWEB)

    J.K. Knudsen

    2001-11-20

    The objective of this calculation is to determine the probability of occurrence of fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and of FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the events. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.
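
    The final step described above amounts to a simple proportion estimate; a minimal sketch with made-up counts is given below (the real inputs come from the Framatome ANP 2001a data, which are not reproduced here, so both numbers are placeholders).

      from scipy.stats import beta

      misload_events = 5          # hypothetical number of categorized FA misload events
      total_fa_moves = 250_000    # hypothetical number of FA movements in the same period

      p_hat = misload_events / total_fa_moves
      # Jeffreys 95% interval for a binomial proportion
      lower = beta.ppf(0.025, misload_events + 0.5, total_fa_moves - misload_events + 0.5)
      upper = beta.ppf(0.975, misload_events + 0.5, total_fa_moves - misload_events + 0.5)
      print(f"P(misload per FA movement) ~ {p_hat:.2e} (95% interval {lower:.2e} to {upper:.2e})")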

  16. Probability via expectation

    CERN Document Server

    Whittle, Peter

    1992-01-01

    This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...

  17. Laboratory-tutorial activities for teaching probability

    Directory of Open Access Journals (Sweden)

    Roger E. Feeley

    2006-08-01

    We report on the development of students’ ideas of probability and probability density in a University of Maine laboratory-based general education physics course called Intuitive Quantum Physics. Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We describe a set of activities used to teach concepts of probability and probability density. Rudimentary knowledge of mechanics is needed for one activity, but otherwise the material requires no additional preparation. Extensions of the activities include relating probability density to potential energy graphs for certain “touchstone” examples. Students have difficulties learning the target concepts, such as comparing the ratio of time in a region to total time in all regions. Instead, they often focus on edge effects, pattern match to previously studied situations, reason about necessary but incomplete macroscopic elements of the system, use the gambler’s fallacy, and use expectations about ensemble results rather than expectation values to predict future events. We map the development of their thinking to provide examples of problems rather than evidence of a curriculum’s success.
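
    One of the “touchstone” examples mentioned above, the classical harmonic oscillator, makes the target concept concrete: the probability of finding the particle in a region is the fraction of the period spent there, which for x(t) = A cos(ωt) is reproduced by the density P(x) = 1/(π sqrt(A² - x²)). The sketch below checks this numerically (an illustration in the spirit of the activities, not the course materials themselves).

      import numpy as np

      A, omega = 1.0, 2.0 * np.pi        # amplitude and angular frequency (period = 1)
      a, b = 0.5, 0.9                    # region of interest, with |a|, |b| < A

      # "Time in region / total time": sample the motion uniformly over one period.
      t = np.linspace(0.0, 1.0, 2_000_001)
      x = A * np.cos(omega * t)
      time_fraction = np.mean((x >= a) & (x <= b))

      # Same quantity from the classical probability density P(x) = 1/(pi*sqrt(A^2 - x^2)),
      # integrated over [a, b] (both passes through the region per period are included).
      analytic = (np.arcsin(b / A) - np.arcsin(a / A)) / np.pi

      print(f"time fraction in [{a}, {b}]: {time_fraction:.4f}, integral of P(x): {analytic:.4f}")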

  18. Anti-proton and positron Cosmic Rays from Dark Matter annihilation around Intermediate Mass Black Holes

    OpenAIRE

    Lavalle, Julien

    2007-01-01

    Intermediate Mass Black Holes (IMBHs) are candidates to seed the Supermassive Black Holes (SMBHs), and some could still wander in the Galaxy. In the context of annihilating dark matter (DM), they are expected to drive huge annihilation rates, and could therefore significantly enhance the primary cosmic rays (CRs) expected from annihilation of the DM of the Galactic halo. In this proceeding (the original paper is Brun et al. 2007), we briefly explain the method to derive estimates of such exot...

  19. Application of positron annihilation lifetime technique for γ-irradiation stresses study in chalcogenide vitreous semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Shpotyuk, O.; Golovchak, R.; Kovalskiy, A. [Scientific Research Company ' ' Carat' ' , Stryjska str. 20279031 Lviv (Ukraine); Filipecki, J.; Hyla, M. [Physics Institute, Pedagogical University, Al. Armii Krajowej 13/1542201 Czestochowa (Poland)

    2002-08-01

    The influence of γ-irradiation on the positron annihilation lifetime spectra in chalcogenide vitreous semiconductors of the As-Ge-S system has been analysed. The correlations between lifetime data, structural features and chemical compositions of the glasses have been discussed. The observed lifetime components are connected with bulk positron annihilation and positron annihilation on various native and γ-induced open-volume defects. It is concluded that after γ-irradiation of the investigated materials, the γ-induced microvoids based on S₁⁻, As₂⁻, and Ge₃⁻ coordination defects play the major role in positron annihilation processes. (Abstract Copyright [2002], Wiley Periodicals, Inc.)

  20. Nonuniversal self-similarity in a coagulation-annihilation model with constant kernels

    International Nuclear Information System (INIS)

    The large-time dynamics of a two-species coagulation-annihilation system with constant coagulation and annihilation rates is studied analytically when annihilation is complete. A scaling behaviour is observed which varies with the parameter coupling the annihilation of the two species, and which is nonuniversal in the sense that it varies, in some cases, with the initial conditions as well. The latter actually occurs when either the coupling parameter is equal to one, or the initial number of particles is the same for the two species.