Varella, Marcio Teixeira do Nascimento
2001-12-15
We have calculated annihilation probability densities (APD) for positron collisions with the He atom and the H{sub 2} molecule. It was found that direct annihilation prevails at low energies, while annihilation following virtual positronium (Ps) formation is the dominant mechanism at higher energies. In room-temperature collisions (10{sup -2} eV) the APD spread over a considerable extent, being quite similar to the electronic densities of the targets. The capture of the positron in an electronic Feshbach resonance strongly enhanced the annihilation rate in e{sup +}-H{sub 2} collisions. We also discuss strategies to improve the calculation of the annihilation parameter (Z{sub eff}), after debugging the computational codes of the Schwinger Multichannel Method (SMC). Finally, we consider the inclusion of the Ps formation channel in the SMC and show that effective configurations (pseudo eigenstates of the Hamiltonian of the collision) are able to significantly reduce the computational effort in positron scattering calculations. Cross sections for electron scattering by polyatomic molecules were obtained in three different approximations: static-exchange (SE); static-exchange-plus-polarization (SEP); and multichannel coupling. The calculations for polar targets were improved through the rotational resolution of scattering amplitudes, in which the SMC was combined with the first Born approximation (FBA). In general, elastic cross sections (SE and SEP approximations) showed good agreement with available experimental data for several targets. Multichannel calculations for e{sup -}-H{sub 2}O scattering, on the other hand, presented spurious structures at the electronic excitation thresholds. (author)
Trajectory probability hypothesis density filter
García-Fernández, Ángel F.; Svensson, Lennart
2016-01-01
This paper presents the probability hypothesis density (PHD) filter for sets of trajectories. The resulting filter, referred to as the trajectory probability hypothesis density (TPHD) filter, is capable of estimating trajectories in a principled way without requiring the evaluation of all measurement-to-target association hypotheses. Like the PHD filter, the TPHD filter is based on recursively obtaining the best Poisson approximation to the multitrajectory filtering density in the sense of minimising the K...
Probability densities and Lévy densities
Barndorff-Nielsen, Ole Eiler
For positive Lévy processes (i.e. subordinators) formulae are derived that express the probability density or the distribution function in terms of power series in time t. The applicability of the results to finance and to turbulence is briefly indicated.
Probability densities in strong turbulence
Yakhot, Victor
2006-03-01
In this work, using Mellin's transform combined with the Gaussian large-scale boundary condition, we calculate probability densities (PDFs) of velocity increments P(δu,r), velocity derivatives P(u,r), and the PDF of the fluctuating dissipation scales Q(η,Re), where Re is the large-scale Reynolds number. The resulting expressions strongly deviate from the log-normal PDF P(δu,r) often quoted in the literature. It is shown that the probability density of the small-scale velocity fluctuations includes information about the large (integral) scale dynamics, which is responsible for the deviation of P(δu,r) from log-normality. An expression for the function D(h) of the multifractal theory, free from the spurious logarithms recently discussed in [U. Frisch, M. Martins Afonso, A. Mazzino, V. Yakhot, J. Fluid Mech. 542 (2005) 97], is also obtained.
Trajectory versus probability density entropy
Bologna, Mauro; Grigolini, Paolo; Karagiorgis, Markos; Rosa, Angelo
2001-07-01
We show that the widely accepted conviction that a connection can be established between the probability density entropy and the Kolmogorov-Sinai (KS) entropy is questionable. We adopt the definition of density entropy as a functional of a distribution density whose time evolution is determined by a transport equation, conceived as the only prescription to use for the calculation. Although the transport equation is built up for the purpose of affording a picture equivalent to that stemming from trajectory dynamics, no direct use of trajectory time evolution is allowed once the transport equation is defined. With this definition in mind we prove that the detection of a time regime of increase of the density entropy with a rate identical to the KS entropy is possible only in a limited number of cases. The proposals made by some authors to establish a general connection between the two entropies violate our definition of density entropy and imply the concept of trajectory, which is foreign to that of density entropy.
Investigation of density inhomogeneities in liquids by positron annihilation
The case of positronium diffusion and annihilation in micellar solutions as well as in liquid normal alkanes is discussed. The traps are assumed to be the structural sparse density regions in these liquids. The traps in micellar solutions are the micelles, in alkanes they are found around the terminal -CH3 groups. The surface tension inside the micellar core (one of the basic parameters of micellization) is determined around the site of o-Ps solubilization. The o-Ps diffusivity parameters are determined in both systems. (K.A.) 48 refs.; 4 figs
Chaos for Liouville probability densities
Schack, R
1995-01-01
Using the method of symbolic dynamics, we show that a large class of classical chaotic maps exhibit exponential hypersensitivity to perturbation, i.e., a rapid increase with time of the information needed to describe the perturbed time evolution of the Liouville density, the information attaining values that are exponentially larger than the entropy increase that results from averaging over the perturbation. The exponential rate of growth of the ratio of information to entropy is given by the Kolmogorov-Sinai entropy of the map. These findings generalize and extend results obtained for the baker's map [R. Schack and C. M. Caves, Phys. Rev. Lett. 69, 3413 (1992)].
Modulation Based on Probability Density Functions
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
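The sampling-to-histogram step described above is easy to sketch. The following is a minimal illustration only (not the proposed modulation scheme itself); the sample count and bin count are arbitrary choices:

```python
import math
from collections import Counter

def waveform_histogram(amplitude=1.0, n_samples=1000, n_bins=20):
    """Sample one full cycle of a sinusoid and build a histogram (empirical PDF)."""
    samples = [amplitude * math.sin(2 * math.pi * k / n_samples)
               for k in range(n_samples)]
    bin_width = 2 * amplitude / n_bins
    # Sort samples by bin (i.e. by frequency of occurrence of each value range).
    counts = Counter(min(int((s + amplitude) / bin_width), n_bins - 1)
                     for s in samples)
    # Normalize counts into an empirical PDF over the bins.
    return {b: c / n_samples for b, c in sorted(counts.items())}

pdf = waveform_histogram()
# For a sinusoid the amplitude distribution piles up near the extremes
# (the arcsine law), so the outermost bins carry the most probability.
```

This histogram shape is exactly the waveform "signature" the proposed method would compare against over each half-cycle interval.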
On Explicit Probability Densities Associated with Fuss-Catalan Numbers
Liu, Dang-Zheng; Song, Chunwei; Wang, Zheng-Dong
2010-01-01
In this note we give explicitly a family of probability densities, the moments of which are Fuss-Catalan numbers. The densities appear naturally in random matrices, free probability and other contexts.
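The Fuss-Catalan numbers referred to above are straightforward to compute; a quick sketch using the standard two-parameter form, with the ordinary Catalan numbers recovered at p = 2, r = 1:

```python
from math import comb

def fuss_catalan(p, r, n):
    """Two-parameter Fuss-Catalan number A_n(p, r) = r/(np + r) * C(np + r, n).

    The quotient is always an integer, so exact integer division is safe.
    """
    return r * comb(n * p + r, n) // (n * p + r)

# p = 2, r = 1 gives the ordinary Catalan numbers: 1, 1, 2, 5, 14, 42, ...
catalan = [fuss_catalan(2, 1, n) for n in range(6)]
# p = 3, r = 1 counts ternary trees: 1, 1, 3, 12, 55, ...
ternary = [fuss_catalan(3, 1, n) for n in range(4)]
```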
Advantages of the probability amplitude over the probability density in quantum mechanics
Kurihara, Yoshimasa; Quach, Nhi My Uyen
2013-01-01
We discuss reasons why a probability amplitude, which becomes a probability density after squaring, is considered as one of the most basic ingredients of quantum mechanics. First, the Heisenberg/Schrodinger equation, an equation of motion in quantum mechanics, describes a time evolution of the probability amplitude rather than of a probability density. There may be reasons why dynamics of a physical system are described by amplitude. In order to investigate one role of the probability amplitu...
Belikov, Alexander; Silk, Joseph
2013-01-01
Dark matter annihilation is proportional to the square of the density and is especially efficient in places of highest concentration of dark matter, such as dark matter spikes. The spikes are formed as a result of contraction of the dark matter density profile caused by adiabatic growth of a supermassive black hole at the center of the dark matter halo or subhalo. We revisit the relation between the properties and mass functions of dark matter halos and spikes, and propose alternative models ...
Asymptotic probability density functions in turbulence
Minotti, F. O.; Speranza, E.
2007-01-01
A formalism is presented to obtain closed evolution equations for asymptotic probability distribution functions of turbulence magnitudes. The formalism is derived for a generic evolution equation, so that the final result can be easily applied to rather general problems. Although the approximation involved cannot be ascertained a priori, we show that application of the formalism to well known problems gives the correct results.
Probability density of quantum expectation values
Campos Venuti, L., E-mail: lcamposv@usc.edu; Zanardi, P.
2013-10-30
We consider the quantum expectation value 〈A〉=〈ψ|A|ψ〉 of an observable A over the state |ψ〉. We derive the exact probability distribution of 〈A〉 seen as a random variable when |ψ〉 varies over the set of all pure states equipped with the Haar-induced measure. To illustrate our results we compare the exact predictions for a few concrete examples with the concentration bounds obtained using Levy's lemma. We also comment on the relevance of the central limit theorem and finally draw some results on an alternative statistical mechanics based on the uniform measure on the energy shell. - Highlights: • We compute the probability distribution of quantum expectation values for states sampled uniformly. • As a special case we consider in some detail the degenerate case where A is a one-dimensional projector. • We compare the concentration results obtained using Levy's lemma with the exact values obtained using our exact formulae. • We comment on the possibility of a Central Limit Theorem and show approach to Gaussian for a few physical operators. • Some implications of our results for the so-called “Quantum Microcanonical Equilibration” (Refs. [5–9]) are derived.
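The rank-one projector case highlighted above is easy to probe numerically. A sketch, assuming the standard construction that normalizing a vector of iid complex Gaussians yields a Haar-uniform pure state (dimension and sample count are arbitrary choices):

```python
import random

def haar_projector_expectation(dim, n_samples, seed=0):
    """Sample <psi|P|psi> for the fixed rank-one projector P = |0><0| with
    |psi> Haar-uniform in C^dim; returns the list of sampled values."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_samples):
        # A vector of iid complex Gaussians, once normalized, is Haar-uniform.
        z = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(dim)]
        norm_sq = sum(abs(c) ** 2 for c in z)
        values.append(abs(z[0]) ** 2 / norm_sq)  # <psi|P|psi> = |<0|psi>|^2
    return values

vals = haar_projector_expectation(dim=4, n_samples=20000)
mean = sum(vals) / len(vals)  # should approach 1/dim = 0.25
```

The sampled values concentrate around 1/dim, consistent with the Levy-lemma concentration picture the abstract mentions.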
Annihilation Radiation Gauge for Relative Density and Multiphase Fluid Monitoring
Vidal A.
2014-03-01
Knowledge of multiphase flow parameters is important for the petroleum industry, specifically during transport in pipelines and networks related to exploitation wells. Crude oil flow is studied by Monte Carlo simulation and experimentally to determine the transient liquid phase in a laboratory system. Relative density and fluid-phase time variation are monitored employing a fast nuclear data acquisition setup that includes two large-volume BaF2 scintillator detectors coupled to an electronic chain, with data display in a LabView® environment. Fluid parameters are determined from the difference in the count rate of coincidence pulses. The operational characteristics of the equipment indicate that a 2 % deviation in the coincidence count rate (CCR) corresponds to a variation, on average, of 20 % in the liquid fraction of the multiphase fluid.
Asymptotic Theory for the Probability Density Functions in Burgers Turbulence
Weinan, E; Eijnden, Eric Vanden
1999-01-01
A rigorous study is carried out for the randomly forced Burgers equation in the inviscid limit. No closure approximations are made. Instead the probability density functions of velocity and velocity gradient are related to the statistics of quantities defined along the shocks. This method allows one to compute the anomalies, as well as asymptotics for the structure functions and the probability density functions. It is shown that the left tail for the probability density function of the velocity gradient has to decay faster than $|\\xi|^{-3}$. A further argument confirms the prediction of E et al., Phys. Rev. Lett. {\\bf 78}, 1904 (1997), that it should decay as $|\\xi|^{-7/2}$.
Validating Forecasts of the Joint Probability Density of Bond Yields:...
Egorov, Alexei V.; Yongmiao Hong; Haitao Li
2013-01-01
Most existing empirical studies on affine term structure models (ATSMs) have mainly focused on in-sample goodness-of-fit of historical bond yields and ignored out-of-sample forecast of future bond yields. Using an omnibus nonparametric procedure for density forecast evaluation in a continuous-time framework, we provide probably the first comprehensive empirical analysis of the out-of-sample performance of ATSMs in forecasting the joint conditional probability density of bond yields. We find t...
Hilbert Space of Probability Density Functions Based on Aitchison Geometry
J. J. Egozcue; J. L. Díaz-Barrero; V. Pawlowsky-Glahn
2006-01-01
The set of probability functions is a convex subset of L1 and it does not have a linear space structure when using ordinary sum and multiplication by real constants. Moreover, difficulties arise when dealing with distances between densities. The crucial point is that usual distances are not invariant under relevant transformations of densities. To overcome these limitations, Aitchison's ideas on compositional data analysis are used, generalizing perturbation and power transformation, as well as the Aitchison inner product, to operations on probability density functions with support on a finite interval. With these operations at hand, it is shown that the set of bounded probability density functions on finite intervals is a pre-Hilbert space. A Hilbert space of densities, whose logarithm is square-integrable, is obtained as the natural completion of the pre-Hilbert space.
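For discrete densities on a finite support, the generalized perturbation and power operations above reduce to closure (renormalization) of componentwise products and powers. A minimal sketch of this discrete analogue, for illustration only:

```python
def close(p):
    """Closure: normalize a vector of positive weights to a probability vector."""
    s = sum(p)
    return [x / s for x in p]

def perturb(p, q):
    """Aitchison perturbation: componentwise product, then closure."""
    return close([a * b for a, b in zip(p, q)])

def power(alpha, p):
    """Aitchison power transformation: componentwise power, then closure."""
    return close([a ** alpha for a in p])

p = close([1.0, 2.0, 3.0])
q = close([3.0, 2.0, 1.0])
# The uniform density is the neutral element of perturbation, and
# power(-1, p) is the inverse of p: perturbing p by it recovers uniformity.
u = perturb(p, power(-1.0, p))
```

Perturbation plays the role of vector addition and the power transformation that of scalar multiplication, which is what gives the set of densities its (pre-)Hilbert space structure.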
On positron annihilation in zinc
The purpose of this work is to understand Mogensen's and Petersen's positron annihilation curves for zinc. Mijnarends' approach is used as an auxiliary method of localizing inhomogeneities of the electronic density in momentum space, as defined in the paper. Evidence is found for a new effect consisting of a strong enhancement of the annihilation probability in the lenses obtained by the intersection of the Fermi surface with HMC surfaces. This effect, not the anisotropy of the Fermi surface, is the main reason for the anisotropy of the annihilation curves. (orig.)
The Probability Distribution Function of Column Density in Molecular Clouds
Vázquez-Semadeni, E; Vazquez-Semadeni, Enrique; Garcia, Nieves
2001-01-01
We discuss the probability distribution function (PDF) of column density resulting from density fields with lognormal PDFs, applicable to molecular clouds. For magnetic and non-magnetic numerical simulations of compressible, isothermal turbulence, we show that the density autocorrelation function (ACF) decays over short distances compared to the simulation size. The density "events" along a line of sight can be assumed to be independent over distances larger than this, and the Central Limit Theorem should be applicable. However, using random realizations of lognormal fields, we show that the convergence to a Gaussian shape is extremely slow in the high-density tail, and thus the column density PDF is not expected to exhibit a unique functional shape, but to transit instead from a lognormal to a Gaussian form as the column length increases, with decreasing variance. For intermediate path lengths, the column density PDF assumes a nearly exponential decay. For cases with density contrasts of $10^4$, comparable t...
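The slow approach to Gaussianity for sums of lognormal density "events" is easy to check with a Monte Carlo sketch. Path lengths and sample counts below are arbitrary, and sample skewness is used only as a rough non-Gaussianity measure:

```python
import math
import random

def column_skewness(n_cells, n_lines, sigma=1.0, seed=1):
    """Sample skewness of column densities obtained by summing n_cells iid
    lognormal cell densities along each of n_lines lines of sight."""
    rng = random.Random(seed)
    cols = [sum(math.exp(rng.gauss(0, sigma)) for _ in range(n_cells))
            for _ in range(n_lines)]
    m = sum(cols) / n_lines
    var = sum((c - m) ** 2 for c in cols) / n_lines
    return sum((c - m) ** 3 for c in cols) / n_lines / var ** 1.5

# Skewness decays only slowly with column length, so the column-density PDF
# remains far from Gaussian even for fairly long paths, as the abstract argues.
short = column_skewness(n_cells=2, n_lines=5000)
long_ = column_skewness(n_cells=100, n_lines=5000)
```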
Does the probability density imply the equation of motion?
The laws of physics dictate the evolution of matter and radiation. Quantum mechanics postulates that the matter or radiation is associated with a field whose magnitude is interpreted as the probability density, which is the only observable quantity. In general this field is either a single-component or multi-component complex scalar field, whose laws of evolution may be expressed in the form of partial differential equations. One may ask: does the probability density of the complex scalar field imply the evolution of the field? Here we answer this fundamental question by examining a means for measuring the equation of motion of a single-component complex scalar field associated with a non-dissipative and nonlinear system, given measurements of the probability density. Applications of this formalism to a number of systems in condensed matter physics will be discussed
Probability density function modeling for sub-powered interconnects
Pater, Flavius; Amaricǎi, Alexandru
2016-06-01
This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit to the switching delay versus probability of correct switching for sub-powered interconnects. We compare the three mathematical models with Monte Carlo simulations of interconnects for 45 nm CMOS technology supplied at 0.25 V.
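Of the three fits, the Gaussian model is the simplest to reproduce. A sketch fitting by the method of moments; the synthetic delay samples below are hypothetical stand-ins, since the abstract gives no Monte Carlo data:

```python
import math
import random

def fit_gaussian(samples):
    """Fit a Gaussian PDF to samples via the method of moments (mean, std)."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / n)
    def pdf(x):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return mu, sigma, pdf

# Hypothetical switching-delay samples (arbitrary units), standing in for
# the 45 nm Monte Carlo data of the paper.
rng = random.Random(42)
delays = [rng.gauss(1.0, 0.1) for _ in range(10000)]
mu, sigma, pdf = fit_gaussian(delays)
```

The spline and sine-interpolation fits would instead approximate the empirical histogram directly, trading this closed form for flexibility.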
LOFT fuel-rod-transient DNB probability density function studies
Significantly improved DNB safety margins were calculated for LOFT reactor fuel rods by use of probability density functions (PDF) for transient MDNBR. Applicability and sensitivity studies determined that the PDF and resulting nominal MDNBR limits are stable, applicable over a wide range of potential input parameters, and applicable to most transients
Rubaszek, A. [Polska Akademia Nauk, Wroclaw (Poland). Inst. Niskich Temperatur i Badan Strukturalnych; Szotek, Z.; Temmerman, W.M. [Daresbury Lab. (United Kingdom)
2001-07-01
To interpret positron annihilation data in solids in terms of the electron momentum density and electron charge distribution, both the electron-positron interaction and the positron wave function have to be considered explicitly. In the present work we discuss the effect of the shape of the positron wave function on the calculated positron annihilation rates in a variety of solids, for different types of electrons (core, s, p, d, f). We show that the form of the positron distribution in the Wigner-Seitz cell has a crucial effect on the resulting core electron contribution to the positron annihilation characteristics. The same is observed for the localised d and f electrons in transition metals. Finally we study the influence of the positron wave function on the electron-positron momentum density in elemental Si. (orig.)
Probability Density Estimation by Decomposition of Correlation Integral
Jiřina, Marcel; Jiřina jr., M.
- : ISRST, 2008 - (Prasad, B.; Sinha, P.; Ram, A.; Kerre, E.), s. 113-119 ISBN 978-1-60651-000-1. [AIPR 2008. International Conference on Artificial Intelligence and Pattern Recognition. Orlando (US), 07.07.2008-10.07.2008] Institutional research plan: CEZ:AV0Z10300504 Keywords : correlation integral * decomposition of correlation integral * probability density estimation * polynomial approximation * classifier Subject RIV: BA - General Mathematics
LOFT fuel rod transient DNB probability density function studies
Significantly improved calculated DNB safety margins were defined by the development and use of probability density functions (PDF) for transient MDNBR nuclear fuel rods in the Loss of Fluid Test (LOFT) reactor. Calculations for limiting transients and response surface methods were used thereby including transient interactions and trip uncertainties in the MDNBR PDF. Applicability and sensitivity studies determined that the PDF and resulting nominal MDNBR limits are stable, applicable over a wide range of potential input parameters, and applicable to most transients
Vehicle Detection Based on Probability Hypothesis Density Filter
Zhang, Feihu; Knoll, Alois
2016-01-01
In the past decade, vehicle detection has improved significantly. By utilizing cameras, vehicles can be detected in Regions of Interest (ROI) in complex environments. However, vision techniques often suffer from false positives and a limited field of view. In this paper, a LiDAR-based vehicle detection approach is proposed using the Probability Hypothesis Density (PHD) filter. The proposed approach consists of two phases: the hypothesis generation phase, to detect potential objects, and the hypothesis verification phase, to classify objects. The performance of the proposed approach is evaluated in complex scenarios and compared with the state of the art. PMID:27070621
On singular probability densities generated by extremal dynamics
Garcia, Guilherme J. M.; Dickman, Ronald
2003-01-01
Extremal dynamics is the mechanism that drives the Bak-Sneppen model into a (self-organized) critical state, marked by a singular stationary probability density $p(x)$. With the aim of understanding this phenomenon, we study the BS model and several variants via mean-field theory and simulation. In all cases, we find that $p(x)$ is singular at one or more points, as a consequence of extremal dynamics. Furthermore we show that the extremal barrier $x_i$ always belongs to the `prohibited' inter...
Probability density function transformation using seeded localized averaging
Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) in the process of applying the SLA procedure. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: (1) for symmetric distributions, SLA preserves both the mean and the symmetry; (2) for unimodal symmetric distributions, SLA reduces variance, sharpening the distribution's peak. Our results are the first to prove these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with a Gaussian PDF, the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)
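The key property, that averaging even two pulse-heights narrows a Gaussian peak, is easy to verify numerically. The sketch below averages disjoint pairs as a crude stand-in for SLA's dynamic windows; for iid samples the standard deviation, and hence the FWHM, shrinks by a factor of sqrt(2):

```python
import math
import random

def sample_std(xs):
    """Population standard deviation of a list of samples."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

rng = random.Random(7)
# Simulated pulse-heights around a Gaussian peak (e.g. a 662 keV line).
pulses = [rng.gauss(662.0, 10.0) for _ in range(20000)]
# Average disjoint pairs of pulse-heights (crude stand-in for SLA windows).
averaged = [(a + b) / 2 for a, b in zip(pulses[0::2], pulses[1::2])]

FWHM_FACTOR = 2 * math.sqrt(2 * math.log(2))  # FWHM = 2.3548 * sigma (Gaussian)
original_fwhm = FWHM_FACTOR * sample_std(pulses)
averaged_fwhm = FWHM_FACTOR * sample_std(averaged)  # ~ original / sqrt(2)
```

Note this pairwise average stays Gaussian; the point of the paper is that SLA's window-dependent averaging does not, yet still provably sharpens the peak.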
Interactive Visualization of Probability and Cumulative Density Functions
Potter, Kristin
2012-01-01
The probability density function (PDF), and its corresponding cumulative density function (CDF), provide direct statistical insight into the characterization of a random process or field. Typically displayed as a histogram, one can infer probabilities of the occurrence of particular events. When examining a field over some two-dimensional domain in which at each point a PDF of the function values is available, it is challenging to assess the global (stochastic) features present within the field. In this paper, we present a visualization system that allows the user to examine two-dimensional data sets in which PDF (or CDF) information is available at any position within the domain. The tool provides a contour display showing the normed difference between the PDFs and an ansatz PDF selected by the user and, furthermore, allows the user to interactively examine the PDF at any particular position. Canonical examples of the tool are provided to help guide the reader into the mapping of stochastic information to visual cues along with a description of the use of the tool for examining data generated from an uncertainty quantification exercise accomplished within the field of electrophysiology.
Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging
Clark, G A
2004-09-21
The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector by applying Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P{sub FA}, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P{sub D}. This is summarized by the Receiver Operating Characteristic (ROC) curve [10, 11], which is actually a family of curves depicting P{sub D} vs. P{sub FA}, parameterized by varying levels of signal-to-noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P{sub FA} and develop a ROC curve (P{sub D} vs. decision threshold r{sub 0}) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
Munehisa Kasuya
1999-01-01
We define downward price rigidity as the state in which the speed at which prices fall is slower than that in which they rise. Based on this definition, we examine the downward price rigidity of each item that constitutes the core CPI of Japan. That is, according to the results of fractional integration tests on price changes of individual items, we estimate probability density functions in the stationary case and estimate spatial density functions in the nonstationary case. We also test thei...
Chu, Xiaoyong; Hambye, Thomas
2016-01-01
Motivated by the hypothesis that dark matter self-interactions provide a solution to the small-scale structure formation problems, we investigate the possibilities that the relic density of a self-interacting dark matter candidate can proceed from the thermal freeze-out of annihilations into Standard Model particles. We find that scalar and Majorana dark matter in the mass range of $10-500$ MeV, coupled to a slightly heavier massive gauge boson, are the only possible candidates in agreement with multiple current experimental constraints. Here dark matter annihilations take place at a much slower rate than the self-interactions simply because the interaction connecting the Standard Model and the dark matter sectors is small. We also discuss prospects of establishing or excluding these two scenarios in future experiments.
Impact of SUSY-QCD corrections on neutralino-stop co-annihilation and the neutralino relic density
Harz, Julia [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Herrmann, Bjoern [Savoie Univ./CNRS, Annecy-le-Vieux (France). LAPTh; Klasen, Michael [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1; Kovarik, Karol [Karlsruher Institut fuer Technologie, Karlsruhe (Germany). Inst. fuer Theoretische Physik; Le Boulc' h, Quentin [Grenoble Univ./CNRS-IN2P3/INPG, Grenoble (France). Lab. de Physique Subatomique et de Cosmologie
2013-02-15
We have calculated the full O({alpha}{sub s}) supersymmetric QCD corrections to neutralino-stop co-annihilation into electroweak vector and Higgs bosons within the Minimal Supersymmetric Standard Model (MSSM). We performed a parameter study within the phenomenological MSSM and demonstrated that the studied co-annihilation processes are phenomenologically relevant, especially in the context of a 126 GeV Higgs-like particle. By means of an example scenario we discuss the effect of the full next-to-leading order corrections on the co-annihilation cross section and show their impact on the predicted neutralino relic density. We demonstrate that the impact of these corrections on the cosmologically preferred region of parameter space is larger than the current experimental uncertainty of the WMAP data.
Interactive design of probability density functions for shape grammars
Dang, Minh
2015-11-02
A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density function (pdf) over such a shape space and to sample models according to the designed pdf. First, we propose a user interface that enables a user to quickly provide preference scores for selected shapes and suggest sampling strategies to decide which models to present to the user to evaluate. Second, we propose a novel kernel function to encode the similarity between two procedural models. Third, we propose a framework to interpolate user preference scores by combining multiple techniques: function factorization, Gaussian process regression, autorelevance detection, and l1 regularization. Fourth, we modify the original grammars to generate models with a pdf proportional to the user preference scores. Finally, we provide evaluations of our user interface and framework parameters and a comparison to other exploratory modeling techniques using modeling tasks in five example shape spaces: furniture, low-rise buildings, skyscrapers, airplanes, and vegetation.
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
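The coupling described above, an assumed subgrid PDF sampled by Monte Carlo and fed to a nonlinear microphysics scheme, can be illustrated with a minimal sketch. The Gaussian subgrid PDF and the threshold-type `autoconv` process below are illustrative assumptions, not the parameterization used in the paper:

```python
import random

def assumed_pdf_average(f, mean, sigma, n=100_000, seed=0):
    """Monte Carlo estimate of E[f(q)] under an assumed Gaussian subgrid PDF."""
    rng = random.Random(seed)
    return sum(f(rng.gauss(mean, sigma)) for _ in range(n)) / n

def autoconv(q, q_crit=1.0):
    # toy threshold-type "microphysics" process: active only above q_crit
    return max(q - q_crit, 0.0)

grid_mean, subgrid_sigma = 0.9, 0.3
naive = autoconv(grid_mean)        # grid-mean-only evaluation: exactly 0
pdf_avg = assumed_pdf_average(autoconv, grid_mean, subgrid_sigma)
```

Because the process is nonlinear, averaging over the subgrid PDF gives a nonzero tendency even when the grid-mean value is below the threshold, which is the motivation for the assumed-PDF approach.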
Numerical methods for high-dimensional probability density function equations
Cho, H.; Venturi, D.; Karniadakis, G. E.
2016-01-01
In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interactions at low orders, which resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.
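As a toy instance of the kinetic equations mentioned above, the Fokker-Planck equation of an Ornstein-Uhlenbeck process has a known Gaussian stationary solution, which a direct stochastic simulation can be checked against. This is a one-dimensional sketch; the parameter values are arbitrary and the paper's high-dimensional algorithms are not reproduced here:

```python
import random

# Euler-Maruyama sampling of dX = -theta*X dt + sqrt(2*D) dW.
# Its Fokker-Planck equation has the stationary solution
# p(x) ∝ exp(-theta*x**2 / (2*D)): a Gaussian with variance D/theta.
theta, D, dt = 1.0, 0.5, 0.01
rng = random.Random(0)
x, samples = 0.0, []
for step in range(250_000):
    x += -theta * x * dt + (2 * D * dt) ** 0.5 * rng.gauss(0.0, 1.0)
    if step > 50_000:                 # discard burn-in
        samples.append(x)

var = sum(s * s for s in samples) / len(samples)   # should approach D/theta = 0.5
```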
The stationary probability density of a class of bounded Markov processes
Ramli, Muhamad Azfar; Leng, Gerard
2010-01-01
In this paper we generalize a bounded Markov process, described by Stoyanov and Pacheco-González for a class of transition probability functions. A recursive integral equation for the probability density of these bounded Markov processes is derived and the stationary probability density is obtained by solving an equivalent differential equation. Examples of stationary densities for different transition probability functions are given and an application for designing a roboti...
Fitting age-specific fertility rates by a skew-symmetric probability density function
Mazzuco, Stefano; Scarpa, Bruno
2011-01-01
Mixture probability density functions have recently been proposed to describe some fertility patterns characterized by a bi-modal shape. These mixture probability density functions appear to be adequate when the fertility pattern is actually bi-modal but less useful when the shape of age-specific fertility rates is unimodal. A further model is proposed based on skew-symmetric probability density functions. This model is both more parsimonious than mixture distributions and more flexible, sh...
Power-law tails in probability density functions of molecular cloud column density
Brunt, Chris
2015-01-01
Power-law tails are often seen in probability density functions (PDFs) of molecular cloud column densities, and have been attributed to the effect of gravity. We show that extinction PDFs of a sample of five molecular clouds obtained at a few tenths of a parsec resolution, probing extinctions up to A_V ~ 10 magnitudes, are very well described by lognormal functions provided that the field selection is tightly constrained to the cold, molecular zone and that noise and foreground contamination are appropriately accounted for. In general, field selections that incorporate warm, diffuse material in addition to the cold, molecular material will display apparent core+tail PDFs. The apparent tail, however, is best understood as the high extinction part of a lognormal PDF arising from the cold, molecular part of the cloud. We also describe the effects of noise and foreground/background contamination on the PDF structure, and show that these can, if not appropriately accounted for, induce spurious ...
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.;
2013-01-01
Estimation of extreme response and failure probability of structures subjected to ultimate design loads is essential for structural design of wind turbines according to the new standard IEC61400-1. This task is the focus of the present paper by virtue of the probability density evolution method (PDEM), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-megawatt wind turbines, for illustrative purposes, for given mean wind speeds and turbulence levels is investigated through the scheme of the extreme value distribution instead of the approximate schemes of fitted distributions currently used in statistical extrapolation techniques. Besides, comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...
On the discretization of probability density functions and the continuous Rényi entropy
Diógenes Campos
2015-12-01
On the basis of the second mean-value theorem (SMVT) for integrals, a discretization method is proposed with the aim of representing the expectation value of a function with respect to a probability density function in terms of discrete probability theory. This approach is applied to the continuous Rényi entropy, and it is established that a discrete probability distribution can be associated with it in a very natural way. The probability density function for the linear superposition of two coherent states is used to develop a representative example.
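The continuous Rényi entropy H_α = (1-α)⁻¹ ln ∫ p(x)^α dx that the paper discretizes can be sketched numerically. For a standard Gaussian the closed form H_α = ln√(2π) + ln α / (2(α-1)) serves as a check; the simple trapezoidal discretization below is an illustrative stand-in, not the SMVT construction of the paper:

```python
import math

def gauss_pdf(x, sigma=1.0):
    return math.exp(-x * x / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def renyi_entropy(pdf, alpha, lo=-10.0, hi=10.0, n=20001):
    # trapezoidal discretization of the integral in H_alpha
    h = (hi - lo) / (n - 1)
    ys = [pdf(lo + i * h) ** alpha for i in range(n)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return math.log(integral) / (1 - alpha)

alpha = 2.0
numeric = renyi_entropy(gauss_pdf, alpha)
# closed form for a standard Gaussian: ln(sqrt(2*pi)) + ln(alpha)/(2*(alpha-1))
closed = math.log(math.sqrt(2 * math.pi)) + math.log(alpha) / (2 * (alpha - 1))
```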
Microdefects and electron densities in NiTi shape memory alloys studied by positron annihilation
HU Yi-feng; DENG Wen; HAO Wen-bo; YUE Li; HUANG Le; HUANG Yu-yang; XIONG Liang-yue
2006-01-01
The microdefects and free electron densities in B2, R and B19' phases of Ni50.78Ti49.22 alloy were studied by positron lifetime measurements. Comparing the lifetime parameters of the Ni50.78Ti49.22 alloy measured at 295 K and 225 K, it is found that the free electron density of the R phase is lower than that of the B2 phase; the open volume of the defects of the R phase is larger, while the concentration of these defects is lower than that of the B2 phase. The Ni50.78Ti49.22 alloy exhibits B19' phase at 115 K. In comparison with the R phase, the free electron density of the B19' phase increases, the open volume of the defects of the B19' phase reduces, and the concentration of these defects increases. The microdefects and the free electron density play an important role during the multi-step transformations (B2→R→B19' phase transformations) in Ni50.78Ti49.22 alloy with the decrease of temperature.
Superposition rule and entanglement in diagonal and probability representations of density states
Man'ko, Vladimir I.; Marmo, Giuseppe; Sudarshan, E C George
2009-01-01
The quasidistributions corresponding to the diagonal representation of quantum states are discussed within the framework of operator-symbol construction. The tomographic-probability distribution describing the quantum state in the probability representation of quantum mechanics is reviewed. The connection of the diagonal and probability representations is discussed. The superposition rule is considered in terms of the density-operator symbols. The separability and entanglement properties of m...
Velimirovic, Lazar; Peric, Zoran; Stankovic, Miomir; Simic, Nikola
2012-01-01
In this paper, both piecewise linear and piecewise uniform approximations of the probability density function are performed. For the probability density function approximated in these ways, a compressor function is formed. On the basis of the compressor function formed in this way, piecewise linear and piecewise uniform companding quantizers are designed. The design of these companding quantizer models is performed for the Laplacian source at the entrance of the quantizer. The performance estimate of the pr...
[No author listed]
2010-01-01
To analyze the effect of a basic variable on the failure probability in reliability analysis, a moment-independent importance measure of the basic random variable is proposed, and its properties are analyzed and verified. Based on this work, the importance measure of the basic variable on the failure probability is compared with that on the distribution density of the response. By use of the probability density evolution method, a solution is established for the two importance measures, which efficiently avoids the difficulty in solving them. Some numerical and engineering examples are used to demonstrate the proposed importance measure on the failure probability and that on the distribution density of the response. The results show that the proposed importance measure can effectively describe the effect of the basic variable on the failure probability from the distribution density of the basic variable. Additionally, the results show that the established solution based on probability density evolution is efficient for the importance measures.
A note on the existence of transition probability densities for Lévy processes
Knopova, V.; Schilling, R.L.
2010-01-01
We prove several necessary and sufficient conditions for the existence of (smooth) transition probability densities for Lévy processes and isotropic Lévy processes. Under some mild conditions on the characteristic exponent we calculate the asymptotic behaviour of the transition density as t → 0 and t → ∞ and show a ratio-limit theorem.
Heisler, Lori; Goffman, Lisa
2016-01-01
A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…
Pousga Kabore; Husam Baki; Hong Yue; Hong Wang
2005-01-01
This paper presents a linearized approach for the controller design of the shape of output probability density functions for general stochastic systems. A square root approximation to an output probability density function is realized by a set of B-spline functions. This generally produces a nonlinear state space model for the weights of the B-spline approximation. A linearized model is therefore obtained and embedded into a performance function that measures the tracking error of the output probability density function with respect to a given distribution. By using this performance function as a Lyapunov function for the closed loop system, a feedback control input has been obtained which guarantees closed loop stability and realizes perfect tracking. The algorithm described in this paper has been tested on a simulated example and desired results have been achieved.
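The square-root B-spline idea above, approximating √p(x) by a spline expansion so that squaring guarantees a non-negative density, can be sketched with degree-1 (hat-function) B-splines. The target density, knot spacing, and simple collocation fit are illustrative assumptions, not the controller design of the paper:

```python
import math

def sqrt_pdf(x):
    # target: square root of the standard normal density (an assumed example)
    return (2 * math.pi) ** -0.25 * math.exp(-x * x / 4)

knots = [i * 0.25 for i in range(-24, 25)]   # uniform knots on [-6, 6]
w = [sqrt_pdf(t) for t in knots]             # collocation weights

def hat_interp(x):
    # degree-1 B-spline (hat function) expansion: sum_i w_i B_i(x)
    if x <= knots[0] or x >= knots[-1]:
        return 0.0
    j = int((x - knots[0]) / 0.25)
    t = (x - knots[j]) / 0.25
    return w[j] * (1 - t) + w[j + 1] * t

def approx_pdf(x):
    return hat_interp(x) ** 2                # squaring guarantees positivity

# Riemann-sum check that the squared expansion still integrates to ~1
mass = sum(approx_pdf(-6 + k * 0.01) * 0.01 for k in range(1200))
```

The design choice is visible here: working with √p rather than p means no weight combination can ever produce a negative density.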
Joint Delay Doppler Probability Density Functions for Air-to-Air Channels
Michael Walter
2014-01-01
Recent channel measurements indicate that the wide-sense stationary uncorrelated scattering assumption is not valid for air-to-air channels. Therefore, purely stochastic channel models cannot be used. In order to cope with the nonstationarity, a geometric component is included. In this paper we extend a previously presented two-dimensional geometric stochastic model, originally developed for vehicle-to-vehicle communication, to a three-dimensional air-to-air channel model. Novel joint time-variant delay Doppler probability density functions are presented. The probability density functions are derived by using vector calculus and parametric equations of the delay ellipses. This allows us to obtain closed-form mathematical expressions for the probability density functions, which can then be calculated numerically for any delay and Doppler frequency at arbitrary times.
Density probability distribution functions of diffuse gas in the Milky Way
Berkhuijsen, E M
2008-01-01
In a search for the signature of turbulence in the diffuse interstellar medium in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at low and high Galactic latitude |b| are considered separately. The PDF of the average density at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.
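The lognormality test underlying such studies reduces to checking that ln(n) is Gaussian; a minimal sketch with synthetic data follows. The parameters below are arbitrary illustrations, not the measured ISM values:

```python
import math
import random
import statistics

rng = random.Random(42)
mu, sigma = 0.5, 0.8                         # hypothetical parameters of ln(n)
samples = [rng.lognormvariate(mu, sigma) for _ in range(50_000)]

# A lognormal density is diagnosed by taking logs: ln(n) should be Gaussian,
# so its sample mean and standard deviation recover mu and sigma.
logs = [math.log(s) for s in samples]
mu_hat = statistics.fmean(logs)
sigma_hat = statistics.stdev(logs)
```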
Modelling the Probability Density Function of IPTV Traffic Packet Delay Variation
Michal Halas
2012-01-01
This article deals with modelling the probability density function of IPTV traffic packet delay variation; such a model is useful for efficient de-jitter buffer estimation. When an IP packet travels across a network, it experiences delay and delay variation. This variation is caused by routing, queueing systems and other influences such as the processing delay of the network nodes. Separating these (at least) three types of delay variation requires a way to measure each of them individually. This work focuses on the delay variation caused by queueing systems, which has the main influence on the form of the probability density function.
An analytical formula for the optical-optical double-resonance multi-photon ionization (OODR-MPI) probability is derived from the time-dependent density-matrix equations that describe the interaction of light with matter. Based on the formula, the variation of the multi-photon ionization (MPI) probability with laser resonance detuning, Rabi frequency, laser pulse duration and ionization rate is investigated theoretically. It is shown that the MPI probability decreases with increasing laser resonance detuning, eventually approaching zero. The influence of the pump laser resonance detuning on the ionization probability is more important than that of the probe laser: it affects not only the Rabi frequency for saturation, but also the saturation value of the MPI probability. The MPI probability increases with Rabi frequency, laser pulse duration and ionization rate. It is also found that although the populations in the ground, first and second resonance states evolve differently at the beginning of the laser irradiation, they all decrease to zero as time goes on; it is then that the ionization probability reaches its maximum value. Thus a long laser pulse duration and a high laser intensity are favorable for improving the MPI probability. These theoretical research results can provide a useful guide for the practical application of OODR-MPI spectroscopy. - Highlights: • An analytical expression for the OODR-MPI probability has been derived. • The MPI probability decreases with increasing laser resonance detuning. • The influence of the pump laser on the MPI probability is larger than that of the probe laser. • A larger laser pulse duration and intensity favor a higher MPI probability
Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities
Kinney, Justin B
2014-01-01
Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.
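Maximum entropy density estimation of the kind discussed above picks, among all distributions satisfying the constraints, the one of maximal entropy. With a fixed mean on a finite support the maximizer is an exponential family whose parameter can be found by bisection. This is a discrete sketch under an assumed support and target mean (chosen below the uniform mean, so the multiplier is non-negative), not the field-theory formulation of the paper:

```python
import math

def maxent_mean(support, target_mean, lo=0.0, hi=50.0, tol=1e-12):
    """Maximum-entropy distribution on `support` with a prescribed mean.

    The constrained maximizer is an exponential family p_i ∝ exp(-lam*x_i);
    lam is found by bisection (the mean is monotone decreasing in lam).
    Assumes target_mean is below the uniform mean, so lam >= 0.
    """
    def mean(lam):
        w = [math.exp(-lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) > target_mean:
            lo = mid          # mean still too large -> increase lam
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

# hypothetical example: support {0,...,9}, mean pinned at 3.0
p = maxent_mean(list(range(10)), 3.0)
mean_p = sum(x * px for x, px in zip(range(10), p))
```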
Hydrogen is the main constituent of plasmas in the HANBIT magnetic mirror device; therefore, measurement of the emission from excited levels of hydrogen atoms is an important diagnostic tool. From the emissivity of Hα radiation one can derive quantities such as the neutral hydrogen density and the source rate. An unbiased and consistent probability-theory-based approach within the framework of Bayesian inference is applied to the reconstruction of Hα emissivity profiles and hydrogen neutral density profiles in the HANBIT magnetic mirror device.
Compound kernel estimates for the transition probability density of a Lévy process in R^n
Knopova, V.
2013-01-01
We construct in the small-time setting the upper and lower estimates for the transition probability density of a Lévy process in R^n. Our approach relies on the complex analysis technique and the asymptotic analysis of the inverse Fourier transform of the characteristic function of the respective process.
Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers
MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara
2013-01-01
Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
张路平; 王鲁平; 李飚; 赵明
2015-01-01
In order to improve the performance of the particle filter (PF) implementation of the probability hypothesis density (PHD) algorithm in terms of target number estimation and state extraction for multiple targets, a new probability hypothesis density filter algorithm based on marginalized particles and kernel density estimation is proposed, which utilizes the idea of the marginalized particle filter to enhance the estimation performance of the PHD. The state variables are decomposed into linear and nonlinear parts. The particle filter is adopted to predict and estimate the nonlinear states of the multi-target system after dimensionality reduction, while the Kalman filter is applied to estimate the linear parts under the linear Gaussian condition. Embedding the information of the linear states into the estimated nonlinear states helps to reduce the estimation variance and improve the accuracy of target number estimation. The mean-shift kernel density estimation, which inherently searches for peak values via an adaptive gradient-ascent iteration, is introduced to cluster particles and extract target states; it is independent of the target number and can converge to the local peak positions of the PHD distribution while avoiding errors due to inaccuracy in modeling and parameter estimation. Experiments show that the proposed algorithm can obtain higher tracking accuracy when using fewer sampling particles and has lower computational complexity compared with the PF-PHD.
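The mean-shift step used for state extraction is a fixed-point iteration toward a mode of a kernel density estimate. A one-dimensional sketch follows; the synthetic particles, the two hypothetical target positions, and the bandwidth are illustrative assumptions:

```python
import math
import random

rng = random.Random(1)
# synthetic particles concentrated around two hypothetical targets at 0 and 5
pts = ([rng.gauss(0.0, 0.3) for _ in range(300)]
       + [rng.gauss(5.0, 0.3) for _ in range(300)])

def mean_shift(x, pts, h=0.5, iters=50):
    """Fixed-point iteration toward a mode of a Gaussian-kernel density estimate."""
    for _ in range(iters):
        w = [math.exp(-((p - x) / h) ** 2 / 2) for p in pts]
        x = sum(wi * pi for wi, pi in zip(w, pts)) / sum(w)
    return x

mode_a = mean_shift(4.0, pts)   # starts in the basin of the target near 5
mode_b = mean_shift(0.8, pts)   # starts in the basin of the target near 0
```

As the abstract notes, the iteration needs no prior knowledge of the target number: each starting point simply climbs to the nearest local density peak.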
Federrath, Christoph; Schmidt, Wolfram
2008-01-01
The probability density function (PDF) of the gas density in turbulent supersonic flows is investigated with high-resolution numerical simulations. In a systematic study, we compare the density statistics of compressible turbulence driven by the usually adopted solenoidal forcing (divergence-free) and by compressive forcing (curl-free). Our results are in agreement with studies using solenoidal forcing. However, compressive forcing yields a significantly broader density distribution with standard deviation ~3 times larger at the same rms Mach number. The standard deviation-Mach number relation used in analytical models of star formation is reviewed and a modification of the existing expression is proposed, which takes into account the ratio of solenoidal and compressive modes of the turbulence forcing.
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
Analytical Formulation of the Single-visit Completeness Joint Probability Density Function
Garrett, Daniel; Savransky, Dmitry
2016-09-01
We derive an exact formulation of the multivariate integral representing the single-visit obscurational and photometric completeness joint probability density function for arbitrary distributions for planetary parameters. We present a derivation of the region of nonzero values of this function, which extends previous work, and discuss the time and computational complexity costs and benefits of the method. We present a working implementation and demonstrate excellent agreement between this approach and Monte Carlo simulation results.
Falk, Anne Katrine Vinther; Gryning, Sven-Erik
In this model for atmospheric dispersion, particles are simulated by the Langevin equation, which is a stochastic differential equation. It uses the probability density function (PDF) of the vertical velocity fluctuations as input. The PDF is constructed as an expansion in Hermite polynomials. In several previous works where the PDF was expressed this way, further use was hampered by the fact that the PDF takes negative values for a range of velocities. This problem is overcome in the present formulation...
Osmar Abílio de Carvalho Júnior; Luz Marilda de Moraes Maciel; Ana Paula Ferreira de Carvalho; Renato Fontes Guimarães; Cristiano Rosa Silva; Roberto Arnaldo Trancoso Gomes; Nilton Correia Silva
2014-01-01
Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR); it causes a characteristic granular, noise-like appearance and complicates image classification. In SAR image analysis, spatial information can be a particular benefit for denoising and for mapping classes characterized by a statistical distribution of the pixel intensities from a complex and heterogeneous spectral response. This paper proposes the Probability Density Components Analysis (PDCA), a new alternative that c...
Energy Quantization and Probability Density of Electron in Intense-Field-Atom Interactions
敖淑艳; 程太旺; 李晓峰; 吴令安; 付盘铭
2003-01-01
We find that, due to the quantum correlation between the electron and the field, the electronic energy also becomes quantized, manifesting the particle aspect of light in the electron-light interaction. The probability amplitude of finding the electron with a given energy is given by a generalized Bessel function, which can be represented as a coherent superposition of contributions from a few electronic quantum trajectories. This concept is illustrated by comparing the spectral density of the electron with the laser-assisted recombination spectrum.
Hall, Peter
1992-01-01
The bootstrap is a poor estimator of bias in problems of curve estimation, and so bias must be corrected by other means when the bootstrap is used to construct confidence intervals for a probability density. Bias may either be estimated explicitly, or allowed for by undersmoothing the curve estimator. Which of these two approaches is to be preferred? In the present paper we address this question from the viewpoint of coverage accuracy, assuming a given number of derivatives of the unknown den...
Ossenkopf, Volker; Schneider, Nicola; Federrath, Christoph; Klessen, Ralf S
2016-01-01
Probability distribution functions (PDFs) of column densities are an established tool to characterize the evolutionary state of interstellar clouds. Using simulations, we show to what degree their determination is affected by noise, line-of-sight contamination, field selection, and the incomplete sampling in interferometric measurements. We solve the integrals that describe the convolution of a cloud PDF with contaminating sources and study the impact of missing information on the measured column density PDF. The effect of observational noise can be easily estimated and corrected for if the root mean square (rms) of the noise is known. For σ_noise values below 40% of the typical cloud column density, N_peak, this involves almost no degradation of the accuracy of the PDF parameters. For higher noise levels and narrow cloud PDFs the width of the PDF becomes increasingly uncertain. A contamination by turbulent foreground or background clouds can be removed as a constant shield if the PDF of the c...
Lewis, Jesse S; Logan, Kenneth A; Alldredge, Mat W; Bailey, Larissa L; VandeWoude, Sue; Crooks, Kevin R
2015-10-01
Urbanization is a primary driver of landscape conversion, with far-reaching effects on landscape pattern and process, particularly related to the population characteristics of animals. Urbanization can alter animal movement and habitat quality, both of which can influence population abundance and persistence. We evaluated three important population characteristics (population density, site occupancy, and species detection probability) of a medium-sized and a large carnivore across varying levels of urbanization. Specifically, we studied bobcat and puma populations across wildland, exurban development, and wildland-urban interface (WUI) sampling grids to test hypotheses evaluating how urbanization affects wild felid populations and their prey. Exurban development appeared to have a greater impact on felid populations than did habitat adjacent to a major urban area (i.e., WUI); estimates of population density for both bobcats and pumas were lower in areas of exurban development compared to wildland areas, whereas population density was similar between WUI and wildland habitat. Bobcats and pumas were less likely to be detected in habitat as the amount of human disturbance associated with residential development increased at a site, which was potentially related to reduced habitat quality resulting from urbanization. However, occupancy of both felids was similar between grids in both study areas, indicating that this population metric was less sensitive than density. At the scale of the sampling grid, detection probability for bobcats in urbanized habitat was greater than in wildland areas, potentially due to restrictive movement corridors and funneling of animal movements in landscapes influenced by urbanization. Occupancy of important felid prey (cottontail rabbits and mule deer) was similar across levels of urbanization, although elk occupancy was lower in urbanized areas. Our study indicates that the conservation of medium- and large-sized felids associated with
Spectral discrete probability density function of measured wind turbine noise in the far field.
Ashtiani, Payam; Denison, Adelaide
2015-01-01
Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097
A unified optical damage criterion based on the probability density distribution of detector signals
Somoskoi, T.; Vass, Cs.; Mero, M.; Mingesz, R.; Bozoki, Z.; Osvay, K.
2013-11-01
Various methods and procedures have been developed so far to test laser-induced optical damage. The question naturally arises as to the respective sensitivities of these diverse methods. For a suitable comparison, the processing of the measured primary signal has to be at least similar across the various methods, and one needs to establish a proper damage criterion that is universally applicable to every method. We defined damage criteria based on the probability density distribution of the obtained detector signals, determined by the kernel density estimation procedure. We tested the entire evaluation procedure with four well-known detection techniques: direct observation of the sample by optical microscopy; monitoring of the change in the light-scattering power of the target surface; and detection of the generated photoacoustic waves, both in the bulk of the sample and in the surrounding air.
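The kernel-density-based criterion described above can be sketched as follows. This is a minimal illustration, not the authors' code: the signal values, component parameters, and the 99th-percentile cut are all hypothetical choices for demonstration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical detector signals: a background population plus a smaller
# population of higher readings associated with damage events
signals = np.concatenate([rng.normal(1.0, 0.1, 900), rng.normal(2.5, 0.3, 100)])

kde = gaussian_kde(signals)           # kernel density estimate of the signal PDF
grid = np.linspace(0.0, 4.0, 400)
pdf = kde(grid)

# One possible damage criterion (an assumption for illustration): flag
# signals beyond the 99th percentile of the estimated density
dx = grid[1] - grid[0]
cdf = np.cumsum(pdf) * dx
threshold = grid[np.searchsorted(cdf, 0.99)]
```

Because the same density-estimation step can be applied to microscopy, scattering, and photoacoustic signals alike, criteria defined on the estimated PDF are directly comparable across detection techniques.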
Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-06-15
The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications. PMID:27304274
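The K distribution mentioned above can be generated through its standard compound representation (gamma-modulated exponential speckle). The parameter values below are assumptions chosen to mimic a low scatterer density, not values from the Letter.

```python
import numpy as np

rng = np.random.default_rng(1)
shape, mean_int = 1.5, 1.0            # small shape parameter -> low scatterer density

# Compound representation of the K distribution: the local mean intensity is
# gamma-distributed, and the speckle intensity is exponential about that mean.
local_mean = rng.gamma(shape, mean_int / shape, size=200_000)
intensity = rng.exponential(local_mean)
amplitude = np.sqrt(intensity)

# Normalized second intensity moment: <I^2>/<I>^2 = 2(1 + 1/shape) for
# K statistics, versus exactly 2 for fully developed (Rayleigh) speckle.
m2 = np.mean(intensity**2) / np.mean(intensity) ** 2
```

The excess of `m2` over 2 quantifies the departure from Gaussian statistics that the Letter reports for low scatterer densities; as the shape parameter grows, the K distribution approaches the Gaussian (Rayleigh amplitude) limit.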
Joint probability density function of the stochastic responses of nonlinear structures
Chen Jianbing; Li Jie
2007-01-01
The joint probability density function (PDF) of different structural responses is a very important topic in the stochastic response analysis of nonlinear structures. In this paper, the probability density evolution method, which is successfully developed to capture the instantaneous PDF of an arbitrary single response of interest, is extended to evaluate the joint PDF of any two responses. A two-dimensional partial differential equation in terms of the joint PDF is established. The strategy of selecting representative points via the number theoretical method and sieved by a hyper-ellipsoid is outlined. A two-dimensional difference scheme is developed. The free vibration of an SDOF system is examined to verify the proposed method, and a frame structure exhibiting hysteresis subjected to stochastic ground motion is investigated. It is pointed out that the correlation of different responses results from the fact that randomness of different responses comes from the same set of basic random parameters involved. In other words, the essence of the probabilistic correlation is a physical correlation.
Han Liwei
2014-07-01
Monitoring data on an earth-rockfill dam constitute a form of spatial data. Such data involve considerable uncertainty owing to limited measurement information, material parameters, load, geometry, initial conditions, boundary conditions and the calculation model, so the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to handle the uncertain transition between a qualitative concept and its quantitative description. An improved algorithm for the cloud probability distribution density, based on a backward cloud generator, was then proposed and used to effectively convert parcels of accurate data into concepts describable by appropriate qualitative linguistic values. Such a qualitative description is given by the cloud numerical characteristics {Ex, En, He}, which represent the characteristics of all cloud drops. The algorithm was applied to the observation data of a piezometric tube in an earth-rockfill dam. Experimental results showed that the proposed algorithm is feasible and reveals the changing regularity of the piezometric tube's water level, so that damage due to seepage in the dam body can be detected.
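A commonly cited form of the backward cloud generator recovers {Ex, En, He} from sample moments. The sketch below uses that textbook estimator, not necessarily the paper's improved variant, and the forward-cloud parameters are made-up test values.

```python
import numpy as np

def backward_cloud(x):
    """Textbook backward cloud generator: estimate the cloud numerical
    characteristics {Ex, En, He} from a sample of cloud drops."""
    x = np.asarray(x, dtype=float)
    ex = x.mean()                                       # expectation Ex
    en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()   # entropy En
    he = np.sqrt(max(x.var(ddof=1) - en**2, 0.0))       # hyper-entropy He
    return ex, en, he

rng = np.random.default_rng(2)
# Forward cloud for testing: drops x ~ N(Ex, En'^2) with En' ~ N(En, He^2)
ex0, en0, he0 = 10.0, 1.0, 0.1
enp = rng.normal(en0, he0, 50_000)
drops = rng.normal(ex0, np.abs(enp))
ex, en, he = backward_cloud(drops)
```

Running the forward generator and then the backward one, as here, is the usual way to check that the three characteristics are recovered from the drops.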
Annihilating Asymmetric Dark Matter
Bell, Nicole F; Shoemaker, Ian M
2014-01-01
The relic abundance of particle and antiparticle dark matter (DM) need not be vastly different in thermal asymmetric dark matter (ADM) models. By considering the effect of a primordial asymmetry on the thermal Boltzmann evolution of coupled DM and anti-DM, we derive the requisite annihilation cross section. This is used in conjunction with CMB and Fermi-LAT gamma-ray data to impose a limit on the number density of anti-DM particles surviving thermal freeze-out. When the extended gamma-ray emission from the Galactic Center is reanalyzed in a thermal ADM framework, we find that annihilation into $\tau$ leptons prefers anti-DM number densities of 1-4$\%$ that of DM, while the $b$-quark channel prefers 50-100$\%$.
The main work on positron annihilation at Harwell (UK) has been the application of the technique to technological problems concerning the effects of radiation damage and mechanical phenomena, such as fatigue and creep, on the properties of materials. Three experimental techniques for studying positron annihilation in solids are documented in this article: nuclear pulse counting methods, angular correlation, and the Doppler method. The irradiation of metals and alloys with fast neutrons at high temperatures in a reactor can cause voids to develop in the material. Defects are also produced by the plastic deformation of metals and alloys. This opens up the possibility of using positron annihilation as a practical non-destructive tool to assess mechanical damage in materials. Harwell has also been able to make measurements on the inside surface of a hole in a metal sample and on variously shaped notched and cracked test pieces, which means that it is possible to apply the technique to engineering components.
Riggs, Peter J.
2013-01-01
Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
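The momentum probability density the students are asked to compute can be obtained directly from the Fourier transform of the position wavefunction. The sketch below does this numerically for the ground state of an infinite well (units with hbar = 1, grid sizes chosen for illustration); it is a generic textbook calculation, not the article's worked example.

```python
import numpy as np

# Ground state (n = 1) of a particle in an infinite well of width L; hbar = 1.
L, n = 1.0, 1
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)    # position wavefunction

# Momentum-space wavefunction: Fourier transform of psi over the well only,
# since psi vanishes outside the walls
p = np.linspace(-60.0, 60.0, 801)
phi = (psi[None, :] * np.exp(-1j * p[:, None] * x[None, :])).sum(axis=1) * dx
phi /= np.sqrt(2.0 * np.pi)
rho_p = np.abs(phi) ** 2                              # momentum probability density
```

The resulting density is a smooth, continuous function of p: momentum is not restricted to the discrete values one might naively read off from the quantised energies, which is precisely the conceptual point at issue.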
Crowe, D.E.; Longshore, K.M.
2010-01-01
We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-2004). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. At our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.
Multiple-streaming and the Probability Distribution of Density in Redshift Space
Hui, L; Shandarin, S F; Hui, Lam; Kofman, Lev; Shandarin, Sergei F.
1999-01-01
We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple-streaming using the Zel'dovich approximation (ZA), and compute the average number of streams in real and redshift-space. It is found that multiple-streaming can be significant in redshift-space but negligible in real-space, even at moderate values of the linear fluctuation amplitude ($\sigma < 1$). Moreover, unlike their real-space counterparts, redshift-space multiple-streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which operate even when the real-space density field is quite linear, could suppress the classic compression of redshift-structures predicted by linear theory (Kaiser 1987). We also compute using the ZA the probability distribution function (PDF) of density, as well as $S_3$, in real and redshift-space, and compare it with the PD...
PDE-Foam: A probability density estimation method using self-adapting phase-space binning
Probability density estimation (PDE) is a multi-variate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte-Carlo (MC) simulations in a multi-dimensional phase space. In this paper, we present a modification of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space in a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multi-dimensional phase space, minimising the variance of the signal and background densities inside the cells. The implementation of the binning algorithm (PDE-Foam) is based on the MC event-generation package Foam. We present performance results for representative examples (toy models) and discuss the dependence of the obtained results on the choice of parameters. The new PDE-Foam shows improved classification capability for small training samples and reduced classification time compared to the original PDE method based on range searching.
Liodakis, I
2015-01-01
In a companion paper we have constructed a new statistical model for blazar populations, which reproduces the apparent velocity and redshift distributions from the MOJAVE survey while assuming single power law distributions for the Lorentz factors and the unbeamed monochromatic radio luminosity. Treating two separate cases, one for the BL Lac objects (BL Lacs) and one for the Flat Spectrum Radio Quasars (FSRQs), we calculated the distribution of the timescale modulation factor $\Delta t'/\Delta t$ which quantifies the change in observed timescales compared to the rest-frame ones due to redshift and relativistic compression. We found that $\Delta t'/\Delta t$ follows an exponential distribution with a mean depending on the flux limit of the sample, for both classes. In this work we produce the mathematical formalism that allows us to use this information in order to uncover the underlying rest-frame probability density function (PDF) of observable/measurable timescales of blazar jets, by fitting their observe...
Wu Xinhui; Huang Gaoming; Gao Jun
2013-01-01
In Bayesian multi-target filtering, knowledge of measurement noise variance is very important. Significant mismatches in noise parameters will result in biased estimates. In this paper, a new particle filter for a probability hypothesis density (PHD) filter handling unknown measurement noise variances is proposed. The approach is based on marginalizing the unknown parameters out of the posterior distribution by using variational Bayesian (VB) methods. Moreover, the sequential Monte Carlo method is used to approximate the posterior intensity considering non-linear and non-Gaussian conditions. Unlike other particle filters for this challenging class of PHD filters, the proposed method can adaptively learn the unknown and time-varying noise variances while filtering. Simulation results show that the proposed method improves estimation accuracy in terms of both the number of targets and their states.
Sadeh, Iftach; Lahav, Ofer
2015-01-01
We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister and Lahav (2004). Large photometric galaxy surveys are important for cosmological studies, and in particular for characterizing the nature of dark energy. The success of such surveys greatly depends on the ability to measure photo-zs, based on limited spectral data. ANNz2 utilizes multiple machine learning methods, such as artificial neural networks, boosted decision/regression trees and k-nearest neighbours. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions (PDFs) in two different ways. In addition, estimators are incorporated to mitigate possible problems of spectroscopic training samples which are not representative or are incomplete. ANNz2 is also adapted to provide optimized solution...
Dehghani, Hossein; Mitra, Aditi
2016-05-01
Results are presented for the occupation probabilities and current densities of bulk and edge states of half-filled graphene in a cylindrical geometry and irradiated by a circularly polarized laser. It is assumed that the system is closed and that the laser has been switched on as a quench. Laser parameters corresponding to some representative topological phases are studied: one where the Chern number of the Floquet bands equals the number of chiral edge modes, a second where anomalous edge states appear in the Floquet Brillouin zone boundaries, and a third where the Chern number is zero, yet topological edge states appear at the center and boundaries of the Floquet Brillouin zone. Qualitative differences are found between the high-frequency off-resonant and low-frequency on-resonant laser: edge states arising from resonant processes are occupied with a high effective temperature, while edge states arising from off-resonant processes are occupied with a low effective temperature. For an ideal half-filled system where only one of the bands in the Floquet Brillouin zone is occupied and the other empty, particle-hole and inversion symmetry of the Floquet Hamiltonian implies zero current density. However the laser switch-on protocol breaks the inversion symmetry, resulting in a net cylindrical sheet of current density at steady state. Due to the underlying chirality of the system, this current density profile is associated with a net charge imbalance between the top and bottom of the cylinders.
The IPCC Fourth Assessment Report (Meehl et al. 2007) presents multi-model means of the CMIP3 simulations as projections of the global climate change over the 21st century under several SRES emission scenarios. To assess the possible range of change for Australia based on the CMIP3 ensemble, we can follow Whetton et al. (2005) and use the 'pattern scaling' approach, which separates the uncertainty in the global mean warming from that in the local change per degree of warming. This study presents several ways of representing these two factors as probability density functions (PDFs). The beta distribution, a smooth, bounded function allowing skewness, is found to provide a useful representation of the range of CMIP3 results. A weighting of models based on their skill in simulating seasonal means in the present climate over Australia is included. Dessai et al. (2005) and others have used Monte-Carlo sampling to recombine such global warming and scaled change factors into values of net change. Here, we use a direct integration of the product across the joint probability space defined by the two PDFs. The result is a cumulative distribution function (CDF) for change, for each variable, location, and season. The median of this distribution provides a best estimate of change, while the 10th and 90th percentiles represent a likely range. The probability of exceeding a specified threshold can also be extracted from the CDF. The presentation focuses on changes in Australian temperature and precipitation at 2070 under the A1B scenario. However, the assumption of linearity behind pattern scaling allows results for different scenarios and times to be simply obtained. In the case of precipitation, which must remain non-negative, a simple modification of the calculations (based on decreases being exponential with warming) is used to avoid unrealistic results. These approaches are currently being used for the new CSIRO/Bureau of Meteorology climate projections.
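The direct integration of the product over the joint probability space can be sketched as follows. Both beta PDFs, their ranges, and the 20% threshold are invented illustrative numbers, not CMIP3-derived values.

```python
import numpy as np
from scipy.stats import beta

# Hypothetical PDFs: global warming G (deg C) and local change per degree S
g = np.linspace(1.0, 4.0, 301)                  # assumed warming range
s = np.linspace(-10.0, 20.0, 301)               # assumed % change per degree
fg = beta(2, 3, loc=1.0, scale=3.0).pdf(g)      # skewed beta over [1, 4]
fs = beta(4, 2, loc=-10.0, scale=30.0).pdf(s)   # beta over [-10, 20]

dg, ds = g[1] - g[0], s[1] - s[0]
joint = fg[:, None] * fs[None, :] * dg * ds     # joint probability mass on the grid
net = g[:, None] * s[None, :]                   # net change = G x S

# Direct integration of the product across the joint space gives the CDF
thresholds = np.linspace(net.min(), net.max(), 200)
cdf = np.array([joint[net <= t].sum() for t in thresholds])
median = thresholds[np.searchsorted(cdf, 0.5)]
p_exceed_20 = 1.0 - joint[net <= 20.0].sum()    # P(net change > 20%)
```

Unlike Monte-Carlo recombination, this grid integration is deterministic, so the median, the 10th/90th percentiles, and exceedance probabilities can all be read off the same CDF without sampling noise.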
Osmar Abílio de Carvalho Júnior
2014-04-01
Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR), causing the usual noise-like granular aspect and complicating image classification. In SAR image analysis, spatial information can be of particular benefit for denoising and for mapping classes characterized by a statistical distribution of pixel intensities from a complex and heterogeneous spectral response. This paper proposes Probability Density Components Analysis (PDCA), a new alternative that combines filtering and frequency histograms to improve the classification procedure for single-channel SAR images. The method was tested on L-band SAR data from the Advanced Land Observation System (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) sensor. The study area is located in the Brazilian Amazon rainforest, northern Rondônia State (municipality of Candeias do Jamari), containing forest and land-use patterns. The proposed algorithm uses a moving window over the image, estimating the probability density curve in different image components; a single input image therefore generates a multi-component output. The multi-components should first be treated by noise-reduction methods such as maximum noise fraction (MNF) or noise-adjusted principal components (NAPC); both methods enable noise reduction as well as the ordering of multi-component data in terms of image quality. In this paper, the NAPC applied to the multi-components provided large reductions in noise levels, and color composites based on the first NAPC enhanced the classification of different surface features. For the spectral classification, the Spectral Correlation Mapper and Minimum Distance classifiers were used. The results obtained were similar to the visual interpretation of optical images from TM-Landsat and Google Maps.
Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation
Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.
2011-05-15
Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin with the result that the probability of a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal offsetting fault.
Emilio Gómez-Lázaro
2016-02-01
The Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on Weibull mixtures to characterize the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values and wind power curtailment.
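The AIC/BIC model selection step can be sketched with the standard formulas. The log-likelihood values below are made-up numbers standing in for fitted 1-, 2- and 3-component Weibull mixtures; only the criteria themselves are standard.

```python
import numpy as np

def aic(loglik: float, k: int) -> float:
    return 2 * k - 2 * loglik

def bic(loglik: float, k: int, n: int) -> float:
    return k * np.log(n) - 2 * loglik

# Hypothetical maximized log-likelihoods for 1-, 2- and 3-component Weibull
# mixtures fitted to one year of hourly aggregated wind power data; each
# component contributes a shape, a scale and a mixture weight.
n = 8760
loglik = {1: -12500.0, 2: -12380.0, 3: -12375.0}
n_params = {m: 3 * m - 1 for m in loglik}   # weights constrained to sum to one

best_aic = min(loglik, key=lambda m: aic(loglik[m], n_params[m]))
best_bic = min(loglik, key=lambda m: bic(loglik[m], n_params[m], n))
```

With these illustrative numbers BIC, whose penalty grows with the sample size, settles on two components while AIC accepts the marginal improvement of a third; disagreements of exactly this kind are why the paper examines both criteria.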
Development and evaluation of probability density functions for a set of human exposure factors
Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.
1999-06-01
The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999 will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.
An analytical expression is obtained for the probability density function of the multiplication factor of an array of spheres when each sphere is displaced in a random fashion from its initial position. Two cases are considered: (1) spheres in an infinite background medium in which the total cross section in spheres and medium is the same, and (2) spheres in a void. In all cases we use integral transport theory and cast the problem into one involving average fluxes in the spheres which interact via collision probabilities. The statistical aspects of the problem are treated by first-order perturbation theory, and the general conclusion is that, when the number of spheres exceeds about 5, the reduced multiplication factor $\xi = (k - k_0)/k_0$, where $k_0$ is the unperturbed value, is given accurately by the Gaussian distribution $P(\xi) = \frac{1}{\sqrt{2\pi}\,\sigma D_T} \exp\left(-\frac{\xi^2}{2\sigma^2 D_T^2}\right)$. The partial standard deviation is $\sigma = 2\delta/\sqrt{3}$, $\delta$ being the maximum movement of a sphere from its equilibrium position, and $D_T$ is a function of the system properties and geometry. Some numerical results are given to illustrate the magnitude of the effects, and the accuracy of diffusion theory for this type of problem is also assessed. The overall accuracy of the perturbation method is assessed against an essentially exact result obtained using simulation, thereby enabling the range of validity of perturbation theory to be investigated.
Models for the probability densities of the turbulent plasma flux in magnetized plasmas
Bergsaker, A. S.; Fredriksen, Å; Pécseli, H. L.; Trulsen, J. K.
2015-10-01
Observations of turbulent transport in magnetized plasmas indicate that plasma losses can be due to coherent structures or bursts of plasma rather than a classical random walk or diffusion process. A model for synthetic data based on coherent plasma flux events is proposed, where all basic properties can be obtained analytically in terms of a few control parameters. One basic parameter in the present case is the density of burst events in a long time-record, together with parameters in a model of the individual pulse shapes and the statistical distribution of these parameters. The model and its extensions give the probability density of the plasma flux. An interesting property of the model is a prediction of a near-parabolic relation between skewness and kurtosis of the statistical flux distribution for a wide range of parameters. The model is generalized by allowing for an additive random noise component. When this noise dominates the signal we can find a transition to standard results for Gaussian random noise. Applications of the model are illustrated by data from the toroidal Blaamann plasma.
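A synthetic-data model of the kind described can be sketched as a filtered Poisson (shot-noise) process. The pulse shape, amplitude distribution, and control parameters below are generic assumptions, not the paper's exact model; for this particular choice (one-sided exponential pulses with exponentially distributed amplitudes) the cumulants give skewness $S = 2/\sqrt{\gamma}$ and excess kurtosis $K = 6/\gamma$ with $\gamma$ the burst density, hence the parabolic relation $K = \tfrac{3}{2}S^2$.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(3)
dt, n = 0.01, 1_000_000
burst_rate, tau = 0.5, 1.0                 # assumed control parameters

# Synthetic flux: Poisson-arriving bursts with exponentially distributed
# amplitudes, each convolved with an exponential pulse shape
counts = rng.poisson(burst_rate * dt, n).astype(float)
amps = counts * rng.exponential(1.0, n)    # amplitude attached to each arrival
pulse = np.exp(-np.arange(0.0, 10.0 * tau, dt) / tau)
flux = fftconvolve(amps, pulse, mode="full")[:n]

S = skew(flux)
K = kurtosis(flux)                         # excess kurtosis
```

Sweeping the burst density (and the noise level, in the generalized model) traces out the near-parabolic skewness-kurtosis curve reported in the paper.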
Lu, C.; Liu, Y.; Niu, S.; Vogelmann, A. M.
2012-12-01
In situ aircraft cumulus observations from the RACORO field campaign are used to estimate entrainment rate for individual clouds using a recently developed mixing fraction approach. The entrainment rate is computed based on the observed state of the cloud core and the state of the air that is laterally mixed into the cloud at its edge. The computed entrainment rate decreases when the air is entrained from increasing distance from the cloud core edge; this is because the air farther away from cloud edge is drier than the neighboring air that is within the humid shells around cumulus clouds. Probability density functions of entrainment rate are well fitted by lognormal distributions at different heights above cloud base for different dry air sources (i.e., different source distances from the cloud core edge). Such lognormal distribution functions are appropriate for inclusion into future entrainment rate parameterizations in large-scale models. To the authors' knowledge, this is the first time that probability density functions of entrainment rate have been obtained in shallow cumulus clouds based on in situ observations. The reason for the wide spread of entrainment rate is that the observed clouds are affected by entrainment mixing processes to different extents, which is verified by the relationships between the entrainment rate and cloud microphysics/dynamics. The entrainment rate is negatively correlated with liquid water content and cloud droplet number concentration due to the dilution and evaporation in entrainment mixing processes. The entrainment rate is positively correlated with relative dispersion (i.e., ratio of standard deviation to mean value) of liquid water content and droplet size distributions, consistent with the theoretical expectation that entrainment mixing processes are responsible for microphysics fluctuations and spectral broadening. The entrainment rate is negatively correlated with vertical velocity and dissipation rate because entrainment
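The lognormal fitting step can be sketched with a maximum-likelihood fit pinned at zero location, as appropriate for a strictly positive rate. The sample below is synthetic with assumed parameters; it merely stands in for observed entrainment rates.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(4)
# Hypothetical entrainment-rate sample (km^-1), assumed lognormal
true_sigma, true_median = 0.6, 1.2
rates = lognorm.rvs(true_sigma, scale=true_median, size=5000, random_state=rng)

# Maximum-likelihood lognormal fit with the location fixed at zero,
# since an entrainment rate cannot be negative
sigma, loc, scale = lognorm.fit(rates, floc=0)
fitted_median = scale                   # for a lognormal, scale = exp(mu) = median
```

In a parameterization context, fits like this would be repeated at each height and for each dry-air source distance, yielding height-dependent lognormal parameters.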
Representation of layer-counted proxy records as probability densities on error-free time axes
Boers, Niklas; Goswami, Bedartha; Ghil, Michael
2016-04-01
Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the
2003-01-01
The defects and electron densities in Ti50Al50, Ti50Al48Mn2 and Ti50Al48Cu2 alloys have been studied by positron lifetime measurements. The results show that the free electron density in the bulk of the binary TiAl alloy is lower than that of pure Ti or Al metal. The open volume of defects on the grain boundaries of the binary TiAl alloy is larger than that of a monovacancy in Al metal. Adding Mn and Cu to the Ti-rich TiAl alloy increases the free electron densities in the bulk and at the grain boundaries simultaneously, since a Mn or Cu atom occupying an Al site provides more free electrons to the metallic bonds than an Al atom does. It is also found that the free electron density at the grain boundary of Ti50Al48Cu2 is higher than that of the Ti50Al48Mn2 alloy, while the free electron density in the bulk of Ti50Al48Cu2 is lower than that of the Ti50Al48Mn2 alloy. The behavior of Mn and Cu atoms in TiAl alloys is discussed.
Probability density function and estimation for error of digitized map coordinates in GIS
童小华; 刘大杰
2004-01-01
Traditionally, it is widely accepted that measurement error obeys the normal distribution. However, this paper proposes that the error in digitized data, a major derived data source in GIS, does not obey the normal distribution but rather a p-norm distribution with a determinate parameter. Assuming that the error is random and has identical statistical properties, the probability density functions of the normal distribution, the Laplace distribution and the p-norm distribution are derived from the arithmetic-mean axiom, the median axiom and the p-median axiom respectively, which shows that the normal distribution is only one of these distributions and not necessarily the best one. Based on this idea, distribution fitness tests such as the skewness and kurtosis coefficient tests, the Pearson chi-square (χ²) test and the Kolmogorov test are conducted on digitized data. The results show that the error in map digitization obeys a p-norm distribution whose parameter is close to 1.60. A least p-norm estimation and the least-squares estimation of digitized data are further analyzed, showing that the least p-norm adjustment is better than the least-squares adjustment for digitized data processing in GIS.
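The p-norm family discussed in this abstract is available in SciPy as the generalized normal distribution, `scipy.stats.gennorm`, whose shape parameter plays the role of p (p = 2 gives the normal law, p = 1 the Laplace law). A minimal sketch, using synthetic errors rather than the paper's digitized-map data, recovers an assumed true shape parameter of 1.6 by maximum likelihood:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Draw synthetic "digitizing errors" from a p-norm law with p = 1.6,
# then recover the shape parameter by maximum-likelihood fitting.
p_true = 1.6
sample = stats.gennorm.rvs(p_true, loc=0.0, scale=1.0, size=20000, random_state=rng)

p_hat, loc_hat, scale_hat = stats.gennorm.fit(sample)
print(f"fitted p = {p_hat:.2f}")   # typically close to 1.6

# Sanity check: p = 2 reduces to a normal law (with scale 1/sqrt(2)).
x = 0.7
assert abs(stats.gennorm.pdf(x, 2) - stats.norm.pdf(x, scale=np.sqrt(0.5))) < 1e-12
```

The same `fit` call applied to real digitized-coordinate residuals would reproduce the paper's p ≈ 1.60 test.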
McKean, Cristina; Letts, Carolyn; Howard, David
2014-11-01
The effect of phonotactic probability (PP) and neighbourhood density (ND) on triggering word learning was examined in children with Language Impairment (3;04-6;09) and compared to Typically Developing children. Nonwords, varying PP and ND orthogonally, were presented in a story context and their learning tested using a referent identification task. Group comparisons with receptive vocabulary as a covariate found no group differences in overall scores or in the influence of PP or ND. Therefore, there was no evidence of atypical lexical or phonological processing. 'Convergent' PP/ND (High PP/High ND; Low PP/Low ND) was optimal for word learning in both groups. This bias interacted with vocabulary knowledge. 'Divergent' PP/ND word scores (High PP/Low ND; Low PP/High ND) were positively correlated with vocabulary so the 'divergence disadvantage' reduced as vocabulary knowledge grew; an interaction hypothesized to represent developmental changes in lexical-phonological processing linked to the emergence of phonological representations. PMID:24191951
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher-order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy depends on the number of particles or samples used. For this method to be applicable to real-case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining the higher-order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second-order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
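The PCA baseline described here reduces to an SVD of the centered particle cloud; ICA would replace the projection step with an algorithm such as `sklearn.decomposition.FastICA`. The 6-D "particles" below are a synthetic stand-in for PF samples, not orbit data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for particle-filter samples: 5000 particles in a 6-D state
# (position + velocity), correlated through a random linear map.
A = rng.normal(size=(6, 6))
particles = rng.normal(size=(5000, 6)) @ A.T

# PCA via SVD of the centered sample: keep the k directions carrying the
# most variance (second-order information only, as the abstract notes).
X = particles - particles.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
compressed = X @ Vt[:k].T                     # reduced coordinates, (5000, k)
reconstructed = compressed @ Vt[:k] + particles.mean(axis=0)

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by {k} components: {explained:.1%}")
```

Because PCA only matches second moments, any skewness or kurtosis in the particle cloud is lost in `compressed`, which is exactly the motivation for the ICA alternative.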
Probability density function modeling of dispersed two-phase turbulent flows
Pozorski, Jacek; Minier, Jean-Pierre
1999-01-01
This paper discusses stochastic approaches to dispersed two-phase flow modeling. A general probability density function (PDF) formalism is used since it provides a common and convenient framework to analyze the relations between different formulations. For two-phase flow PDF modeling, a key issue is the choice of the state variables. In a first formulation, they include only the position and velocity of the dispersed particles. The kinetic equation satisfied by the corresponding PDF is derived in a different way using tools from the theory of stochastic differential equations. The final expression is identical to an earlier proposal by Reeks [Phys. Fluids A 4, 1290 (1992)] obtained with a different method. As the kinetic equation involves the instantaneous fluid velocity sampled along the particle trajectories, it is unclosed. Another, more general, formulation is then presented, where the fluid velocity ``seen'' by the solid particles along their paths is added to the state variables. A diffusion model, where trajectories of the process follow a Langevin type of equation, is proposed for the time evolution equation of the fluid velocity ``seen'' and is discussed. A general PDF formulation that includes both fluid and particle variables, and from which both fluid and particle mean equations can be obtained, is then put forward.
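The "Langevin type of equation" for the fluid velocity seen along particle paths can be illustrated with its simplest member, an Ornstein-Uhlenbeck process integrated by Euler-Maruyama; the timescale and velocity scale below are illustrative values, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
T_L = 0.2        # illustrative Lagrangian timescale of the fluid "seen"
sigma = 1.0      # illustrative turbulent velocity scale
dt, n_steps, n_particles = 1e-3, 5000, 2000

# dU_s = -(U_s / T_L) dt + sqrt(2 sigma^2 / T_L) dW  (Ornstein-Uhlenbeck)
u = np.zeros(n_particles)
for _ in range(n_steps):
    u += -u / T_L * dt + np.sqrt(2 * sigma**2 / T_L * dt) * rng.normal(size=n_particles)

# The stationary one-point PDF of this model is Gaussian with variance sigma^2.
print(f"sample variance = {u.var():.2f} (target {sigma**2:.2f})")
```

Richer Langevin models for the fluid seen add mean-drift and crossing-trajectory corrections, but the diffusion structure of the trajectories is the same.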
Delarue, B. J.; Pope, S. B.
1998-02-01
A particle method applying the probability density function (PDF) approach to turbulent compressible reacting flows is presented. The method is applied to low and high Mach number reacting plane mixing layers. Good agreement is obtained between the model calculations and the available experimental data. The PDF equation is solved using a Lagrangian Monte Carlo method. To represent the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. Full closure of the joint PDF transport equation is made possible in this way without coupling to a finite-difference-type solver. The stochastic differential equations (SDE) that model the evolution of Lagrangian particle properties are based on existing models for the effects of compressibility on turbulence. The chemistry studied is the fast hydrogen-fluorine reaction. For the low Mach number runs, low heat release calculations are performed with equivalence ratios different from one. Heat release is then increased to study the effect of chemical reaction on the mixing layer growth rate. The subsonic results are compared with experimental data, and good overall agreement is obtained. The calculations are then performed at a higher Mach number, and the results are compared with the subsonic results. Our purpose in this paper is not to assess the performances of existing models for compressible or reacting flows. It is rather to present a new approach extending the domain of applicability of PDF methods to high-speed combustion.
Power probability density function control and performance assessment of a nuclear research reactor
Highlights: • The performance assessment of a static PDF control system is discussed. • The reactor PDF model is set up based on B-spline functions. • The coupled neutronic and thermal-hydraulic equations are solved concurrently by a reformed Hansen's method. • A principle of performance assessment is put forward for PDF control of the nuclear research reactor. - Abstract: One of the main issues in controlling a system is to keep track of the condition of the system function. The performance of the system should be inspected continuously to keep it in reliable working condition. In this study, the nuclear reactor is considered as a complicated system, and a principle of performance assessment is used for analyzing the performance of power probability density function (PDF) control of a nuclear research reactor. First, the model of the power PDF is set up; then the controller is designed to make the power PDF track a given shape, turning the reactor into a closed-loop system. The operating data of the closed-loop reactor are used to assess the control performance against the performance assessment criteria. The modeling, controller design and performance assessment of the power PDF are all applied to the control of the Tehran Research Reactor (TRR) power. In this paper, the performance assessment of the static PDF control system is discussed, the efficacy and efficiency of the proposed method are investigated, and finally its reliability is proven
Probability density functions of the average and difference intensities of Friedel opposites.
Shmueli, U; Flack, H D
2010-11-01
Trigonometric series for the average (A) and difference (D) intensities of Friedel opposites were carefully rederived and were normalized to minimize their dependence on sin(theta)/lambda. Probability density functions (hereafter p.d.f.s) of these series were then derived by the Fourier method [Shmueli, Weiss, Kiefer & Wilson (1984). Acta Cryst. A40, 651-660] and their expressions, which admit any chemical composition of the unit-cell contents, were obtained for the space group P1. Histograms of A and D were then calculated for an assumed random-structure model and for 3135 Friedel pairs of a published solved crystal structure, and were compared with the p.d.f.s after the latter were scaled up to the histograms. Good agreement was obtained for the random-structure model and a qualitative one for the published solved structure. The results indicate that the residual discrepancy is mainly due to the presumed statistical independence, in the p.d.f.'s characteristic function, of the contributions of the interatomic vectors. PMID:20962376
Micro-object motion tracking based on the probability hypothesis density particle tracker.
Shi, Chunmei; Zhao, Lingling; Wang, Junjie; Zhang, Chiping; Su, Xiaohong; Ma, Peijun
2016-04-01
Tracking micro-objects in the noisy microscopy image sequences is important for the analysis of dynamic processes in biological objects. In this paper, an automated tracking framework is proposed to extract the trajectories of micro-objects. This framework uses a probability hypothesis density particle filtering (PF-PHD) tracker to implement a recursive state estimation and trajectories association. In order to increase the efficiency of this approach, an elliptical target model is presented to describe the micro-objects using shape parameters instead of point-like targets which may cause inaccurate tracking. A novel likelihood function, not only covering the spatiotemporal distance but also dealing with geometric shape function based on the Mahalanobis norm, is proposed to improve the accuracy of particle weight in the update process of the PF-PHD tracker. Using this framework, a larger number of tracks are obtained. The experiments are performed on simulated data of microtubule movements and real mouse stem cells. We compare the PF-PHD tracker with the nearest neighbor method and the multiple hypothesis tracking method. Our PF-PHD tracker can simultaneously track hundreds of micro-objects in the microscopy image sequence. PMID:26084407
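The Mahalanobis-norm ingredient of the likelihood can be sketched as follows; the five-component elliptical state, the covariance values and the Gaussian form are illustrative assumptions, not the paper's exact likelihood function:

```python
import numpy as np

# Gaussian-style likelihood for an elliptical target, combining the spatial
# distance with shape terms through a squared Mahalanobis norm.
def elliptical_likelihood(z, state, cov):
    """z, state: (x, y, major axis, minor axis, angle); cov: 5x5 covariance."""
    d = np.asarray(z, float) - np.asarray(state, float)
    m2 = d @ np.linalg.solve(cov, d)       # squared Mahalanobis distance
    return np.exp(-0.5 * m2)

# Illustrative measurement z and predicted particle state x.
cov = np.diag([4.0, 4.0, 1.0, 1.0, 0.1])
z = (10.2, 5.1, 6.0, 3.2, 0.30)
x = (10.0, 5.0, 5.5, 3.0, 0.25)
print(f"likelihood = {elliptical_likelihood(z, x, cov):.3f}")
```

Weighting particles by such a likelihood penalizes mismatches in both position and ellipse geometry, which is the point of moving beyond point-like targets.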
Homogeneous clusters over India using probability density function of daily rainfall
Kulkarni, Ashwini
2016-04-01
The Indian landmass has been divided into homogeneous clusters by applying cluster analysis to the probability density function of a century-long time series of daily summer monsoon (June through September) rainfall at 357 grids over India, each of approximately 100 km × 100 km. The analysis gives five clusters over the Indian landmass; only cluster 5 happens to be a contiguous region, and all other clusters are dispersed, which confirms the erratic behavior of daily rainfall over India. The area-averaged seasonal rainfall over cluster 5 has a very strong relationship with Indian summer monsoon rainfall; also, the rainfall variability over this region is modulated by the most important mode of the climate system, i.e., the El Nino Southern Oscillation (ENSO). This cluster could be considered as representative of the entire Indian landmass for examining monsoon variability. The two-sample Kolmogorov-Smirnov test supports that the cumulative distribution functions of daily rainfall over cluster 5 and India as a whole do not differ significantly. The clustering algorithm is also applied to the two time epochs 1901-1975 and 1976-2010 to examine possible changes in the clusters in the recent warming period. The clusters are drastically different in the two periods: they are more dispersed in the recent period, implying a more erratic distribution of daily rainfall in recent decades.
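The two-sample Kolmogorov-Smirnov comparison reported above is a one-liner with `scipy.stats.ks_2samp`; the gamma-distributed samples below are synthetic stand-ins for the two daily-rainfall series, not the study's gridded data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic stand-ins for daily rainfall over "cluster 5" and all-India
# (gamma-distributed wet-day amounts drawn from the same law).
cluster5 = stats.gamma.rvs(a=0.8, scale=10.0, size=4000, random_state=rng)
all_india = stats.gamma.rvs(a=0.8, scale=10.0, size=4000, random_state=rng)

ks = stats.ks_2samp(cluster5, all_india)
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
# A large p-value means the hypothesis of a common CDF cannot be rejected,
# which is the sense in which cluster 5 "represents" the whole landmass.
```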
Petrakis, Manolis P
2012-01-01
Two-dimensional data often have autocovariance functions with elliptical equipotential contours, a property known as statistical anisotropy. The anisotropy parameters include the tilt of the ellipse (orientation angle) $\theta$ with respect to the coordinate system and the ratio $R$ of the principal correlation lengths. Sample estimates of anisotropy parameters are needed for defining suitable spatial models and for interpolation of incomplete data. The sampling joint probability density function characterizes the distribution of anisotropy statistics $(\hat{R}, \hat{\theta})$. By means of analytical calculations, we derive an explicit expression for the joint probability density function, which is valid for Gaussian, stationary and differentiable random fields. Based on it, we derive an approximation of the joint probability density function that is independent of the autocovariance function and provides conservative confidence regions for the sample-based estimates $(\hat{R}, \hat{\theta})$. We also formulat...
Di Crescenzo, Antonio
1998-01-01
For truncated birth-and-death processes with two absorbing or two reflecting boundaries, necessary and sufficient conditions on the transition rates are given such that the transition probabilities satisfy a suitable spatial symmetry relation. This allows one to obtain simple expressions for first-passage-time densities and for certain avoiding transition probabilities. An application to an M/M/1 queueing system with two finite sequential queueing rooms of equal sizes is finall...
Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.
2016-07-01
Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, takes into account neither the interaction between turbulence and chemical kinetics nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework for incorporating information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed-PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
Liang, Shiuan-Ni; Lan, Boon Leong
2015-11-01
The Newtonian and general-relativistic position and velocity probability densities, which are calculated from the same initial Gaussian ensemble of trajectories using the same system parameters, are compared for a low-speed weak-gravity bouncing ball system. The Newtonian approximation to the general-relativistic probability densities does not always break down rapidly if the trajectories in the ensembles are chaotic -- the rapid breakdown occurs only if the initial position and velocity standard deviations are sufficiently small. This result is in contrast to the previously studied single-trajectory case where the Newtonian approximation to a general-relativistic trajectory will always break down rapidly if the two trajectories are chaotic. Similar rapid breakdown of the Newtonian approximation to the general-relativistic probability densities should also occur for other low-speed weak-gravity chaotic systems since it is due to sensitivity to the small difference between the two dynamical theories at low speed and weak gravity. For the bouncing ball system, the breakdown of the Newtonian approximation is transient because the Newtonian and general-relativistic probability densities eventually converge to invariant densities which are close in agreement.
Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows
Minier, Jean-Pierre; Profeta, Christophe
2015-11-01
This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up by particle position and velocity Zp=(xp,Up) and is represented by its PDF p (t ;yp,Vp) which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables into the particle state vector, for example, the fluid velocity seen by particles Zp=(xp,Up,Us) , and, consequently, handles an extended PDF p (t ;yp,Vp,Vs) which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian characteristic of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to describe physical systems
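The claim that the kinetic PDF description is the marginal of the dynamic one can be checked numerically on a toy joint ensemble, with a correlated Gaussian standing in for (Up, Us): integrating the binned extended PDF over the fluid velocity seen reproduces the directly binned particle-velocity PDF.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy (U_p, U_s) ensemble: a correlated Gaussian stands in for the particle
# velocity and the fluid velocity "seen" (illustrative covariance values).
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=200000)

# Bin the extended PDF p(V_p, V_s), then integrate out V_s.
joint, vp_edges, vs_edges = np.histogram2d(samples[:, 0], samples[:, 1],
                                           bins=60, density=True)
kinetic = joint @ np.diff(vs_edges)          # marginal over the fluid seen

# Direct kinetic PDF: bin the particle velocity alone on the same grid.
direct, _ = np.histogram(samples[:, 0], bins=vp_edges, density=True)
print("max |marginal - direct| =", np.abs(kinetic - direct).max())
```

The two curves agree to floating-point precision; the article's point is that the closure problems arise when one tries to write a stand-alone evolution equation for this marginal.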
Rispens, Judith; Baker, Anne; Duinmeijer, Iris
2015-01-01
Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…
Anderson, D D; Victor Camillo
2003-01-01
Let $R$ be a commutative ring with 1. We define $R$ to be an annihilator-semigroup ring if $R$ has an annihilator-semigroup $S$, that is, $(S, \cdot)$ is a multiplicative subsemigroup of $(R, \cdot)$ with the property that for each $r \in R$ there exists a unique $s \in S$ with $0 : r = 0 : s$. In this paper we investigate annihilator-semigroups and annihilator-semigroup rings.
Papadopoulos, Vissarion; Kalogeris, Ioannis
2016-05-01
The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (PDEs) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, rewriting the FP equation conditioned on the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional PDE. Two Galerkin finite element schemes are proposed for the numerical solution of the resulting PDEs, namely a time-marching discontinuous Galerkin scheme and the Streamline-Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular of the SUPG over finite difference schemes such as the modified Lax-Wendroff, which is the most frequently used method for the solution of the GDEE, are illustrated with numerical examples and explored further.
Santos, André Duarte dos
2011-01-01
This thesis examines the stability and accuracy of three different methods to estimate Risk-Neutral Density functions (RNDs) using European options. These methods are the Double-Lognormal Function (DLN), the Smoothed Implied Volatility Smile (SML) and the Density Functional Based on Confluent Hypergeometric function (DFCH). These methodologies were used to obtain the RNDs from the option prices with the underlying USDBRL (price of US dollars in terms of Brazilian reals) for different maturiti...
Protein distance constraints predicted by neural networks and probability density functions
Lund, Ole; Frimand, Kenneth; Gorodkin, Jan; Bohr, Henrik; Bohr, Jakob; Hansen, Jan; Brunak, Søren
1997-01-01
We predict interatomic C-α distances by two independent data driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the latter method consists of a neural network prediction approach equipped with windows taki...... profiles. A threading method based on the predicted distances is presented. A homepage with software, predictions and data related to this paper is available at http://www.cbs.dtu.dk/services/CPHmodels/...
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in lowering and raising operators. The lowering and raising operators can be obtained using the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl-Teller potential. The wave function and the probability density are plotted using the Delphi 7.0 programming language. Finally, the expectation value of a quantum-mechanical operator can be calculated analytically using the integral form or the probability density graph produced by the program.
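The "integral form" route to expectation values can be sketched numerically: take an illustrative sech²-shaped profile (the shape of a low-lying modified Poschl-Teller bound state), normalize it on a grid, and integrate moments against the probability density. This is Python standing in for the paper's Delphi code, and the profile and width are illustrative assumptions:

```python
import numpy as np

alpha = 1.0
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = 1.0 / np.cosh(alpha * x) ** 2       # illustrative bound-state profile

# Probability density normalized so that it integrates to 1 on the grid.
density = psi**2 / (np.sum(psi**2) * dx)
mean_x = np.sum(x * density) * dx         # <x>, zero by symmetry
mean_x2 = np.sum(x**2 * density) * dx     # <x^2> via the integral form
print(f"<x> = {mean_x:.4f}, <x^2> = {mean_x2:.4f}")
```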
Mills, A.P. Jr. [Bell Labs. Murray Hill, NJ (United States); West, R.N.; Hyodo, Toshio
1997-03-01
We discuss the relative merits of Anger cameras and Bismuth Germanate mosaic counters for measuring the angular correlation of positron annihilation radiation at a facility such as the proposed Positron Factory at Takasaki. The two possibilities appear equally cost effective at this time. (author)
Sato, Tatsuhiko; Hamada, Nobuyuki
2014-01-01
We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric...
Criticality of the net-baryon number probability distribution at finite density
Morita, Kenji; Redlich, Krzysztof
2014-01-01
We compute the probability distribution $P(N)$ of the net-baryon number at finite temperature and quark-chemical potential, $\\mu$, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For $\\mu/T<1$, the model exhibits the chiral crossover transition which belongs to the universality class of the $O(4)$ spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, $P(N)$. By considering ratios of $P(N)$ to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to $O(4)$ criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine $O(4)$ criticality in the context of binomial and negative-binomial distributions for the net proton number.
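The Skellam baseline used above (the difference of two Poisson variables, which is what the net-baryon number follows in the absence of criticality) can be matched to any prescribed mean and variance and then used in the ratio P(N)/P_Skellam(N); `scipy.stats.skellam` provides it directly. The moments below are illustrative, not STAR data:

```python
from scipy import stats

def skellam_from_moments(mean, var):
    """Skellam distribution with prescribed mean and variance:
    N = N_B - N_Bbar, with mu1 - mu2 = mean and mu1 + mu2 = var."""
    mu1, mu2 = (var + mean) / 2.0, (var - mean) / 2.0
    return stats.skellam(mu1, mu2)

# Illustrative moments: mean net-baryon number 2, variance 8.
ref = skellam_from_moments(2.0, 8.0)
for n in range(-2, 5):
    print(n, ref.pmf(n))

# Criticality searches inspect the ratio P(N) / P_Skellam(N); by
# construction the ratio is 1 when P(N) itself is Skellam.
```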
A summary of transition probabilities for atomic absorption lines formed in low-density clouds
Morton, D. C.; Smith, W. H.
1973-01-01
A table of wavelengths, statistical weights, and excitation energies is given for 944 atomic spectral lines in 221 multiplets whose lower energy levels lie below 0.275 eV. Oscillator strengths were adopted for 635 lines in 155 multiplets from the available experimental and theoretical determinations. Radiation damping constants also were derived for most of these lines. This table contains the lines most likely to be observed in absorption in interstellar clouds, circumstellar shells, and the clouds in the direction of quasars where neither the particle density nor the radiation density is high enough to populate the higher levels. All ions of all elements from hydrogen to zinc are included which have resonance lines longward of 912 Å, although a number of weaker lines of neutrals and first ions have been omitted.
Calderero Patino, Felipe; Marqués Acosta, Fernando; Ortega, Antonio
2009-01-01
Information theoretical region merging techniques have been shown to provide a state-of-the-art unified solution for natural and texture image segmentation. Here, we study how the segmentation results can be further improved by a more accurate estimation of the statistical model characterizing the regions. Concretely, we explore four density estimators that can be used for pdf or joint pdf estimation. The first three are based on different quantization strategies: a general ...
Probability density functions for the variable solar wind near the solar cycle minimum
Vörös; Leitner, M.; Narita, Y.; Consolini, G.; Kovács, P.; Tóth, A.; Lichtenberger, J.
2015-01-01
Unconditional and conditional statistics are used for studying the histograms of magnetic field multi-scale fluctuations in the solar wind near the solar cycle minimum in 2008. The unconditional statistics involve the magnetic data during the whole year 2008. The conditional statistics involve the magnetic field time series split into concatenated subsets of data according to a threshold in dynamic pressure. The threshold separates fast-stream leading-edge compressional and trailing-edge uncompressional fluctuations. The histograms obtained from these data sets are associated with both large-scale (B) and small-scale (δB) magnetic fluctuations, the latter corresponding to time-delayed differences. It is shown here that, by keeping flexibility but avoiding unnecessary redundancy in modeling, the histograms can be effectively described by a limited set of theoretical probability distribution functions (PDFs), such as the normal, log-normal, kappa and log-kappa functions. In a statistical sense the...
Photon correlations in positron annihilation
Gauthier, Isabelle; Hawton, Margaret
2010-01-01
The two-photon positron annihilation density matrix is found to separate into a diagonal center of energy factor implying maximally entangled momenta, and a relative factor describing decay. For unknown positron injection time, the distribution of the difference in photon arrival times is a double exponential at the para-Ps decay rate, consistent with experiment (V. D. Irby, Meas. Sci. Technol. 15, 1799 (2004)).
Chowdhury, Shakhawat
2013-05-01
The evaluation of the status of a municipal drinking water treatment plant (WTP) is important. The evaluation depends on several factors, including human health risks from disinfection by-products (R), disinfection performance (D), and cost (C) of water production and distribution. The Dempster-Shafer theory (DST) of evidence can combine the individual status with respect to R, D, and C to generate a new indicator, from which the overall status of a WTP can be evaluated. In the DST, the ranges of different factors affecting the overall status are divided into several segments. The basic probability assignments (BPA) for each segment of these factors are provided by multiple experts, which are then combined to obtain the overall status. In assigning the BPA, the experts use their individual judgments, which can impart subjective biases in the overall evaluation. In this research, an approach has been introduced to avoid the assignment of subjective BPA. The factors contributing to the overall status were characterized using probability density functions (PDF). The cumulative probabilities for different segments of these factors were determined from the cumulative density function, which were then assigned as the BPA for these factors. A case study is presented to demonstrate the application of PDF in DST to evaluate a WTP, leading to the selection of the required level of upgrading for the WTP. PMID:22941202
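The evidence-combination step described in this abstract is Dempster's rule. A minimal sketch follows; the two-grade frame ("good"/"poor") and the mass values are invented for illustration, not taken from the WTP case study:

```python
def dempster_combine(m1, m2):
    # Dempster's rule: combine two basic probability assignments (BPAs),
    # each given as {frozenset_of_hypotheses: mass}. Mass assigned to
    # disjoint (conflicting) focal elements is discarded and the rest
    # is renormalized.
    combined = {}
    conflict = 0.0
    for focal1, mass1 in m1.items():
        for focal2, mass2 in m2.items():
            common = focal1 & focal2
            if common:
                combined[common] = combined.get(common, 0.0) + mass1 * mass2
            else:
                conflict += mass1 * mass2
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {focal: mass / (1.0 - conflict) for focal, mass in combined.items()}
```

In the paper's scheme the input masses would come not from expert judgment but from cumulative probabilities of the R, D, and C segments; the combination rule itself is unchanged.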
On the thresholds, probability densities, and critical exponents of Bak-Sneppen-like models
Garcia, Guilherme J. M.; Dickman, Ronald
2004-10-01
We report a simple method to accurately determine the threshold and the exponent ν of the Bak-Sneppen (BS) model and also investigate the BS universality class. For the random-neighbor version of the BS model, we find the threshold x* = 0.33332(3), in agreement with the exact result x* = 1/3 given by mean-field theory. For the one-dimensional original model, we find x* = 0.6672(2), in good agreement with the results reported in the literature; for the anisotropic BS model we obtain x* = 0.7240(1). We study the finite-size effect x*(L) − x*(L→∞) ∝ L^(−ν), observed in a system with L sites, and find ν = 1.00(1) for the random-neighbor version, ν = 1.40(1) for the original model, and ν = 1.58(1) for the anisotropic case. Finally, we discuss the effect of defining the extremal site as the one which minimizes a general function f(x), instead of simply f(x) = x as in the original updating rule. We emphasize that models with extremal dynamics have singular stationary probability distributions p(x). Our simulations indicate the existence of two symmetry-based universality classes.
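The random-neighbor Bak-Sneppen dynamics referenced above is a few lines of code. This toy simulation (system size, step count, and seed are arbitrary choices, not the authors' parameters) illustrates how the stationary fitness distribution empties out below the threshold x* ≈ 1/3:

```python
import random

def random_neighbor_bs(n_sites=200, steps=20000, seed=1):
    # Random-neighbor Bak-Sneppen: at each step the site with the
    # globally minimal fitness and one other randomly chosen site both
    # receive fresh U(0,1) fitness values.
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_sites)]
    for _ in range(steps):
        i_min = min(range(n_sites), key=fitness.__getitem__)
        fitness[i_min] = rng.random()
        fitness[rng.randrange(n_sites)] = rng.random()
    return fitness
```

After relaxation, almost all fitness values lie above the mean-field threshold 1/3; estimating where the stationary density vanishes is the basis of the threshold measurement described in the abstract.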
Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei
2016-03-01
Rail irregularity is one of the main sources of train-bridge random vibration. A new random vibration theory for coupled train-bridge systems is proposed in this paper. First, the number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of the rail irregularity power spectrum density was adopted to determine the representative points of spatial frequencies and phases to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with a slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Finally, the Newmark-β integration method and the double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of the responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge under excitation of random rail irregularity. The results showed that compared to the Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.
Numerical values of charged-particle thermonuclear reaction rates for nuclei in the A=14 to 40 region are tabulated. The results are obtained using a method, based on Monte Carlo techniques, that has been described in the preceding paper of this issue (Paper I). We present a low rate, median rate and high rate which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. The meaning of these quantities is in general different from the commonly reported, but statistically meaningless expressions, 'lower limit', 'nominal value' and 'upper limit' of the total reaction rate. In addition, we approximate the Monte Carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the lognormal parameters μ and σ at each temperature. We also provide a quantitative measure (Anderson-Darling test statistic) for the reliability of the lognormal approximation. The user can implement the approximate lognormal reaction rate probability density functions directly in a stellar model code for studies of stellar energy generation and nucleosynthesis. For each reaction, the Monte Carlo reaction rate probability density functions, together with their lognormal approximations, are displayed graphically for selected temperatures in order to provide a visual impression. Our new reaction rates are appropriate for bare nuclei in the laboratory. The nuclear physics input used to derive our reaction rates is presented in the subsequent paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
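The lognormal parameterization described in this record can be recovered directly from the tabulated quantiles. A minimal sketch (variable names are ours): since ln X is normal with mean μ and standard deviation σ, the 0.16, 0.50 and 0.84 quantiles of X sit at exp(μ−σ), exp(μ), and exp(μ+σ).

```python
import math

def lognormal_from_quantiles(low, median, high):
    # low, median, high = 0.16, 0.50, 0.84 quantiles of the rate.
    # For a lognormal these are exp(mu - sigma), exp(mu), exp(mu + sigma).
    mu = math.log(median)
    sigma = 0.5 * (math.log(high) - math.log(low))
    return mu, sigma

def lognormal_pdf(x, mu, sigma):
    # Lognormal probability density evaluated at x > 0
    return (math.exp(-((math.log(x) - mu) ** 2) / (2.0 * sigma ** 2))
            / (x * sigma * math.sqrt(2.0 * math.pi)))
```

A stellar-model code can evaluate `lognormal_pdf` with the tabulated μ and σ at each temperature instead of carrying the full Monte Carlo samples, which is exactly the usage the abstract recommends (subject to the tabulated Anderson-Darling check on the approximation quality).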
WASP-17b: AN ULTRA-LOW DENSITY PLANET IN A PROBABLE RETROGRADE ORBIT
We report the discovery of the transiting giant planet WASP-17b, the least-dense planet currently known. It is 1.6 Saturn masses, but 1.5-2 Jupiter radii, giving a density of 6%-14% that of Jupiter. WASP-17b is in a 3.7 day orbit around a sub-solar metallicity, V = 11.6, F6 star. Preliminary detection of the Rossiter-McLaughlin effect suggests that WASP-17b is in a retrograde orbit (λ ∼ −150°), indicative of a violent history involving planet-planet or star-planet scattering. WASP-17b's bloated radius could be due to tidal heating resulting from recent or ongoing tidal circularization of an eccentric orbit, such as the highly eccentric orbits that typically result from scattering interactions. It will thus be important to determine more precisely the current orbital eccentricity by further high-precision radial velocity measurements or by timing the secondary eclipse, both to reduce the uncertainty on the planet's radius and to test tidal-heating models. Owing to its low surface gravity, WASP-17b's atmosphere has the largest scale height of any known planet, making it a good target for transmission spectroscopy.
WASP-17b: an ultra-low density planet in a probable retrograde orbit
Anderson, D R; Gillon, M; Triaud, A H M J; Smalley, B; Hebb, L; Cameron, A Collier; Maxted, P F L; Queloz, D; West, R G; Bentley, S J; Enoch, B; Horne, K; Lister, T A; Mayor, M; Parley, N R; Pepe, F; Pollacco, D; Ségransan, D; Udry, S; Wilson, D M
2009-01-01
We report the discovery of the transiting giant planet WASP-17b, the least-dense planet currently known. It is 1.6 Saturn masses but 1.5-2 Jupiter radii, giving a density of 6-14 per cent that of Jupiter. WASP-17b is in a 3.7-day orbit around a sub-solar metallicity, V = 11.6, F6 star. Preliminary detection of the Rossiter-McLaughlin effect suggests that WASP-17b is in a retrograde orbit (lambda ~ -150 deg), indicative of a violent history involving planet-planet or planet-star scattering. WASP-17b's bloated radius could be due to tidal heating resulting from recent or ongoing tidal circularisation of an eccentric orbit, such as the highly eccentric orbits that typically result from scattering interactions. It will thus be important to determine more precisely the current orbital eccentricity by further high-precision radial velocity measurements or by timing the secondary eclipse, both to reduce the uncertainty on the planet's radius and to test tidal-heating models. Owing to its low surface gravity, WASP-17...
Lopes Cardozo, David; Holdsworth, Peter C. W.
2016-04-01
The magnetization probability density in $d = 2$ and 3 dimensional Ising models in slab geometry of volume $L_\parallel^{d-1} \times L_\perp$ is computed through Monte Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect ratio $\rho = L_\perp / L_\parallel$ and boundary conditions are discussed. In the limiting case $\rho \to 0$ of a macroscopically large slab ($L_\parallel \gg L_\perp$) the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software
Annihilation of Quantum Magnetic Fluxes
Gonzalez, W. D.
After introducing the concepts associated with the Aharonov-Bohm effect and with the existence of a quantum of magnetic flux (QMF), we briefly discuss the Ginzburg-Landau theory that explains its origin and fundamental consequences. Relevant observations of QMFs obtained in the laboratory using superconducting systems (vortices) are also mentioned. Next, we describe processes related to the interaction of QMFs with opposite directions in terms of the gauge-field geometry related to the vector potential. Then, we discuss the use of a Lagrangian density for a scalar field theory involving radiation in order to describe the annihilation of QMFs, claimed to be responsible for the emission of photons with energies corresponding to those of the annihilated magnetic fields. Finally, a possible application of these concepts to the observed variable dynamics of neutron stars is briefly mentioned.
Dongfang, Wang; Baojun, Pang; Weike, Xiao; Keke, Peng
2016-01-01
The geostationary (GEO) ring is a valuable orbital region contaminated with an alarming number of space debris. Due to its particular orbital characteristics, the spatial distribution of GEO objects is very sensitive to local longitude regions. Therefore the local longitude distribution of these objects in the Earth-centered Earth-fixed (ECEF) coordinate system is much more stable and useful in practical applications than it is in the J2000 inertial coordinate system. In previous studies of space debris environment models, the spatial density is calculated in the J2000 coordinate system, which makes it impossible to identify the spatial distribution in different local longitude regions. For GEO objects, this may introduce significant inaccuracy. In order to describe the spatial distribution of GEO objects in different local longitude regions, this paper introduces a new method which can provide the spatial density distribution in the ECEF coordinate system. Based on 2014/12/10 two-line element (TLE) data provided by the US Space Surveillance Network, the spatial densities of cataloged GEO objects are given in the ECEF coordinate system. Combined with the previous studies of "Cube" collision probability evaluation, the GEO region collision probability in the ECEF coordinate system is also given here. The examination reveals that the GEO space debris distribution is not uniform in longitude; it is relatively concentrated about the geopotential wells. The method given in this paper is also suitable for smaller debris in the GEO region. Currently the longitude-dependent analysis is not represented in GEO debris models such as ORDEM or MASTER. Based on our method, a future version of the space debris environment engineering model (SDEEM) developed by China will present a longitude-dependent GEO space debris environment description in the ECEF coordinate system.
Dey, Santanu
2012-01-01
We consider the problem of efficient simulation estimation of the density function at the tails, and the probability of large deviations for a sum of independent, identically distributed, light-tailed and non-lattice random vectors. The latter problem besides being of independent interest, also forms a building block for more complex rare event problems that arise, for instance, in queuing and financial credit risk modeling. It has been extensively studied in literature where state independent exponential twisting based importance sampling has been shown to be asymptotically efficient and a more nuanced state dependent exponential twisting has been shown to have a stronger bounded relative error property. We exploit the saddle-point based representations that exist for these rare quantities, which rely on inverting the characteristic functions of the underlying random vectors. We note that these representations reduce the rare event estimation problem to evaluating certain integrals, which may via importance ...
Fangyuan Nan
2007-01-01
Full Text Available Recently an important and interesting nonlinear generalized likelihood ratio (GLR) detector emerged in functional magnetic resonance imaging (fMRI) data processing. However, the study of that detector is incomplete: the probability density function (pdf) of the test statistic was drawn from numerical simulations without much theoretical support and is, therefore, not firmly grounded. This correspondence presents a more accurate (asymptotic) closed form of the pdf by resorting to a non-central Wishart matrix and by asymptotic expansion of some integrals. It is then confirmed theoretically that the detector does possess the constant false alarm rate (CFAR) property under some practical regimes of signal-to-noise ratio (SNR) for finite samples, and the correct threshold selection method is given, which is very important for real fMRI data processing.
Kitayabu, Toru; Hagiwara, Mao; Ishikawa, Hiroyasu; Shirai, Hiroshi
A novel delta-sigma modulator that employs a non-uniform quantizer whose spacing is adjusted by reference to the statistical properties of the input signal is proposed. The proposed delta-sigma modulator has less quantization noise compared to one that uses a uniform quantizer with the same number of output values. With respect to the quantizer on its own, Lloyd proposed a non-uniform quantizer that is best for minimizing the average quantization noise power. The applicable condition of the method is that the statistical properties of the input signal, the probability density, are given. However, the procedure cannot be directly applied to the quantizer in the delta-sigma modulator because it jeopardizes the modulator's stability. In this paper, a procedure is proposed that determines the spacing of the quantizer while avoiding instability. Simulation results show that the proposed method reduces quantization noise by up to 3.8 dB and 2.8 dB with input signals having a PAPR of 16 dB and 12 dB, respectively, compared to one employing a uniform quantizer. Two alternative types of probability density function (PDF) are used in the proposed method for the calculation of the output values. One is the PDF of the input signal to the delta-sigma modulator and the other is an approximated PDF of the input signal to the quantizer inside the delta-sigma modulator. Both approaches are evaluated to find that the latter gives lower quantization noise.
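The Lloyd design the abstract builds on can be sketched from empirical samples. This is the classic Lloyd iteration only, not the paper's stability-aware modification for the delta-sigma loop:

```python
def lloyd_quantizer(samples, n_levels, iters=100):
    # Classic Lloyd iteration on empirical samples: alternate between
    # (1) partitioning samples by their nearest quantizer level and
    # (2) replacing each level by the centroid (mean) of its cell.
    lo, hi = min(samples), max(samples)
    levels = [lo + (hi - lo) * (i + 0.5) / n_levels for i in range(n_levels)]
    for _ in range(iters):
        cells = [[] for _ in range(n_levels)]
        for x in samples:
            nearest = min(range(n_levels), key=lambda i: abs(x - levels[i]))
            cells[nearest].append(x)
        # empty cells keep their previous level
        levels = [sum(c) / len(c) if c else levels[i]
                  for i, c in enumerate(cells)]
    return levels
```

For a uniform input density this converges to uniformly spaced levels; for a peaked density (the PAPR-limited signals above) the levels crowd where the probability mass is, which is the source of the quoted 2.8-3.8 dB noise reduction.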
Precipitation Study in Inconel 625 Alloy by Positron Annihilation Spectroscopy
M. Ahmad; W. Ahmad; M. A. Shaikh; Mahmud Ahmad; M. U. Rajput
2003-01-01
Precipitation in Inconel 625 alloy has been studied by positron annihilation spectroscopy and electron microscopy. The observed dependence of annihilation characteristics on aging time is attributed to the change of the positron state due to the increase and decrease of the density and size of the γ″ precipitates. Hardness measurements and lifetime measurements are in good agreement.
Production bias and cluster annihilation: Why necessary?
Singh, B.N.; Trinkaus, H.; Woo, C.H.
1994-01-01
the primary cluster density is high. Therefore, a sustained high swelling rate driven by production bias must involve the annihilation of primary clusters at sinks. A number of experimental observations which are unexplainable in terms of the conventional dislocation bias for monointerstitials is...
Contribution from S and P waves in p̄p annihilation at rest
Bendiscioli, G; Fontana, A; Montagna, P; Rotondi, A; Salvini, P; Bertin, A; Bruschi, M; Capponi, M; De Castro, S; Donà, R; Galli, D; Giacobbe, B; Marconi, U; Massa, I; Piccinini, M; Cesari, N S; Spighi, R; Vecchi, S; Vagnoni, V M; Villa, M; Vitale, A; Zoccoli, A; Bianconi, A; Bonomi, G; Lodi-Rizzini, E; Venturelli, L; Zenoni, A; Cicalò, C; De Falco, A; Masoni, A; Puddu, G; Serci, S; Usai, G L; Gorchakov, O E; Prakhov, S N; Rozhdestvensky, A M; Tretyak, V I; Poli, M; Gianotti, P; Guaraldo, C; Lanaro, A; Lucherini, V; Petrascu, C; Kudryavtsev, A E; Balestra, F; Bussa, M P; Busso, L; Cerello, P G; Denisov, O Yu; Ferrero, L; Grasso, A; Maggiora, A; Panzarasa, A; Panzieri, D; Tosello, F; Botta, E; Bressani, Tullio; Calvo, D; Costa, S; D'Isep, D; Feliciello, A; Filippi, A; Marcello, S; Mirfakhraee, N; Agnello, M; Iazzi, F; Minetti, B; Tessaro, S
2001-01-01
The annihilation frequencies of 19 p̄p annihilation reactions at rest obtained at different target densities are analysed in order to determine the values of the P-wave annihilation percentage at each target density and the average hadronic branching ratios from P- and S-states. Both the assumption of a linear dependence of the annihilation frequencies on the P-wave annihilation percentage of the protonium state and the approach with the enhancement factors of Batty (1989) are considered. Furthermore the cases of incompatible measurements are discussed. (55 refs).
Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo
2016-07-01
Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
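The seasonal-envelope step described above can be illustrated with the two-parameter Weibull density. The shape and scale values in the test, and the normalization of the peak to a "maximum potential," are illustrative placeholders, not the paper's fitted parameters:

```python
import math

def weibull_pdf(t, shape, scale):
    # Two-parameter Weibull density:
    # f(t) = (k/c) * (t/c)^(k-1) * exp(-(t/c)^k), for t >= 0
    if t < 0:
        return 0.0
    z = t / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

def pollen_potential(day, peak_potential, shape, scale):
    # Scale the density so its mode equals the season's maximum
    # airborne-pollen potential (requires shape > 1 for a mode > 0)
    mode = scale * ((shape - 1.0) / shape) ** (1.0 / shape)
    return peak_potential * weibull_pdf(day, shape, scale) / weibull_pdf(mode, shape, scale)
```

In the integrated scheme, a curve of this form caps the seasonal pollen potential, and the multiple-regression weather model then modulates the daily concentration beneath that cap.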
Tanaka, Taku; Ciffroy, Philippe; Stenberg, Kristofer; Capri, Ettore
2010-11-01
In the framework of environmental multimedia modeling studies dedicated to environmental and health risk assessments of chemicals, the bioconcentration factor (BCF) is a parameter commonly used, especially for fish. As for neutral lipophilic substances, it is assumed that BCF is independent of exposure levels of the substances. However, for metals some studies found the inverse relationship between BCF values and aquatic exposure concentrations for various aquatic species and metals, and also high variability in BCF data. To deal with the factors determining BCF for metals, we conducted regression analyses to evaluate the inverse relationships and introduce the concept of probability density function (PDF) for Cd, Cu, Zn, Pb, and As. In the present study, for building the regression model and derive the PDF of fish BCF, two statistical approaches are applied: ordinary regression analysis to estimate a regression model that does not consider the variation in data across different fish family groups; and hierarchical Bayesian regression analysis to estimate fish group-specific regression models. The results show that the BCF ranges and PDFs estimated for metals by both statistical approaches have less uncertainty than the variation of collected BCF data (the uncertainty is reduced by 9%-61%), and thus such PDFs proved to be useful to obtain accurate model predictions for environmental and health risk assessment concerning metals. PMID:20886641
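The inverse BCF-exposure relationship in the ordinary (non-hierarchical) case is typically fit as a straight line in log-log space. A minimal ordinary-least-squares sketch follows; the synthetic numbers in the test are illustrative, not measured BCF data:

```python
import math

def ols_fit(xs, ys):
    # Ordinary least squares for y = a + b * x
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def bcf_regression(concentrations, bcfs):
    # Fit log10(BCF) = a + b * log10(C); b < 0 captures the inverse
    # relationship between BCF and aquatic exposure concentration.
    xs = [math.log10(c) for c in concentrations]
    ys = [math.log10(v) for v in bcfs]
    return ols_fit(xs, ys)
```

The hierarchical Bayesian variant in the abstract replaces the single (a, b) pair with fish-family-specific parameters drawn from shared priors, which is what shrinks the predictive uncertainty relative to the raw data spread.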
Caliandro, G.A.; Torres, D.F.; Rea, N., E-mail: andrea.caliandro@ieec.uab.es, E-mail: dtorres@aliga.ieec.uab.es, E-mail: rea@ieec.uab.es [Institute of Space Sciences (IEEC-CSIC), Campus UAB, Fac. de Ciències, Torre C5, parell, 2a planta 08193 Barcelona (Spain)
2013-07-01
Here, we present a new method to evaluate the expectation value of the power spectrum of a time series. A statistical approach is adopted to define the method. After its demonstration, it is validated showing that it leads to the known properties of the power spectrum when the time series contains a periodic signal. The approach is also validated in general with numerical simulations. The method puts into evidence the importance that is played by the probability density function of the phases associated to each time stamp for a given frequency, and how this distribution can be perturbed by the uncertainties of the parameters in the pulsar ephemeris. We applied this method to solve the power spectrum in the case the first derivative of the pulsar frequency is unknown and not negligible. We also undertook the study of the most general case of a blind search, in which both the frequency and its first derivative are uncertain. We found the analytical solutions of the above cases invoking the sum of Fresnel's integrals squared.
Afsar, Ozgur; Tirnakli, Ugur
2010-10-01
We investigate the probability density of rescaled sums of iterates of the sine-circle map within the quasiperiodic route to chaos. When the dynamical system is strongly mixing (i.e., ergodic), the standard central limit theorem (CLT) is expected to be valid, but at the edge of chaos, where iterates have strong correlations, the standard CLT is not necessarily valid anymore. We discuss here the main characteristics of the probability densities for the sums of iterates of deterministic dynamical systems which exhibit a quasiperiodic route to chaos. At the golden-mean onset of chaos for the sine-circle map, we numerically verify that the probability density appears to converge to a q-Gaussian with q < 1 as the golden mean value is approached.
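The rescaled sums studied above can be generated in a few lines. The map is θ_{n+1} = θ_n + Ω − (K/2π) sin(2π θ_n) (mod 1); the Ω value in the test is a commonly quoted approximation to the golden-mean bare winding number at K = 1 and should be treated as illustrative, as should the ensemble sizes:

```python
import math

def sine_circle_orbit(theta0, omega, k, n):
    # Iterate theta -> theta + omega - (k / 2*pi) * sin(2*pi*theta) mod 1
    thetas = []
    theta = theta0
    for _ in range(n):
        theta = (theta + omega
                 - (k / (2.0 * math.pi)) * math.sin(2.0 * math.pi * theta)) % 1.0
        thetas.append(theta)
    return thetas

def centered_sums(theta0s, omega, k, n):
    # One centered sum y = sum_i (theta_i - <theta>) per initial
    # condition, with <theta> the ensemble-and-time average; a histogram
    # of many such sums approximates the probability density in the text.
    orbits = [sine_circle_orbit(t0, omega, k, n) for t0 in theta0s]
    total = sum(sum(o) for o in orbits)
    mean = total / (len(orbits) * n)
    return [sum(t - mean for t in o) for o in orbits]
```

At the edge of chaos the iterates are strongly correlated, so the histogram of these sums need not approach a Gaussian; that departure is the q-Gaussian behavior reported in the abstract.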
邱翔; 黄永祥; 周全; 孙超
2014-01-01
In this paper, we apply a scaling analysis of the maximum of the probability density function (pdf) of velocity increments, i.e., $p_{\max}(\tau) = \max_{\delta_\tau u} p(\delta_\tau u)$, for a velocity field of turbulent Rayleigh-Bénard convection obtained at the Taylor-microscale Reynolds number $Re_\lambda \approx 60$. The scaling exponent $\alpha$ is comparable with that of the first-order velocity structure function, $\zeta(1)$, in which the large-scale effect might be constrained, showing the background fluctuations of the velocity field. It is found that the integral time $T(x/D)$ scales as $T(x/D) \propto (x/D)^{-\beta}$, with a scaling exponent $\beta = 0.25 \pm 0.01$, suggesting the large-scale inhomogeneity of the flow. Moreover, the pdf scaling exponent $\alpha(x,z)$ is strongly inhomogeneous in the $x$ (horizontal) direction. The vertical-direction-averaged pdf scaling exponent $\bar{\alpha}(x)$ obeys a logarithmic law with respect to $x$, the distance from the cell sidewall, with a scaling exponent $\xi \approx 0.22$ within the velocity boundary layer and $\xi \approx 0.28$ near the cell sidewall. In the cell's central region, $\alpha(x,z)$ fluctuates around 0.37, which agrees well with $\zeta(1)$ obtained in high-Reynolds-number turbulent flows, implying the same intermittent correction. Moreover, the length of the inertial range represented in decades, $\bar{T}_I(x)$, is found to increase linearly with the wall distance $x$ with an exponent $0.65 \pm 0.05$.
Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners—the HRRT and Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage in performing crystal-layer-dependent resolution modeling. The contrast improvement by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeter diameter and the background was observed in LOR-PDF reconstruction than in other methods. (paper)
Antineutron-nucleus annihilation
Botta, E
2001-01-01
The antineutron-nucleus annihilation process has been studied by the OBELIX experiment at the CERN Low Energy Antiproton Ring (LEAR) in the (50-400) MeV/c projectile momentum range on C, Al, Cu, Ag, Sn, and Pb nuclear targets. A systematic survey of the annihilation cross-section, σ_a(A, p_n̄), has been performed, obtaining information on its dependence on the target mass number and on the incoming antineutron momentum. For the first time, the mass number dependence of the (inclusive) final-state composition of the process has been analyzed. Production of the ρ vector meson has also been examined. (13 refs).
SUSY dark matter annihilation in the Galactic halo
Berezinsky, Veniamin; Erohenko, Yury
2015-01-01
Neutralino annihilation in the Galactic halo is the most definite observational signature proposed for indirect registration of the SUSY Dark Matter (DM) candidate particles. The corresponding annihilation signal (in the form of gamma-rays, positrons and antiprotons) may be boosted by one to three orders of magnitude due to the clustering of cold DM particles into small-scale and very dense self-gravitating clumps. We discuss the formation of these clumps from the initial density perturbations and their subsequent fate in the Galactic halo. Only a small fraction of these clumps, ~0.1%, in each logarithmic mass interval Δlog M ~ 1, survives the stage of hierarchical clustering. We calculate the survival probability of the remnants of dark matter clumps in the Galaxy by modelling the tidal destruction of the small-scale clumps by the Galactic disk and stars. It is demonstrated that a substantial fraction of clump remnants may survive the tidal destruction during the lifetime of the Ga...
Positron annihilation microprobe
Canter, K.F. [Brandeis Univ., Waltham, MA (United States)]
1997-03-01
Advances in positron annihilation microprobe development are reviewed. The present resolution achievable is 3 µm. The ultimate resolution is expected to be 0.1 µm, which will enable the positron microprobe to be a valuable tool in the development of 0.1 µm scale electronic devices in the future. (author)
Andersen, Allan Bødskov; Wagener, Tom
2002-01-01
Following Shimko (1993), a large amount of research has evolved around the problem of extracting risk neutral densities from options prices by interpolating the Black-Scholes implied volatility smile. Some of the methods recently proposed use variants of the cubic spline. These methods have the property of producing non-differentiable probability densities. We argue that this is an undesirable feature and suggest circumventing the problem by fitting a smoothing spline of higher order polynom...
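The extraction step rests on the Breeden-Litzenberger identity q(K) = e^{rT} ∂²C/∂K². A minimal sketch with a hypothetical smile and illustrative parameter values, in which a degree-5 smoothing spline stands in for the "higher order" fit (this is not the authors' exact scheme):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

S0, r, T = 100.0, 0.01, 0.5   # illustrative spot, rate, maturity

# Hypothetical observed smile: sparse strikes, quadratic shape, small noise.
rng = np.random.default_rng(0)
K_obs = np.linspace(60.0, 160.0, 21)
iv_obs = 0.20 + 0.20 * (K_obs / S0 - 1.0) ** 2 + rng.normal(0.0, 0.002, K_obs.size)

# Degree-5 smoothing spline: the fitted smile is C^4, so the implied
# density below is differentiable (unlike a cubic-spline fit).
smile = UnivariateSpline(K_obs, iv_obs, k=5, s=K_obs.size * 0.002 ** 2)

def bs_call(K, sigma):
    """Black-Scholes call price for spot S0, rate r, maturity T."""
    st = sigma * np.sqrt(T)
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / st
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - st)

# Breeden-Litzenberger: q(K) = exp(rT) * d2C/dK2, evaluated inside the
# observed strike range to avoid spline extrapolation.
K = np.linspace(62.0, 158.0, 1500)
C = bs_call(K, smile(K))
q = np.exp(r * T) * np.gradient(np.gradient(C, K), K)
```

Because the spline is twice continuously differentiable everywhere (including at the knots), the implied density inherits that smoothness, which is exactly the property the cubic-spline fits lack.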
Positron annihilation and speed of sound in the systems containing beta cyclodextrin
Positron annihilation measurements were performed in aqueous solutions of beta-cyclodextrin, as well as in solid mixtures of this sugar with a long-chained alcohol, n-nonanol. Additionally, acoustic (sound speed, density and compressibility) experiments were done in aqueous beta-cyclodextrin and tert-butanol systems and in a three-component water-beta-cyclodextrin-tert-butanol system. The results show that in aqueous solution cyclodextrin does not form inclusion complexes with the alcohol, while solid sugar-alcohol mixtures undergo slow changes in time, most probably caused by exchange of the guest between the interior and exterior of the host molecule. (authors)
Gray, Shelley; Pittman, Andrea; Weinhold, Juliet
2014-01-01
Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…
Sommerfeld enhancement of invisible dark matter annihilation in galaxies and galaxy clusters
Chan, Man Ho
2016-01-01
Recent observations indicate that core-like dark matter structures exist in many galaxies, while numerical simulations reveal a singular dark matter density profile at the center. In this article, I show that if the annihilation of dark matter particles gives invisible sterile neutrinos, the Sommerfeld enhancement of the annihilation cross-section can give a sufficiently large annihilation rate to solve the core-cusp problem. The resultant core density, core radius, and their scaling relation...
Black Hole Window into p-Wave Dark Matter Annihilation.
Shelton, Jessie; Shapiro, Stuart L; Fields, Brian D
2015-12-01
We present a new method to measure or constrain p-wave-suppressed cross sections for dark matter (DM) annihilations inside the steep density spikes induced by supermassive black holes. We demonstrate that the high DM densities, together with the increased velocity dispersion, within such spikes combine to make thermal p-wave annihilation cross sections potentially visible in γ-ray observations of the Galactic center (GC). The resulting DM signal is a bright central point source with emission originating from DM annihilations in the absence of a detectable spatially extended signal from the halo. We define two simple reference theories of DM with a thermal p-wave annihilation cross section and establish new limits on the combined particle and astrophysical parameter space of these models, demonstrating that Fermi Large Area Telescope is currently sensitive to thermal p-wave DM over a wide range of possible scenarios for the DM distribution in the GC. PMID:26684108
Bubble chamber: antiproton annihilation
1971-01-01
These images show real particle tracks from the annihilation of an antiproton in the 80 cm Saclay liquid hydrogen bubble chamber. A negative kaon and a neutral kaon are produced in this process, as well as a positive pion. The invention of bubble chambers in 1952 revolutionized the field of particle physics, allowing real tracks left by particles to be seen and photographed by expanding liquid that had been heated to boiling point.
PSA analysis should be based on the best available data for the types of equipment and systems in the plant. In some cases very limited data may be available for evolutionary designs or new equipment, especially in the case of passive systems. It has been recognized that difficulties arise in addressing the uncertainties related to the physical phenomena and in characterizing the parameters relevant to the passive system performance evaluation, given the unavailability of a consistent operational and experimental database. This lack of experimental evidence and validated data forces the analyst to resort to expert/engineering judgment to a large extent, thus making the results strongly dependent upon the expert elicitation process. This prompts the need for the development of a framework for constructing a database to generate probability distributions for the parameters influencing the system behaviour. The objective of the task is to develop a consistent framework aimed at creating probability distributions for the parameters relevant to the passive system performance evaluation. In order to achieve this goal, considerable experience and engineering judgement are also required to determine which existing data are most applicable to the new systems, or which generic databases or models provide the best information for the system design. Finally, in the absence of documented specific reliability data, documented expert judgement derived from a well-structured procedure could be used to construct sound probability distributions for the parameters of interest
Rice, S. O.
1982-01-01
L. A. Shepp [1] has studied the distribution of the integral of the absolute value of the pinned Wiener process, and has expressed the moment generating function in terms of a Laplace transform. Here we apply Shepp's results to obtain an integral for the density of the distribution. This integral is then evaluated by numerical integration along a path in the complex plane.
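As an illustration of recovering a function by numerical inversion of a Laplace transform along a contour in the complex plane, the block below implements the fixed-Talbot method of Abate and Valkó and checks it on a known transform pair; this is a generic sketch, not Shepp's specific transform for the pinned Wiener process:

```python
import numpy as np

def talbot_invert(F, t, M=32):
    """Fixed-Talbot numerical Laplace inversion (Abate & Valko) at t > 0.

    Sums F along the deformed Bromwich contour s(theta) = r*theta*(cot(theta) + i)
    with r = 2M/(5t); returns an approximation to f(t).
    """
    r = 2.0 * M / (5.0 * t)
    total = 0.5 * np.exp(r * t) * F(r)
    for k in range(1, M):
        theta = k * np.pi / M
        cot = 1.0 / np.tan(theta)
        s = r * theta * (cot + 1j)
        sigma = theta + (theta * cot - 1.0) * cot
        total += (np.exp(s * t) * F(s) * (1.0 + 1j * sigma)).real
    return (r / M) * total

# Toy check with a known pair (NOT Shepp's transform): F(s) = 1/(s + 1)
# is the Laplace transform of f(t) = exp(-t).
f_hat = talbot_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

The contour deformation trades the oscillatory Bromwich integrand for one that decays rapidly, so a few dozen nodes already give near machine-precision accuracy for smooth transforms.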
Semi-Annihilating Wino-Like Dark Matter
Spray, Andrew P
2015-01-01
Semi-annihilation is a generic feature of dark matter theories with symmetries larger than Z_2. We explore a model based on a Z_4-symmetric dark sector comprised of a scalar singlet and a "wino"-like fermion SU(2)_L triplet. This is the minimal example of semi-annihilation with a gauge-charged fermion. We study the interplay of the Sommerfeld effect in both annihilation and semi-annihilation channels. The modifications to the relic density allow otherwise-forbidden regions of parameter space and can substantially weaken indirect detection constraints. We perform a parameter scan and find that the entire region where the model comprises all the observed dark matter is accessible to current and planned direct and indirect searches.
Dark Matter Annihilation in the First Galaxy Halos
Schon, Sarah; Avram, Cassandra A; Wyithe, J Stuart B; Barberio, Elisabetta
2014-01-01
We investigate the impact of energy released from self-annihilating dark matter on heating of gas in the small, high-redshift dark matter halos thought to host the first stars. A SUSY neutralino like particle is implemented as our dark matter candidate. The PYTHIA code is used to model the final, stable particle distributions produced during the annihilation process. We use an analytic treatment in conjunction with the code MEDEA2 to find the energy transfer and subsequent partition into heating, ionizing and Lyman alpha photon components. We consider a number of halo density models, dark matter particle masses and annihilation channels. We find that the injected energy from dark matter exceeds the binding energy of the gas within a 10^5-10^6 M_⊙ halo at redshifts above 20, preventing star formation in early halos in which primordial gas would otherwise cool. Thus we find that DM annihilation could delay the formation of the first galaxies.
Dark matter annihilation in the first galaxy haloes
Schön, S.; Mack, K. J.; Avram, C. A.; Wyithe, J. S. B.; Barberio, E.
2015-08-01
We investigate the impact of energy released from self-annihilating dark matter (DM) on heating of gas in the small, high-redshift DM haloes thought to host the first stars. A supersymmetric (SUSY)-neutralino-like particle is implemented as our DM candidate. The PYTHIA code is used to model the final, stable particle distributions produced during the annihilation process. We use an analytic treatment in conjunction with the code MEDEA2 to find the energy transfer and subsequent partition into heating, ionizing and Lyman α photon components. We consider a number of halo density models, DM particle masses and annihilation channels. We find that the injected energy from DM exceeds the binding energy of the gas within a 10^5-10^6 M⊙ halo at redshifts above 20, preventing star formation in early haloes in which primordial gas would otherwise cool. Thus we find that DM annihilation could delay the formation of the first galaxies.
Some further ideas on the systematic variation of the positron annihilation parameters in metals
A new systematic correlation was found between some positron annihilation parameters and the electron density of the elements. An estimation of the S-electron density in transition metals has been made. (author)
Dark matter annihilation near a black hole: Plateau versus weak cusp
Dark matter annihilation in so-called spikes near black holes is believed to be an important method of indirect dark matter detection. In the case of circular particle orbits, the density profile of dark matter has a plateau at small radii, the maximal density being limited by the annihilation cross section. However, in the general case of arbitrary velocity anisotropy the situation is different. In particular, for an isotropic velocity distribution the density profile cannot be shallower than r^(-1/2) in the very center. Indeed, a detailed study reveals that in many cases the term "annihilation plateau" is misleading, as the density actually continues to rise towards small radii and forms a weak cusp, ρ ∝ r^(-(β+1/2)), where β is the anisotropy coefficient. The annihilation flux, however, does not change much in the latter case, if averaged over an area larger than the annihilation radius
Neutrino annihilation in hot plasma
We consider neutrino annihilation in a heat bath, including annihilation via the photon. We show that the annihilation cross section has high and narrow peaks corresponding to a plasmon resonance. This yields an enormous enhancement factor of O(10^8) in the differential cross section as compared with the purely weak contribution. We also evaluate numerically the thermally averaged neutrino annihilation rate per particle in the heat bath of the early universe to be ⟨σv(νν̄ → e+e-)⟩ ≅ 2.93 G_F^2 T^2. We have accounted for the final-state blocking factors as well as for the fact that the center-of-mass frame of collisions is not necessarily the rest frame of the heat bath. Despite the resonances, electromagnetic processes represent only a minor effect in the averaged annihilation rate. (orig.)
Pokhrel, R.; Gutermuth, R.; Ali, B.; Megeath, T.; Pipher, J.; Myers, P.; Fischer, W. J.; Henning, T.; Wolk, S. J.; Allen, L.; Tobin, J. J.
2016-06-01
We present a far-IR survey of the entire Mon R2 GMC with Herschel-SPIRE cross-calibrated with Planck-HFI data. We fit the SEDs of each pixel with a greybody function and an optimal beta value of 1.8. We find that mid-range column densities obtained from far-IR dust emission and near-IR extinction are consistent. For the entire GMC, we find that the column density histogram, or N-PDF, is lognormal below ~10^21 cm^-2. Above this value, the distribution takes a power law form with an index of -2.15. We analyze the gas geometry, N-PDF shape, and YSO content of a selection of subregions in the cloud. We find no regions with pure lognormal N-PDFs. The regions with a combination of lognormal and one power law N-PDF have a YSO cluster and a corresponding centrally concentrated gas clump. The regions with a combination of lognormal and two power law N-PDF have significant numbers of typically younger YSOs but no prominent YSO cluster. These regions are composed of an aggregate of closely spaced gas filaments with no concentrated dense gas clump. We find that for our fixed scale regions, the YSO count roughly correlates with the N-PDF power law index. The correlation appears steeper for single power law regions relative to two power law regions with a high column density cut-off, as a greater dense gas mass fraction is achieved in the former. A stronger correlation is found between embedded YSO count and the dense gas mass among our regions.
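The power-law tail fit described above can be sketched on synthetic data: a sample with a lognormal bulk and an attached Pareto tail of index -2.15 is histogrammed into an N-PDF, and the tail slope is recovered by a log-log linear fit (all column-density values below are illustrative, not Mon R2 data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic column densities [cm^-2]: lognormal bulk plus a power-law
# (Pareto) tail of index -2.15 above N0 -- invented, not real maps.
alpha = 2.15
N0 = 1e21
N_ln = np.exp(rng.normal(np.log(3e20), 0.5, 200000))
N_pl = N0 * (1.0 - rng.random(20000)) ** (-1.0 / (alpha - 1.0))
N = np.concatenate([N_ln, N_pl])

# N-PDF: histogram of column density, then a log-log slope fit restricted
# to the regime where the power-law component dominates.
edges = np.logspace(20, 23, 61)
hist, _ = np.histogram(N, bins=edges, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
tail = (centers > 2e21) & (hist > 0)
slope = np.polyfit(np.log10(centers[tail]), np.log10(hist[tail]), 1)[0]
```

The fitted slope should recover the injected index of about -2.15; choosing the threshold column density above the lognormal bulk is the step that requires judgment on real maps.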
Van Hooydonk, G
2011-01-01
We review harmonic oscillator theory for closed, stable quantum systems. The H2 potential energy curve (PEC) of Mexican-hat type, calculated with a confined Kratzer oscillator, is better than the Rydberg-Klein-Rees (RKR) H2 PEC. Compared with QM, the theory of chemical bonding is simplified, since a confined Kratzer oscillator gives the long-sought universal function, once called the Holy Grail of Molecular Spectroscopy. This is validated with HF, I2, N2 and O2 PECs. We quantify the entanglement of spatially separated H2 quantum states, which gives a braid view. The equal probability for H2, originating either from HA+HB or HB+HA, is quantified with a Gauss probability density function. At the Bohr scale, confined harmonic oscillators behave properly at all extremes of bound two-nucleon quantum systems and are likely to be useful also at the nuclear scale.
Carta, Jose A. [Department of Mechanical Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Ramirez, Penelope; Velazquez, Sergio [Department of Renewable Energies, Technological Institute of the Canary Islands, Pozo Izquierdo Beach s/n, 35119 Santa Lucia, Gran Canaria, Canary Islands (Spain)
2008-10-15
Static methods which are based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature related to wind energy. In the static method which we use in this paper, for a given wind regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral, usually using numerical evaluation techniques, of the product of these two functions. In this paper an analysis is made of the influence of the level of fit between an empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error ε made in the estimation of the mean annual power output of a WECS. The mean power output calculated through the use of a quasi-dynamic or chronological method, that is to say using time-series of wind speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R² statistic (R_a²). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) the R_a² statistic might be useful as an initial gross indicator of the relative error made in the mean annual power output estimation of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as R_a² increases. (author)
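The static method is a one-line integral once the two ingredients are fixed. A minimal sketch with a hypothetical Weibull wind regime and an idealized 800 kW power curve (cut-in 3 m/s, rated 13 m/s, cut-out 25 m/s; none of these values are fitted to the Canarian data in the paper):

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical Weibull wind-speed model (shape k, scale c in m/s).
k, c = 2.0, 8.0

def weibull_pdf(v):
    return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

# Idealized power curve [kW] for a hypothetical 800 kW turbine:
# cut-in 3 m/s, rated 13 m/s, cut-out 25 m/s, cubic ramp in between.
def power_kw(v):
    if v < 3.0 or v > 25.0:
        return 0.0
    if v >= 13.0:
        return 800.0
    return 800.0 * (v ** 3 - 3.0 ** 3) / (13.0 ** 3 - 3.0 ** 3)

# Static method: mean power = integral of P(v) * f(v) dv.
mean_power, _ = quad(lambda v: power_kw(v) * weibull_pdf(v), 0.0, 30.0,
                     points=[3.0, 13.0, 25.0])
```

The quality of this estimate hinges entirely on how well the Weibull (or other) model fits the empirical wind-speed histogram, which is exactly the sensitivity the paper quantifies through R_a².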
ICA Blind Signal Separation Based on a New Probability Density Function
张娟娟; 邸双亮
2014-01-01
This paper is concerned with the blind source separation (BSS) problem of super-Gaussian and sub-Gaussian mixed signals, using the maximum likelihood method based on independent component analysis (ICA). We construct a new type of probability density function (PDF), different from the PDFs already used to separate mixed signals in previously published papers. When the newly constructed PDF is applied to estimate the probability density of super-Gaussian and sub-Gaussian signals (assuming the source signals are independent of each other), it is not necessary to change the parameter values artificially, and the separation can be performed adaptively. Numerical experiments verify the feasibility of the newly constructed PDF; both the convergence time and the separation quality are improved compared with the original algorithm.
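The maximum-likelihood ICA setup can be sketched with the classic natural-gradient (Infomax) update; here the standard tanh score stands in for the paper's newly constructed PDF, and the mixing matrix and Laplacian (super-Gaussian) sources are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Two independent super-Gaussian (Laplacian) sources, unit variance.
S = rng.laplace(size=(2, n)) / np.sqrt(2.0)
A = np.array([[0.8, 0.4], [0.3, 0.9]])   # hypothetical mixing matrix
X = A @ S                                 # observed mixtures

# Maximum-likelihood ICA via the natural-gradient (Infomax) update,
# W <- W + lr * (I - g(Y) Y^T / n) W, with the tanh score g as a
# stand-in for the paper's new PDF-derived score function.
W = np.eye(2)
lr = 0.05
for _ in range(500):
    Y = W @ X
    W += lr * (np.eye(2) - np.tanh(Y) @ Y.T / n) @ W

Y = W @ X   # recovered sources
```

The recovered rows match the true sources only up to permutation and scaling, the usual ICA indeterminacies; the choice of score function (i.e., the assumed source PDF) is what determines whether super- and sub-Gaussian sources can be handled together.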
The authors construct probability density functions for signal and background events in multi-dimensional space, using Monte Carlo samples. A variant of the Bayes' discriminant function is then applied to classify signal and background events. The effect of some kinematic quantities on the performance of the discriminant has been studied and the results of applying the PDE method to search for the top quark in D0 data (p bar p collisions at √s = 1.8 TeV) will be presented
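A minimal sketch of the probability-density-estimation approach on toy data: Gaussian kernel density estimates built from "Monte Carlo" samples stand in for the multi-dimensional signal and background densities, and the discriminant D = f_s/(f_s + f_b) classifies events (the 2-D distributions below are invented, not D0 kinematic variables):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Hypothetical 2-D "kinematic" distributions: signal and background
# Monte Carlo samples (stand-ins for the real analysis variables).
sig = rng.normal(loc=[2.0, 1.5], scale=0.8, size=(1000, 2))
bkg = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(1000, 2))

# Probability density estimation with Gaussian kernels.
f_sig = gaussian_kde(sig.T)
f_bkg = gaussian_kde(bkg.T)

def discriminant(x):
    """Bayes-style discriminant D = f_s / (f_s + f_b), in [0, 1]."""
    s, b = f_sig(x.T), f_bkg(x.T)
    return s / (s + b)

# Classify held-out events: D > 0.5 -> signal.
test_sig = rng.normal(loc=[2.0, 1.5], scale=0.8, size=(500, 2))
test_bkg = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))
acc = 0.5 * ((discriminant(test_sig) > 0.5).mean()
             + (discriminant(test_bkg) < 0.5).mean())
```

Cutting on D at a value other than 0.5 trades signal efficiency against background rejection, which is how such a discriminant is tuned in practice.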
Electron-positron annihilation rates calculated directly from the electron and positron densities are known to underestimate the true annihilation rate. A correction factor, known as the enhancement factor, allows for the local increase of the electron density around the positron caused by the attractive electron-positron interaction. Enhancement factors are given for positrons annihilating with the 1s electron in H, He+, He, Li2+, and Li+. The enhancement factor for a free positron annihilating with He+ and He is found to be close to that of ortho-positronium (i.e., Ps in its triplet state) annihilating with these atoms. The enhancement factor for Ps-He scattering is used in conjunction with the known annihilation rate for pickoff annihilation to derive a scattering length of 1.47 a0 for Ps-He scattering. Further, enhancement factors for e+-Ne and e+-Ar annihilation are used in conjunction with the pickoff annihilation rate to estimate scattering lengths of 1.46 a0 for Ps-Ne scattering and 1.75 a0 for Ps-Ar scattering
Tremblin, P; Minier, V; Didelon, P; Hill, T; Anderson, L D; Motte, F; Zavagno, A; André, Ph; Arzoumanian, D; Audit, E; Benedettini, M; Bontemps, S; Csengeri, T; Di Francesco, J; Giannini, T; Hennemann, M; Luong, Q Nguyen; Marston, A P; Peretto, N; Rivera-Ingraham, A; Russeil, D; Rygl, K L J; Spinoglio, L; White, G J
2014-01-01
Ionization feedback should impact the probability distribution function (PDF) of the column density around the ionized gas. We aim to quantify this effect and discuss its potential link to the Core and Initial Mass Function (CMF/IMF). We made systematic use of Herschel column density maps of several regions observed within the HOBYS key program: M16, the Rosette and Vela C molecular clouds, and the RCW 120 H II region. We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a double-peaked or enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas, while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion able t...
Dark Matter Annihilation at the Galactic Center
Linden, Timothy Ryan [Univ. of California, Santa Cruz, CA (United States)
2013-06-01
Observations by the WMAP and PLANCK satellites have provided extraordinarily accurate measurements of the densities of baryonic matter, dark matter, and dark energy in the universe. These observations indicate that our universe is composed of approximately five times as much dark matter as baryonic matter. However, efforts to detect a particle responsible for the energy density of dark matter have been unsuccessful. Theoretical models have indicated that a leading candidate for the dark matter is the lightest supersymmetric particle, which may be stable due to a conserved R-parity. This dark matter particle would still be capable of interacting with baryons via weak-force interactions in the early universe, a process which was found to naturally explain the observed relic abundance of dark matter today. These residual annihilations can persist, albeit at a much lower rate, in the present universe, providing a detectable signal from dark matter annihilation events which occur throughout the universe. Simulations calculating the distribution of dark matter in our galaxy almost universally predict the galactic center of the Milky Way Galaxy (GC) to provide the brightest signal from dark matter annihilation due to its relative proximity and large simulated dark matter density. Recent advances in telescope technology have allowed for the first multiwavelength analysis of the GC, with suitable effective exposure, angular resolution, and energy resolution to detect dark matter particles with properties similar to those predicted by the WIMP miracle. In this work, I describe ongoing efforts which have successfully detected an excess in γ-ray emission from the region immediately surrounding the GC, which is difficult to describe in terms of the standard diffuse emission predicted in the GC region. While the jury is still out on any dark matter interpretation of this excess, I describe several related observations which may indicate a dark matter origin. Finally, I discuss the
Monomer Migration and Annihilation Processes
KE Jian-Hong; LIN Zhen-Quan; ZHUANG You-Yi
2005-01-01
We propose a two-species monomer migration-annihilation model, in which monomer migration reactions occur between any two aggregates of the same species and monomer annihilation reactions occur between two different species. Based on the mean-field rate equations, we investigate the evolution behaviors of the processes. For the case with an annihilation rate kernel proportional to the sizes of the reactants, the aggregation size distribution of either species approaches the modified scaling form in the symmetrical initial case, while for the asymmetrical initial case the heavy species with a large initial data scales according to the conventional form and the light one does not scale. Moreover, at most one species can survive finally. For the case with a constant annihilation rate kernel, both species may scale according to the conventional scaling law in the symmetrical case and survive together at the end.
Positron annihilation studies of organic superconductivity
The positron lifetimes of two organic superconductors, κ-(ET)2Cu(NCS)2 and κ-(ET)2Cu[N(CN)2]Br, are measured as a function of temperature across Tc. A drop of the positron lifetime below Tc is observed. Positron-electron momentum densities are measured by using 2D-ACAR to search for the Fermi surface in κ-(ET)2Cu[N(CN)2]Br. Positron density distributions and positron-electron overlaps are calculated by using the orthogonalized linear combination of atomic orbitals (OLCAO) method to interpret the temperature dependence due to the local charge transfer, which is inferred to relate to the superconducting transition. 2D-ACAR results in κ-(ET)2Cu[N(CN)2]Br are compared with theoretical band calculations based on a first-principles local density approximation. The importance of performing accurate band calculations for the interpretation of positron annihilation data is emphasized
Jean, Y.C. [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States)]. E-mail: jeany@umkc.edu; Li Ying [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Liu Gaung [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Chen, Hongmin [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Zhang Junjie [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Gadzia, Joseph E. [Dermatology, Department of Internal Medicine, University of Kansas Medical Center, Kansas City, KS 66103 (United States); Kansas Medical Clinic, Topeka, KS 66614 (United States)
2006-02-28
Slow positrons and positron annihilation spectroscopy (PAS) have been applied to medical research in a search for positron annihilation selectivity to cancer cells. We report the results of positron lifetime and Doppler broadening energy spectroscopies in human skin samples with and without cancer as a function of positron incident energy (up to 8 µm depth) and found that positronium annihilates at a significantly lower rate and forms with a lower probability in samples having either basal cell carcinoma (BCC) or squamous cell carcinoma (SCC) than in normal skin. The significant selectivity of positron annihilation to skin cancer may open a new research area: developing positron annihilation spectroscopy as a novel medical tool to detect cancer formation externally and non-invasively at the early stages.
Modeling of positron states and annihilation in solids
Theoretical models and computational aspects to describe positron states and to predict positron annihilation characteristics in solids are discussed. The comparison of the calculated positron lifetimes, core annihilation lineshapes, and two-dimensional angular correlation maps with experimental results is used in identifying the structure (including the chemical composition) of vacancy-type defects and their development, e.g. during thermal annealing. The basis of the modeling is the two-component density-functional theory. The ensuing approximations and the state-of-the-art electronic-structure computation methods enable practical schemes with quantitative predicting power. (author)
Relativistic hydrodynamics, heavy ion reactions and antiproton annihilation
The application of relativistic hydrodynamics to relativistic heavy ions and antiproton annihilation is summarized. Conditions for the validity of hydrodynamics are presented. Theoretical results for inclusive particle spectra, pion production and flow analysis are given for medium-energy heavy ions. The two-fluid model is introduced and results are presented for reactions from 800 MeV per nucleon to 15 GeV on 15 GeV per nucleon. Temperatures and densities attained in antiproton annihilation are given. Finally, signals which might indicate the presence of a quark-gluon plasma are briefly surveyed.
The Derivation of the Probability Density Function of the t Distribution
彭定忠; 张映辉; 刘朝才
2012-01-01
The t distribution is one of the three most important distributions widely applied in mathematical statistics. Most textbooks either omit the derivation of its probability density function or obtain it only by the direct method. In this paper the transform method is used for the derivation, which simplifies the operations and reduces the computational difficulty.
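The transform-method derivation summarized above follows a standard textbook route; a sketch, using the usual representation T = Z/√(V/n) with Z standard normal and V chi-squared with n degrees of freedom:

```latex
With $Z \sim N(0,1)$ independent of $V \sim \chi^2_n$ and $T = Z/\sqrt{V/n}$,
conditioning on $V$ and integrating it out gives
\[
f_T(t) = \int_0^\infty \sqrt{\tfrac{v}{n}}\,
         \varphi\!\left(t\sqrt{\tfrac{v}{n}}\right) f_V(v)\,dv
       = \frac{1}{\sqrt{2\pi n}\,2^{n/2}\,\Gamma(n/2)}
         \int_0^\infty v^{\frac{n-1}{2}}
         e^{-\frac{v}{2}\left(1+\frac{t^2}{n}\right)}\,dv .
\]
The remaining integral is a Gamma integral,
$\int_0^\infty v^{\frac{n-1}{2}} e^{-a v}\,dv
 = \Gamma\!\left(\tfrac{n+1}{2}\right) a^{-\frac{n+1}{2}}$
with $a = \tfrac{1}{2}\bigl(1+\tfrac{t^2}{n}\bigr)$, so
\[
f_T(t) = \frac{\Gamma\!\left(\frac{n+1}{2}\right)}
              {\sqrt{n\pi}\,\Gamma\!\left(\frac{n}{2}\right)}
         \left(1+\frac{t^2}{n}\right)^{-\frac{n+1}{2}} ,
\]
which is the t density with $n$ degrees of freedom.
```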
We have developed a theoretical model of photoinduced reactions on metal surfaces initiated by the substrate/indirect excitation mechanism using the nonequilibrium Green's function approach. We focus on electron transfer, which consists of (1) electron-hole pair creation, (2) transport of created hot electrons, and (3) tunneling of hot electrons to form an anion resonance. We assume that steps (1), (2), and (3) are separable. By this assumption, the electron dynamics might be restated as a tunneling problem of an open system. Combining the Keldysh time-independent formalism with the simple transport theory introduced by Berglund and Spicer, we present a practical scheme for first-principles calculation of the reaction probability as a function of incident photon energy. The method is illustrated by application to the photoinduced desorption/dissociation of O2 on a Ag(110) surface by adopting density functional theory.
Lexicographic probability, conditional probability, and nonstandard probability
Halpern, Joseph Y.
2003-01-01
The relationship between Popper spaces (conditional probability spaces that satisfy some regularity conditions), lexicographic probability systems (LPS's), and nonstandard probability spaces (NPS's) is considered. If countable additivity is assumed, Popper spaces and a subclass of LPS's are equivalent; without the assumption of countable additivity, the equivalence no longer holds. If the state space is finite, LPS's are equivalent to NPS's. However, if the state space is infinite, NPS's are ...
The influence of antioxidant on positron annihilation in polypropylene
The purpose of this report is to check the influence of the carbonyl groups (CG), created by oxygen naturally dissolved in a polymer matrix and by the source irradiation, on annihilation characteristics of free positrons using positron annihilation lifetime spectroscopy (PALS) and coincidence Doppler-broadening spectroscopy (CDBS). Positron annihilation in a pure polypropylene (PP) and in an antioxidant-containing polypropylene (PPA) sample at room and low temperatures has been studied by CDBS. PALS has been used as an o-Ps (ortho-positronium) formation monitor. The momentum density distributions of electrons obtained by CDBS at the beginning of measurements have been compared with those at the o-Ps intensity saturation level. It has been shown that the initial concentration of carbonyl groups in a PP sample is high, while for the antioxidant-containing sample, PPA, carbonyl groups are not detected by CDBS. CDBS spectra for PP can be explained by annihilation of free positrons with the oxygen contained in the carbonyl groups. For the PPA sample, no significant contribution of annihilation with oxygen core electrons can be concluded. (Y. Kazumata)
First star formation with dark matter annihilation
Ripamonti, E; Ferrara, A; Schneider, R; Bressan, A; Marigo, P
2010-01-01
We study the effects of WIMP Dark Matter Annihilations (DMAs) on the evolution of primordial gas clouds hosting the first stars. We follow the collapse of gas and DM within a 1e6 Msun halo virializing at redshift z=20, from z=1000 to slightly before the formation of a hydrostatic core, properly including gas heating/cooling and chemistry processes induced by DMAs, and exploring the dependence of the results on different parameters (DM particle mass, self-annihilation cross section, gas opacity, feedback strength). Independently of such parameters, when the central baryon density, n_c, is lower than the critical density, n_crit ~ 1e9-1e13 cm^-3, corresponding to a model-dependent balance between DMA energy input and gas cooling rate, DMA ionizations catalyze an increase in the H2 abundance by a factor ~100. The increased cooling moderately reduces the temperature (by ~30%) but does not significantly reduce the fragmentation mass scale. For n_c > n_crit, the DMA energy injection exceeds the cooling, with the ex...
Weak annihilation cusp inside the dark matter spike about a black hole
Shapiro, Stuart L.; Shelton, Jessie
2016-01-01
We reinvestigate the effect of annihilations on the distribution of collisionless dark matter (DM) in a spherical density spike around a massive black hole. We first construct a very simple, pedagogic, analytic model for an isotropic phase space distribution function that accounts for annihilation and reproduces the "weak cusp" found by Vasiliev for DM deep within the spike and away from its boundaries. The DM density in the cusp varies as $r^{-1/2}$ for $s$-wave annihilation, where $r$ is th...
Development of a pico-second life-time spectrometer for positron annihilation studies
Positron annihilation technique is a sensitive probe to investigate various physico-chemical phenomena due to the ability to provide information about the electron momentum and density in any medium. While measurements on the Doppler broadening and angular correlation of annihilation photons provide information about the electron momentum, the electron density at the annihilation site is obtained, by the positron life-time measurement. This report describes the development, optimization and calibration of a high resolution life-time spectrometer (FWHM=230 ps), based on fast-fast coincidence technique, a relatively new concept in nuclear timing spectroscopy. (author). 4 refs., 9 figs., 1 tab
Dutta, Bhaskar; Ghosh, Tathagata; Strigari, Louis E
2015-01-01
Many particle dark matter models predict that the dark matter undergoes cascade annihilations, i.e. the annihilation products are 4-body final states. In the context of model-independent cascade annihilation models, we study the compatibility of the dark matter interpretation of the Fermi- LAT Galactic center gamma-ray emission with null detections from dwarf spheroidal galaxies. For canonical values of the Milky Way density profile and the local dark matter density, we find that the dark matter interpretation to the Galactic center emission is strongly constrained. However, uncertainties in the dark matter distribution weaken the constraints and leave open dark matter interpretations over a wide range of mass scales.
Sommerfeld enhancement of invisible dark matter annihilation in galaxies and galaxy clusters
Chan, Man Ho
2016-01-01
Recent observations indicate that core-like dark matter structures exist in many galaxies, while numerical simulations reveal a singular dark matter density profile at the center. In this article, I show that if the annihilation of dark matter particles gives invisible sterile neutrinos, the Sommerfeld enhancement of the annihilation cross-section can give a sufficiently large annihilation rate to solve the core-cusp problem. The resultant core density, core radius, and their scaling relation generally agree with recent empirical fits from observations. Also, this model predicts that the resultant core-like structures in dwarf galaxies can be easily observed, but not for large normal galaxies and galaxy clusters.
Sommerfeld enhancement of invisible dark matter annihilation in galaxies and galaxy clusters
Chan, Man Ho
2016-07-01
Recent observations indicate that core-like dark matter structures exist in many galaxies, while numerical simulations reveal a singular dark matter density profile at the center. In this article, I show that if the annihilation of dark matter particles gives invisible sterile neutrinos, the Sommerfeld enhancement of the annihilation cross-section can give a sufficiently large annihilation rate to solve the core-cusp problem. The resultant core density, core radius, and their scaling relation generally agree with recent empirical fits from observations. Also, this model predicts that the resultant core-like structures in dwarf galaxies can be easily observed, but not for large normal galaxies and galaxy clusters.
Slater, Paul B
2010-01-01
The nonnegativity of the determinant of the partial transpose of a two-qubit (4 x 4) density matrix is both a necessary and sufficient condition for its separability. While the determinant is restricted to the interval [0,1/256], the determinant of the partial transpose can range over [-1/16,1/256], with negative values corresponding to entangled states. We report here the exact values of the first nine moments of the probability distribution of the partial transpose over this interval, with respect to the Hilbert-Schmidt (metric volume element) measure on the nine-dimensional convex set of real two-qubit density matrices. Rational functions C_{2 j}(m), yielding the coefficients of the 2j-th power of even polynomials occurring at intermediate steps in our derivation of the m-th moment, emerge. These functions possess poles at finite series of consecutive half-integers (m=-3/2,-1/2,...,(2j-1)/2), and certain (trivial) roots at finite series of consecutive natural numbers (m=0, 1,...). Additionally, the (nontri...
Defects in different types of crystalline and fused quartz have been studied by conventional coincidence positron annihilation and optical absorption techniques before and after 60Co gamma irradiation with 500 krad, 2 Mrad and 15.8 Mrad. Samples of synthetic powdered quartz (SPQ), natural quartz (NQ), low-OH synthetic monocrystal quartz (LSMQ), high-OH fused quartz (HFQ) and low-OH fused quartz (LFQ) have been investigated. Two- and three-component analyses of the positron lifetime spectra have been applied. Data on lifetimes (τ), intensities (I) and mean lifetimes have been obtained by exponential fitting of the spectra. In non-irradiated SPQ and LSMQ, large differences in the values of I2 (1.53% vs. 16.0%) and τ2 (1460 ps vs. 478 ps) have been noticed. This is explained by an increased number of dislocations in the synthetic quartz. The τ2 is interpreted as an apparent mixed lifetime of the pick-off annihilation of o-Ps and positron annihilation in micro-cracks. The values of τ1 in HFQ (178 ps) and in LFQ (173 ps) are attributed to positron annihilation in small crystalline areas in the glass. Because of the sharp increase in Ps formation probability in the amorphous state, the longest-component intensity I3 in these samples is of the order of 50%. After gamma irradiation, a creation of coloured centres has been observed only in SPQ and LFQ, which is connected with Al substitutional impurity. The newly detected diffuse band at 215 nm in UV-spectra of irradiated LFQ is attributed to a positively charged oxygen vacancy (E'1 centre), which explains the lack of difference between the parameters of irradiated and non-irradiated LFQ. The increased mean positron lifetime of irradiated HFQ is ascribed to the creation of negatively charged defects able to trap positrons. Except for HFQ, all samples have surprisingly shown a slight decrease in their mean positron lifetime values after low-dose irradiation. The authors ascribe this to possible self-annealing of some defects due
Biological effectiveness of antiproton annihilation
Holzscheiter, Michael H.; Bassler, Niels; Beyer, Gerd; De Marco, John J.; Doser, Michael; Ichioka, Toshiyasu; Iwamoto, Keisuke S.; Knudsen, Helge V.; Landua, Rolf; Maggiore, Carl; McBride, William H.; Møller, Søren Pape; Petersen, Jorgen; Smathers, James B.; Skarsgard, Lloyd D.; Solberg, Timothy D.; Uggerhøj, Ulrik I.; Withers, H.Rodney; Vranjes, Sanja; Wong, Michelle; Wouters, Bradly G.
2004-01-01
We describe an experiment designed to determine whether or not the densely ionizing particles emanating from the annihilation of antiprotons produce an increase in “biological dose” in the vicinity of the narrow Bragg peak for antiprotons compared to protons. This experiment is the first direct measurement of the biological effects of antiproton annihilation. The experiment has been approved by the CERN Research Board for running at the CERN Antiproton Decelerator (AD) as AD-4/ACE (Antiproton Cell Experiment) and has begun data taking in June of 2003. The background, description and the current status of the experiment are given.
Positron Annihilation 3-D Momentum Spectrometry by Synchronous 2D-ACAR and DBAR
Burggraf, Larry W.; Bonavita, Angelo M.; Williams, Christopher S.; Fagan-Kelly, Stefan B.; Jimenez, Stephen M.
2015-05-01
A positron annihilation spectroscopy system capable of determining 3D electron-positron (e⁻-e⁺) momentum densities has been constructed and tested. In this technique two opposed HPGe strip detectors measure angular coincidence of annihilation radiation (ACAR) and Doppler broadening of annihilation radiation (DBAR) in coincidence to produce 3D momentum datasets in which the parallel momentum component obtained from the DBAR measurement can be selected for annihilation events that possess a particular perpendicular momentum component observed in the 2D ACAR spectrum. A true 3D momentum distribution can also be produced. Measurement of 3D momentum spectra in oxide materials has been demonstrated, including O-atom defects in 6H SiC and silver atom substitution in lithium tetraborate crystals. Integration of the 3D momentum spectrometer with a slow positron beam for future surface resonant annihilation spectrometry measurements will be described. Sponsorship from Air Force Office of Scientific Research
D-brane scattering and annihilation
D'Amico, Guido; Kleban, Matthew; Schillo, Marjorie
2014-01-01
We study the dynamics of parallel brane-brane and brane-antibrane scattering in string theory in flat spacetime, focusing on the pair production of open strings that stretch between the branes. We are particularly interested in the case of scattering at small impact parameter $b < l_s$, where there is a tachyon in the spectrum when a brane and an antibrane approach within a string length. Our conclusion is that despite the tachyon, branes and antibranes can pass through each other with only a very small probability of annihilating, so long as $g_s$ is small and the relative velocity $v$ is neither too small nor too close to 1. Our analysis is relevant also to the case of charged open string production in world-volume electric fields, and we make use of this T-dual scenario in our analysis. We briefly discuss the application of our results to a stringy model of inflation involving moving branes.
Biological effectiveness of antiproton annihilation
Holzscheiter, M.H.; Agazaryan, N.; Bassler, Niels;
2004-01-01
We describe an experiment designed to determine whether or not the densely ionizing particles emanating from the annihilation of antiprotons produce an increase in "biological dose" in the vicinity of the narrow Bragg peak for antiprotons compared to protons. This experiment is the first direct...
Venturi, D.; Karniadakis, G. E.
2012-08-01
By using functional integral methods we determine new evolution equations satisfied by the joint response-excitation probability density function (PDF) associated with the stochastic solution to first-order nonlinear partial differential equations (PDEs). The theory is presented for both fully nonlinear and for quasilinear scalar PDEs subject to random boundary conditions, random initial conditions or random forcing terms. Particular applications are discussed for the classical linear and nonlinear advection equations and for the advection-reaction equation. By using a Fourier-Galerkin spectral method we obtain numerical solutions of the proposed response-excitation PDF equations. These numerical solutions are compared against those obtained by using more conventional statistical approaches such as probabilistic collocation and multi-element probabilistic collocation methods. It is found that the response-excitation approach yields accurate predictions of the statistical properties of the system. In addition, it allows one to directly ascertain the tails of probabilistic distributions, thus facilitating the assessment of rare events and associated risks. The computational cost of the response-excitation method is orders of magnitude smaller than that of more conventional statistical approaches if the PDE is subject to high-dimensional random boundary or initial conditions. The question of high-dimensionality for evolution equations involving multidimensional joint response-excitation PDFs is also addressed.
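As a point of orientation for the response-PDF idea above, here is a minimal brute-force Monte Carlo reference, not the functional-integral method of the paper: for the linear advection equation u_t + c u_x = 0 with a Gaussian random amplitude in the initial condition, the exact solution simply transports the data, so the one-point response PDF is known in closed form and a sampled estimate can be checked against it.

```python
import numpy as np

# Monte Carlo reference for the one-point response PDF of the linear
# advection equation u_t + c*u_x = 0 with random initial condition
# u(x, 0) = xi*sin(x), xi ~ N(0, 1).  The exact solution is
# u(x, t) = xi*sin(x - c*t), so at fixed (x, t) the response is
# Gaussian with mean 0 and standard deviation |sin(x - c*t)|.
rng = np.random.default_rng(0)
c, x, t = 1.0, 0.3, 0.5
xi = rng.standard_normal(200_000)
u = xi * np.sin(x - c * t)

exact_std = abs(np.sin(x - c * t))
# The sampled moments should match the exact Gaussian response law.
```

A PDF-equation solver (Fourier-Galerkin or otherwise) can be validated against exactly this kind of sampled reference before moving to cases with no closed-form solution.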
Santiago Lain; Ricardo Aliod
2000-01-01
A statistical formalism overcoming some conceptual and practical difficulties arising in existing two-phase flow (2PHF) mathematical modelling has been applied to propose a model for dilute 2PHF turbulent flows. Phase interaction terms with a clear physical meaning enter the equations, and the formalism provides some guidelines for the avoidance of closure assumptions or the rational approximation of these terms. Continuous phase averaged continuity, momentum, turbulent kinetic energy and turbulence dissipation rate equations have been rigorously and systematically obtained in a single step. These equations display a structure similar to that for single-phase flows. It is also assumed that dispersed phase dynamics is well described by a probability density function (pdf) equation, and Eulerian continuity, momentum and fluctuating kinetic energy equations for the dispersed phase are deduced. An extension of the standard k-ε turbulence model for the continuous phase is used. A gradient transport model is adopted for the dispersed phase fluctuating fluxes of momentum and kinetic energy at the non-colliding, large-inertia limit. This model is then used to predict the behaviour of three axisymmetric turbulent jets of air laden with solid particles varying in size and concentration. Qualitative and quantitative numerical predictions compare reasonably well with the three different sets of experimental results, studying the influence of particle size, loading ratio and flow confinement velocity.
陆宏伟; 陈亚珠; 卫青
2004-01-01
A probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, it is the first time that the PDF method is put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6-6.5 dimensional complex dynamical systems. It is found that the PDF is not symmetrically distributed when the time delay is small, while the PDF satisfies a Gaussian distribution when the time delay is big enough. A cluster effect mechanism is presented to explain this phenomenon. By studying the shape of the PDFs, it is clearly indicated that the role played by the time delay is more important than that of the embedding dimension in the reconstruction. Results have demonstrated that the PDF method represents a promising numerical approach for the observation of the reconstructed attractor structure and may provide more information and new diagnostic potential for the analyzed cardiac system.
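A minimal sketch of the two ingredients described above, time-delay reconstruction plus a histogram-based PDF of the phase points, on a hypothetical toy signal rather than RR-interval data:

```python
import numpy as np

# Sketch of the PDF analysis of a reconstructed attractor (toy
# quasi-periodic signal, not RR-interval data): embed a scalar series
# by time delay, then estimate the probability density of the phase
# points' distances from the attractor centroid.
def delay_embed(x, dim, tau):
    """Time-delay embedding: rows are [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.5 * np.sin(0.7 * t)          # toy "signal"
emb = delay_embed(x, dim=3, tau=25)

# normalized histogram = PDF estimate of the radial point distribution
r = np.linalg.norm(emb - emb.mean(axis=0), axis=1)
pdf, edges = np.histogram(r, bins=50, density=True)
widths = np.diff(edges)                         # pdf integrates to 1
```

The shape of such a histogram as a function of the delay tau is exactly the kind of diagnostic the abstract describes (asymmetric for small delays, Gaussian-like for large ones).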
Topa, M. E.; De Paola, F.; Giugni, M.; Kombe, W.; Touré, H.
2012-04-01
The dynamics of hydro-climatic processes can fluctuate over a wide range of temporal scales. Such fluctuations are often unpredictable for ecosystems, and adaptation to them represents the great challenge for the survival and the stability of species. An unsolved issue is how much these fluctuations of climatic variables at different temporal scales can influence the frequency and the intensity of extreme events, and how much these events can modify ecosystem life. It is by now widely accepted that an increase in the frequency and the intensity of extreme events will represent one of the strongest characteristics of global climatic change, with the greatest social and biotic implications (Porporato et al 2006). Recent field experiments (Gutshick and BassiriRad, 2003) and numerical analyses (Porporato et al 2004) have shown that extreme events can generate non-negligible consequences for organisms of water-limited ecosystems. The responses of species and ecosystems to hydro-climatic variations are therefore strongly interconnected with the probabilistic structure of these fluctuations. Generally, the nonlinear intermittent dynamics of a state variable z (a rainfall depth or the interarrival time between two storms) at short time scales (for example daily) is described by a probability density function (pdf) p(z|υ), where υ is the parameter of the distribution. If the parameter υ varies so that the external forcing fluctuates at a longer temporal scale, z reaches a new "local" equilibrium. When the temporal scale of the variation of υ is larger than that of z, the probability distribution of z can be obtained as an overlapping of the temporary equilibria ("superstatistics" approach), i.e.: p(z) = ∫ p(z|υ)·φ(υ)dυ (1) where p(z|υ) is the probability of z conditioned on υ, while φ(υ) is the pdf of υ (Beck, 2001; Benjamin and Cornell, 1970). The present work, carried out within FP7-ENV-2010 CLUVA (CLimate Change
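Equation (1) can be checked numerically on the textbook superstatistics example of an exponential conditional law with a gamma-distributed parameter, whose mixture integral has a closed heavy-tailed (Pareto-II/Lomax) form; the parameter values below are illustrative, not taken from this study:

```python
import numpy as np
from scipy import integrate, stats

# Numerical check of p(z) = ∫ p(z|v)·φ(v) dv for p(z|v) = v*exp(-v*z)
# (exponential interarrivals) with a gamma-distributed parameter v.
# The mixture is heavy-tailed, unlike any single exponential equilibrium:
# closed form p(z) = k*theta / (1 + theta*z)^(k+1).
k, theta = 2.0, 1.5                       # shape/scale of φ(v), illustrative
phi = stats.gamma(a=k, scale=theta).pdf

def p_mixture(z):
    """Left-hand side of (1) by direct quadrature over v."""
    val, _ = integrate.quad(lambda v: v * np.exp(-v * z) * phi(v), 0, np.inf)
    return val

def p_lomax(z):
    """Closed form of the same integral (Lomax/Pareto-II density)."""
    return k * theta / (1.0 + theta * z) ** (k + 1)
```

The agreement of the two functions illustrates the mechanism in the text: slow fluctuations of the parameter υ turn a light-tailed short-scale law into a heavy-tailed marginal, which is what matters for the statistics of extremes.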
Probability in quantum mechanics
J. G. Gilson
1982-01-01
By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.
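The classical side of this correspondence is easy to make concrete: for a harmonic oscillator the fraction of time spent near a position x is inversely proportional to the speed there, giving the density 1/(π√(A²−x²)), peaked at the turning points. A short numerical check (illustrative, not from the paper):

```python
import numpy as np

# "Probability density as time spent on orbit sections": sample the
# orbit x(t) = A*sin(w*t) uniformly in time and compare the histogram
# with the classical density p(x) = 1 / (pi*sqrt(A^2 - x^2)).
A, w = 1.0, 2.0
t = np.linspace(0.0, 1000 * 2 * np.pi / w, 2_000_000, endpoint=False)
x = A * np.sin(w * t)

bins = np.linspace(-0.95 * A, 0.95 * A, 40)    # stay clear of the poles
hist, edges = np.histogram(x, bins=bins, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# exact time-average density, renormalized to the histogram window
p = 1.0 / (np.pi * np.sqrt(A**2 - centers**2))
p /= (2.0 / np.pi) * np.arcsin(0.95)
```

The histogram tracks p to a few percent and is largest in the outermost bins, where the particle moves slowest — the classical analogue of the orbit-time reading of the Schrödinger density discussed above.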
On the inclusive annihilation of a polarized e+e- pair with two observed hadrons
A general consideration of the inclusive annihilation of a polarized e+e- pair with two observed hadrons in the final state (e+e- → h1h2X) is carried out. The annihilation cross section is expressed in terms of five structure functions describing the transition γ* → h1h2X. The partial widths of the corresponding decay of a virtual photon for different polarizations of the photon are also introduced, and the annihilation cross section is written through these widths. The density matrix of the virtual photon and its polarization multipole moments are given as well.
Effect of positron-atom interactions on the annihilation gamma spectra of molecules
Green, D G; Wang, F; Gribakin, G F; Surko, C M
2012-01-01
Calculations of gamma spectra for positron annihilation on a selection of molecules, including methane and its fluoro-substitutes, ethane, propane, butane and benzene are presented. The annihilation gamma spectra characterise the momentum distribution of the electron-positron pair at the instant of annihilation. The contribution to the gamma spectra from individual molecular orbitals is obtained from electron momentum densities calculated using modern computational quantum chemistry density functional theory tools. The calculation, in its simplest form, effectively treats the low-energy (thermalised, room-temperature) positron as a plane wave and gives annihilation gamma spectra that are about 40% broader than experiment, although the main chemical trends are reproduced. We show that this effective "narrowing" of the experimental spectra is due to the action of the molecular potential on the positron, chiefly, due to the positron repulsion from the nuclei. It leads to a suppression of the contribution of smal...
New Limits on Thermally annihilating Dark Matter from Neutrino Telescopes
Lopes, José
2016-01-01
We used a consistent and robust solar model to obtain upper limits placed by neutrino telescopes, such as IceCube and Super-Kamiokande, on the Dark Matter-nucleon scattering cross-section, for a general model of Dark Matter with a velocity-dependent (p-wave) thermally averaged cross-section. In this picture, the Boltzmann equation for the Dark Matter abundance is numerically solved satisfying the Dark Matter density measured from the Cosmic Microwave Background (CMB). We show that for lower cross-sections and higher masses, the Dark Matter annihilation rate drops sharply, resulting in upper bounds on the scattering cross-section one order of magnitude above those derived from a velocity-independent (s-wave) annihilation cross-section. Our results show that upper limits on the scattering cross-section obtained from Dark Matter annihilating in the Sun are sensitive to the uncertainty in current standard solar models, varying by at most 20% depending on the annihilation channel.
Antiproton annihilation in quantum chromodynamics
Antiproton annihilation has a number of important advantages as a probe of QCD in the low-energy domain, being an exclusive reaction in which complete annihilation of the valence quarks occurs. There are a number of exclusive and inclusive p̄ reactions in the intermediate momentum transfer domain which provide useful constraints on hadron wavefunctions or test novel features of QCD involving both perturbative and nonperturbative dynamics. Inclusive reactions involving antiprotons have the advantage that the parton distributions are well understood. In these lectures, I will particularly focus on lepton pair production p̄A → ℓℓ̄X as a means to understand specific nuclear features in QCD, including collision broadening and the breakdown of the QCD "target length condition". Thus studies of low to moderate energy antiproton reactions with laboratory energies under 10 GeV could give further insights into the full structure of QCD. 112 refs., 40 figs
Dark matter annihilation in the gravitational field of a black hole
Baushev, A. N.
2008-01-01
In this paper we consider dark matter particle annihilation in the gravitational field of black holes. We obtain the exact distribution function of the infalling dark matter particles, and compute the resulting flux and spectra of gamma rays coming from these objects. It is shown that the dark matter density significantly increases near a black hole. The particle collision energy becomes very high, affecting the relative cross-sections of various annihilation channels. We also discuss possible experimental ...
A Comprehensive Search for Dark Matter Annihilation in Dwarf Galaxies
Geringer-Sameth, Alex; Walker, Matthew G
2014-01-01
We present a new formalism designed to discover dark matter annihilation occurring in the Milky Way's dwarf galaxies. The statistical framework extracts all available information in the data by simultaneously combining observations of all the dwarf galaxies and incorporating the impact of particle physics properties, the distribution of dark matter in the dwarfs, and the detector response. The method performs maximally powerful frequentist searches and produces confidence limits on particle physics parameters. Probability distributions of test statistics under various hypotheses are constructed exactly, without relying on large sample approximations. The derived limits have proper coverage by construction and claims of detection are not biased by imperfect background modeling. We implement this formalism using data from the Fermi Gamma-ray Space Telescope to search for an annihilation signal in the complete sample of Milky Way dwarfs whose dark matter distributions can be reliably determined. We find that the...
Gudder, Stanley P
2014-01-01
Quantum probability is a subtle blend of quantum mechanics and classical probability theory. Its important ideas can be traced to the pioneering work of Richard Feynman in his path integral formalism.Only recently have the concept and ideas of quantum probability been presented in a rigorous axiomatic framework, and this book provides a coherent and comprehensive exposition of this approach. It gives a unified treatment of operational statistics, generalized measure theory and the path integral formalism that can only be found in scattered research articles.The first two chapters survey the ne
Asmussen, Søren; Albrecher, Hansjörg
The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Levy processes, Gerber-Shiu functions and dependence.
Shiryaev, Albert N
2016-01-01
This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.
Rubaszek, A. [Polska Akademia Nauk, Wroclaw (Poland). Inst. Niskich Temperatur i Badan Strukturalnych; Szotek, Z.; Temmerman, W.M. [Daresbury Lab., Warrington (United Kingdom)
2001-07-01
Several methods to describe the electron-positron (e-p) correlation effects are used in calculations of positron annihilation characteristics in solids. The weighted density approximation (WDA), giving rise to the non-local, state-selective e-p correlation functions, is applied to calculate positron annihilation rates and e-p momentum densities in a variety of metals and silicon. The WDA results are compared to the results of other methods such as the independent particle model, local density approximation, generalised gradient approximation, and also to experiments. The importance of non-locality and state-dependence of the e-p correlation functions is discussed. (orig.)
Krzemień, K.; Kansy, J.
2008-05-01
Positron annihilation lifetime spectroscopy was used to study correlations between positron annihilation parameters and macroscopic properties in two kinds of polymers from the elastomer group. Two kinds of material were investigated: three samples of ethylene-octene copolymers (commercial name Engage) of different densities, and six samples of polybutylene terephthalate-polyether glycol copolymers (Hytrel) having different densities. A correlation between the intensity of the ortho-positronium component and the density (d) of the samples was observed for both kinds of material. From the ortho-positronium pick-off lifetime the mean radii (R) of free volume centers were determined. A good linear correlation between R and d was found.
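The conversion from o-Ps pick-off lifetime to a mean free-volume radius is conventionally done with the Tao-Eldrup model. A hedged Python sketch, assuming the commonly quoted parameter values (electron-layer thickness ΔR = 0.166 nm, spin-averaged rate 1/0.5 ns); the sample lifetimes below are illustrative, not the paper's data:

```python
import math

DELTA_R = 0.166  # nm, empirical electron-layer thickness in the Tao-Eldrup model

def tau_from_radius(r_nm):
    """o-Ps pick-off lifetime (ns) for a spherical hole of radius r (Tao-Eldrup)."""
    x = r_nm / (r_nm + DELTA_R)
    return 0.5 / (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))

def radius_from_tau(tau_ns, lo=1e-4, hi=2.0, tol=1e-9):
    """Invert tau_from_radius by bisection (tau increases monotonically with r)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tau_from_radius(mid) < tau_ns:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

for tau in (1.5, 1.8, 2.2):  # illustrative o-Ps lifetimes in ns
    print(f"tau = {tau} ns  ->  R = {radius_from_tau(tau):.3f} nm")
```

Bisection is sufficient here because the Tao-Eldrup lifetime is strictly increasing in the hole radius over the physically relevant range.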
Ultra-high energy cosmic rays: The annihilation of super-heavy relics
We investigate the possibility that ultra-high energy cosmic rays (UHECRs) originate from the annihilation of relic superheavy (SH) dark matter in the Galactic halo. In order to fit the data on UHECRs, a cross section of σ_A v ≳ 10^{-26} cm^2 (M_X/10^{12} GeV)^{3/2} is required if the SH dark matter follows a Navarro-Frenk-White (NFW) density profile. This would require extremely large-l contributions to the annihilation cross section. An interesting finding of our calculation is that the annihilation in sub-galactic clumps of dark matter dominates over the annihilations in the smooth dark matter halo, thus implying much smaller values of the cross section needed to explain the observed fluxes of UHECRs
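The clump-dominance argument follows from the fact that the annihilation rate scales with the density squared, so substructure with the same mass but higher local density wins over the smooth halo; schematically (our notation, not the paper's):

```latex
% Annihilation rate from a halo with density profile rho(r):
\[
  \Gamma_{\mathrm{ann}} \;=\; \frac{\langle\sigma_A v\rangle}{2M_X^{2}}
  \int \rho^{2}(r)\, 4\pi r^{2}\, dr .
\]
% Since the rate depends on <rho^2> rather than <rho>^2, clumpiness
% enhances it by a boost factor B:
\[
  \langle \rho^{2}\rangle \;=\; (1+B)\,\langle\rho\rangle^{2},
  \qquad B > 0 \ \text{for any inhomogeneous halo},
\]
% so a clumpy halo reproduces a given UHECR flux with a smaller
% annihilation cross section than a smooth one.
```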
Dark Stars and Boosted Dark Matter Annihilation Rates
Ilie, Cosmin; Spolyar, Douglas
2010-01-01
Dark Stars (DS) may constitute the first phase of stellar evolution, powered by dark matter (DM) annihilation. We will investigate here the properties of DS assuming the DM particle has the required properties to explain the excess positron and electron signals in the cosmic rays detected by the PAMELA and FERMI satellites. Any possible DM interpretation of these signals requires exotic DM candidates, with annihilation cross sections a few orders of magnitude higher than the canonical value required for correct thermal relic abundance for Weakly Interacting Dark Matter candidates; additionally in most models the annihilation must be preferentially to leptons. Secondly, we study the dependence of DS properties on the concentration parameter of the initial DM density profile of the halos where the first stars are formed. We restrict our study to the DM in the star due to simple (vs. extended) adiabatic contraction and minimal (vs. extended) capture; this simple study is sufficient to illustrate depend...
High nuclear temperatures by antimatter-matter annihilation
Gibbs, W.R.; Strottman, D.
1985-01-01
It is suggested that the quark-gluon phase be created through the use of antiproton or antideuteron beams. The first advantage to this method, using higher energy antiprotons than 1.5 GeV/c, is that the higher momenta antiprotons penetrate more deeply so that mesons produced are more nearly contained within the nucleus. Another advantage is that the annihilation products are very forward-peaked and tend to form a beam of mesons so that the energy density does not disperse very rapidly. Calculations were performed using the intranuclear cascade to try to follow the process of annihilation in some detail. The intranuclear cascade type calculation method is compared to the hydrodynamic approach. 8 refs., 8 figs. (LEW)
Dark matter distribution and annihilation at the Galactic center
Dokuchaev, V. I.; Eroshenko, Yu N.
2016-02-01
We describe a promising method for measuring the total dark matter mass near the supermassive black hole at the Galactic center, based on observations of the nonrelativistic precession of the orbits of fast S0 stars. An analytical expression for the precession angle has been obtained under the assumption of a power-law profile for the dark matter density. The anticipated weighing of the dark matter at the Galactic center would provide strong constraints on the annihilation signal from the neutralino dark matter particle candidate. The mass of dark matter necessary to explain the observed excess of gamma radiation from the annihilation of dark matter particles has been calculated with allowance for the Sommerfeld effect.
Gas Permeations Studied by Positron Annihilation
Yuan, Jen-Pwu; Cao, Huimin; Jean, X.; Yang, Y. C.
1997-03-01
The hole volumes and fractions of PC and PET polymers are measured by positron annihilation lifetime spectroscopy. Direct correlations between the measured hole properties and gas permeabilities are observed. Applications of positron annihilation spectroscopy to study gas transport and separation of polymeric materials will be discussed.
Positron Annihilation in the Bipositronium Ps2
Bailey, David H.; Frolov, Alexei M.
2005-07-01
The electron-positron pair annihilation in the bipositronium Ps2 is considered. In particular, the zero-, one-, two- and three-photon annihilation rates are determined to high accuracy. The corresponding analytical expressions are also presented. In addition, a large number of bound-state properties have been determined for this system.
Abazajian, Kevork N.; Keeley, Ryan E.
2016-04-01
We incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties in dark matter annihilation models for the Galactic Center Extended gamma-ray excess (GCE) detected by the Fermi Gamma Ray Space Telescope. The range of particle annihilation rate and masses expand when including these unknowns. However, two of the most precise empirical determinations of the Milky Way halo's local density and density profile leave the signal region to be in considerable tension with dark matter annihilation searches from combined dwarf galaxy analyses for single-channel dark matter annihilation models. The GCE and dwarf tension can be alleviated if: one, the halo is very highly concentrated or strongly contracted; two, the dark matter annihilation signal differentiates between dwarfs and the GC; or, three, local stellar density measures are found to be significantly lower, like that from recent stellar counts, increasing the local dark matter density.
Weak annihilation cusp inside the dark matter spike about a black hole
Shapiro, Stuart L.; Shelton, Jessie
2016-06-01
We reinvestigate the effect of annihilations on the distribution of collisionless dark matter (DM) in a spherical density spike around a massive black hole. We first construct a very simple, pedagogic, analytic model for an isotropic phase space distribution function that accounts for annihilation and reproduces the "weak cusp" found by Vasiliev for DM deep within the spike and away from its boundaries. The DM density in the cusp varies as r^{-1/2} for s-wave annihilation, where r is the distance from the central black hole, and is not a flat "plateau" profile. We then extend this model by incorporating a loss cone that accounts for the capture of DM particles by the hole. The loss cone is implemented by a boundary condition that removes capture orbits, resulting in an anisotropic distribution function. Finally, we evolve an initial spike distribution function by integrating the Boltzmann equation to show how the weak cusp grows and its density decreases with time. We treat two cases, one for s-wave and the other for p-wave DM annihilation, adopting parameters characteristic of the Milky Way nuclear core and typical WIMP models for DM. The cusp density profile for p-wave annihilation is weaker, varying like ~r^{-0.34}, but is still not a flat plateau.
New techniques of positron annihilation
Studies on new techniques of positron annihilation and its application to various fields are presented. First, the production of slow positrons and their characteristic features are described. Slow positrons can be obtained from radioisotopes by using a positron moderator, by proton beam bombardment of a boron target, and by pair production using an electron linear accelerator. Brightness enhancement of the slow positron beam is studied. A polarized positron beam can be used for the study of the momentum distribution of electrons in ferromagnetic substances. Production of polarized positrons and measurements of polarization are discussed. Various phases of interaction between slow positrons and atoms (or molecules) are described. A comparative study of electron scavenging effects on luminescence and on positronium formation in cyclohexane is presented. Positron annihilation phenomena are also applicable to surface studies; microscopic information on the surface of porous materials may be obtained, and slow positrons are likewise useful here. Production and application of slow muons (positive and negative) are presented in this report. (Kato, T.)
Skyrmion creation and annihilation by spin waves
Liu, Yizhou, E-mail: yliu062@ucr.edu; Yin, Gen; Lake, Roger K., E-mail: rlake@ece.ucr.edu [Department of Electrical and Computer Engineering, University of California, Riverside, California 92521 (United States); Zang, Jiadong [Department of Physics and Material Science Program, University of New Hampshire, Durham, New Hampshire 03824 (United States); Shi, Jing [Department of Physics and Astronomy, University of California, Riverside, California 92521 (United States)
2015-10-12
Single skyrmion creation and annihilation by spin waves in a crossbar geometry are theoretically analyzed. A critical spin-wave frequency is required both for the creation and the annihilation of a skyrmion. The minimum frequencies for creation and annihilation are similar, but the optimum frequency for creation is below the critical frequency for skyrmion annihilation. If a skyrmion already exists in the cross bar region, a spin wave below the critical frequency causes the skyrmion to circulate within the central region. A heat assisted creation process reduces the spin-wave frequency and amplitude required for creating a skyrmion. The effective field resulting from the Dzyaloshinskii-Moriya interaction and the emergent field of the skyrmion acting on the spin wave drive the creation and annihilation processes.
Fermionic Semi-Annihilating Dark Matter
Cai, Yi
2015-01-01
Semi-annihilation is a generic feature of dark matter theories with symmetries larger than Z_2. We investigate two examples with multi-component dark sectors comprised of an SU(2)_L singlet or triplet fermion besides a scalar singlet. These are respectively the minimal fermionic semi-annihilating model, and the minimal case for a gauge-charged fermion. We study the relevant dark matter phenomenology, including the interplay of semi-annihilation and the Sommerfeld effect. We demonstrate that semi-annihilation in the singlet model can explain the gamma ray excess from the galactic center. For the triplet model we scan the parameter space, and explore how signals and constraints are modified by semi-annihilation. We find that the entire region where the model comprises all the observed dark matter is accessible to current and planned direct and indirect searches.
Nature of chemical bond through positron annihilation
Positron annihilation is an important alternative to Compton scattering for determination of electron momentum distribution. The possibility of studying the nature of chemical bond by positron annihilation technique is reviewed in this paper. General concepts connected with momentum space and chemical bond have been outlined. Estimation of positron wavefunction at carbon and hydrogen sites and the calculation of electron momentum distribution of C-H and C-C bonds are discussed. The annihilation with sigma electrons broadens the angular correlation curve while the annihilation with π electrons narrows the curve. The most significant part of this paper is the investigation of participation of d-orbital of sulphur in chemical bonding. Whether or not ligand perturbation is necessary for d-orbital contraction and consequent participation in bonding is controversial till now. A study of angular correlation of positron annihilation radiation on organic sulphides and sulphones is a direct evidence to conclude that ligand perturbation is necessary. (author)
Farnoosh Basaligheh
2015-12-01
One of the conventional methods for temporary support of tunnels is to use steel sets with shotcrete. The nature of a temporary support system demands quick installation of its structures. As a result, the spacing between steel sets is not a fixed amount and can be considered a random variable. Hence, in the reliability analysis of these types of structures, the selection of an appropriate probability distribution function for the spacing of steel sets is essential. In the present paper, the distances between steel sets are collected from an under-construction tunnel and the collected data are used to suggest a proper Probability Distribution Function (PDF) for the spacing of steel sets. The tunnel has two different excavation sections. In this regard, different distribution functions were investigated and three common goodness-of-fit tests were used for evaluation of each function for each excavation section. Results from all three methods indicate that the Wakeby distribution function can be suggested as the proper PDF for the spacing between the steel sets. It is also noted that, although the probability distribution function for the two different tunnel sections is the same, the parameters of the PDF for the individual sections differ from each other.
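The fit-and-test workflow described above (fit a candidate distribution, then score it with a goodness-of-fit statistic) can be sketched in a few lines. The Wakeby distribution has no convenient closed form, so this hedged numpy-only sketch uses a two-parameter Weibull as a stand-in: the shape is found by grid search (for a fixed shape the scale MLE is closed-form) and the fit is scored with the Kolmogorov-Smirnov distance. The data are synthetic, not the tunnel measurements:

```python
import numpy as np

def weibull_cdf(x, k, lam):
    return 1.0 - np.exp(-(x / lam) ** k)

def fit_weibull(data):
    """Minimum-KS-distance fit: grid-search shape k; for fixed k the
    conditional scale MLE is (mean(x^k))^(1/k)."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    best = None
    for k in np.linspace(0.2, 8.0, 400):
        lam = np.mean(x ** k) ** (1.0 / k)
        F = weibull_cdf(x, k, lam)
        i = np.arange(1, n + 1)
        # two-sided KS statistic against the empirical CDF
        D = max(np.max(F - (i - 1) / n), np.max(i / n - F))
        if best is None or D < best[2]:
            best = (k, lam, D)
    return best  # (shape, scale, KS distance)

rng = np.random.default_rng(0)
sample = 3.0 * rng.weibull(2.0, size=500)  # true shape 2.0, scale 3.0
k, lam, D = fit_weibull(sample)
print(f"shape={k:.2f} scale={lam:.2f} KS D={D:.3f}")
```

With 500 points the recovered parameters land close to the generating values, and D can be compared against the usual KS critical value 1.36/sqrt(n).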
Momentum density distributions determined by the analysis of positron annihilation radiation in embedded nano Cu clusters in iron were studied by using a first-principles method. A momentum smearing effect originated from the positron localization in the embedded clusters is observed. The smearing effect is found to scale linearly with the cube root of the cluster's volume, indicating that the momentum density techniques of positron annihilation can be employed to explore the atomic-scaled microscopic structures of a variety of impurity aggregations in materials.
Positron annihilation in transparent ceramics
Husband, P.; Bartošová, I.; Slugeň, V.; Selim, F. A.
2016-01-01
Transparent ceramics are emerging as excellent candidates for many photonic applications including lasers, scintillation and illumination. However, achieving perfect transparency is essential in these applications and requires high-technology processing and a complete understanding of the ceramic microstructure and its effect on the optical properties. Positron annihilation spectroscopy (PAS) is the perfect tool to study porosity and defects. It has been applied to investigate many ceramic structures, and the transparent ceramics field may be greatly advanced by applying PAS. In this work, positron lifetime (PLT) measurements were carried out in parallel with optical studies on yttrium aluminum garnet transparent ceramics in order to gain an understanding of their structure at the atomic level and its effect on transparency and light scattering. The study confirmed that PAS can provide useful information on their microstructure and guide the technology of manufacturing and advancing transparent ceramics.
Giebink, Noel C.
2015-10-01
Exciton annihilation processes impact both the lifetime and efficiency roll-off of organic light emitting diodes (OLEDs), however it is notoriously difficult to identify the dominant mode of annihilation in operating devices (exciton-exciton vs. exciton-charge carrier) and subsequently to disentangle its magnitude from competing roll-off processes such as charge imbalance. Here, we introduce a simple analytical method to directly identify and extract OLED annihilation rates from standard light-current-voltage (LIV) measurement data. The foundation of this approach lies in a frequency domain EQE analysis and is most easily understood in analogy to impedance spectroscopy, where in this case both the current (J) and electroluminescence intensity (L) are measured using a lock-in amplifier at different harmonics of the sinusoidal dither superimposed on the DC device bias. In the presence of annihilation, the relationship between recombination current and light output (proportional to exciton density) becomes nonlinear, thereby mixing the different EQE harmonics in a manner that depends uniquely on the type and magnitude of annihilation. We derive simple expressions to extract different annihilation rate coefficients and apply this technique to a variety of OLEDs. For example, in devices dominated by triplet-triplet annihilation, the annihilation rate coefficient, K_TT, is obtained directly from the linear slope that results from plotting EQE_DC-EQE_1ω versus L_DC (2EQE_1ω-EQE_DC). We go on to show that, in certain cases it is sufficient to calculate EQE_1ω directly from the slope of the DC light versus current curve [i.e. via (dL_DC)/(dJ_DC )], thus enabling this analysis to be conducted solely from common LIV measurement data.
Positron annihilation lifetime study of oxide dispersion strengthened steels
Krsjak, V.; Szaraz, Z.; Hähner, P.
2012-09-01
A comparative positron annihilation lifetime study has been performed on various commercial ferritic and ferritic/martensitic oxide dispersion strengthened (ODS) steels. Both as-extruded and recrystallized materials were investigated. In the materials with recrystallized coarse-grained microstructures, only the positron trapping at small vacancy clusters and yttria nanofeatures was observed. Materials which had not undergone recrystallization treatment clearly showed additional positron trapping which is associated with dislocations. Dislocation densities were calculated from a two-component decomposition of the positron lifetime spectra by assuming the first component to be a superposition of the bulk controlled annihilation rate and the dislocation controlled trapping rate. The second component (which translates into lifetimes of 240-260 ps) was found to be well separated in all those ODS materials. This paper presents the potentialities and limitations of the positron annihilation lifetime spectroscopy, and discusses the results of the experimental determination of the defect concentrations and sensitivity of this technique to the material degradation due to thermally induced precipitation of chromium-rich α' phases.
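The two-component decomposition described here amounts to a nonlinear fit of a sum of two exponential decays. A hedged sketch with `scipy.optimize.curve_fit` on synthetic data; the intensities and lifetimes are illustrative placeholders (chosen near the 240-260 ps second component mentioned above), not the ODS measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_component(t, i1, tau1, tau2):
    """Normalized two-component lifetime model; I2 is fixed by I1 + I2 = 1."""
    i2 = 1.0 - i1
    return (i1 / tau1) * np.exp(-t / tau1) + (i2 / tau2) * np.exp(-t / tau2)

t = np.linspace(0.0, 2.0, 400)                    # ns
true = dict(i1=0.7, tau1=0.11, tau2=0.25)         # bulk-like and trap-like components
rng = np.random.default_rng(1)
y = two_component(t, **true) + rng.normal(0.0, 0.01, t.size)  # noisy spectrum

popt, _ = curve_fit(two_component, t, y, p0=(0.5, 0.1, 0.3))
i1, tau1, tau2 = popt
print(f"I1={i1:.2f} tau1={tau1*1000:.0f} ps tau2={tau2*1000:.0f} ps")
```

In practice the model would also be convolved with the instrument resolution function; that step is omitted here to keep the decomposition itself visible.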
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single-channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage-clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By evoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.
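The inversion strategy, tune Markov rates so a deterministic model reproduces the observed probabilities, can be sketched in miniature for a two-state (closed/open) channel, where the open probability obeys dp/dt = a(1-p) - bp and the "data" are generated from known rates. All rates and grid ranges here are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

def open_prob(t, a, b):
    """Open probability of a two-state channel starting closed:
    solution of dp/dt = a*(1-p) - b*p with p(0) = 0."""
    p_inf = a / (a + b)
    return p_inf * (1.0 - np.exp(-(a + b) * t))

t = np.linspace(0.0, 1.0, 200)
data = open_prob(t, a=4.0, b=6.0)  # pseudo-experimental trace from known rates

def cost(a, b):
    """Squared mismatch between model and pseudo-experimental data."""
    return np.sum((open_prob(t, a, b) - data) ** 2)

# crude grid search over the two rates (a real application would use a
# gradient-based or simplex optimizer)
grid = np.linspace(0.5, 10.0, 96)
a_hat, b_hat = min(((a, b) for a in grid for b in grid), key=lambda ab: cost(*ab))
print(f"recovered a={a_hat:.2f}, b={b_hat:.2f}")
```

Identifiability shows up directly in this picture: if two rate pairs give (numerically) the same cost, the data cannot distinguish them.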
Ekelund, Flemming; Christensen, Søren; Rønn, Regin; Buhl, E.; Jacobsen, C.S.
1999-01-01
An automated modification of the most-probable-number (MPN) technique has been developed for enumeration of phagotrophic protozoa. The method is based on detection of prey depletion in microtitre plates rather than on the presence of protozoa. A transconjugant Pseudomonas fluorescens DR54 labelled with a luxAB gene cassette was constructed and used as growth medium for the protozoa in the microtitre plates. The transconjugant produced high amounts of luciferase, which was stable and allowed detection for at least 8 weeks. Dilution series of protozoan cultures and soil suspensions were...
Ratio of secondary baryon and meson yields in e+e- annihilation and quark combinatorics
The possibility of applying quark combinatorial counting to test the probability of a separated quark passing into baryons or mesons when it joins the sea of quark-antiquark pairs is shown. It is shown that the p/π+ yield ratio, calculated in the framework of quark combinatorics, is consistent with the data on antiproton and pion production in e+e- annihilation. In the authors' opinion, quark combinatorial counting predicts a large cross section for Ω- hyperon production, and to test quark combinatorics a study of baryon yield ratios in e+e- annihilation is necessary
Nonabelian dark matter with resonant annihilation
We construct a model based on an extra gauge symmetry, SU(2)_X × U(1)_{B−L}, which can provide gauge bosons to serve as weakly-interacting massive particle dark matter. The stability of the dark matter is naturally guaranteed by a discrete Z_2 symmetry that is a subgroup of SU(2)_X. The dark matter interacts with standard model fermions by exchanging gauge bosons which are linear combinations of the SU(2)_X × U(1)_{B−L} gauge bosons. With the appropriate choice of representation for the new scalar multiplet whose vacuum expectation value spontaneously breaks the SU(2)_X symmetry, the relation between the new gauge boson masses can naturally lead to resonant pair annihilation of the dark matter. After exploring the parameter space of the new gauge couplings subject to constraints from collider data and the observed relic density, we use the results to evaluate the cross section of the dark matter scattering off nucleons and compare it with data from the latest direct detection experiments. We find allowed parameter regions that can be probed by future direct searches for dark matter and LHC searches for new particles
Positron annihilation in solid and liquid Ni
New techniques have been developed for the study of metals via positron annihilation which provide for in-situ melting of the samples and subsequent measurements via Doppler broadening of positron-annihilation radiation. Here we report these methods currently in use at our laboratory: ion implantation of 58Co and the use of Al2O3 crucibles for in-situ melting, followed by decomposition of the Doppler-broadened spectrum into a parabolic and a Gaussian component. Our earliest results obtained for pure Ni in the polycrystalline solid and in the liquid state are compared. An interesting similarity is reported for the distributions of the high-momentum (Gaussian) component for positrons annihilating in vacancies at high temperatures and those annihilating in liquid Ni
The promise of e+e- annihilation as an ideal laboratory to test Quantum Chromodynamics (QCD) has been the dominating theme in elementary particle physics during the last several years. An attempt is made to partially survey the subject in the deep perturbative region of e+e- annihilation, where theoretical ambiguities are minimal. Topics discussed include a review of the renormalization group methods relevant for e+e- annihilation, the total hadronic cross section, jets and large-p_T phenomena, non-perturbative quark and gluon fragmentation effects, and analysis of the jet distributions measured at DORIS, SPEAR and PETRA. My hope is to review realistic tests of QCD in e+e- annihilation - as opposed to the ultimate tests, which abound in the literature. (orig.)
Compton Scattering, Pair Annihilation and Pair Production in a Plasma
Krishan, Vinod
1999-01-01
The square of the four-momentum of a photon in vacuum is zero. However, in an unmagnetized plasma it is equal to the square of the plasma frequency. Further, the electron-photon coupling vertex is modified in a plasma to include the effect of the medium. I calculate the cross sections of three processes in a plasma: Compton scattering, electron-positron pair annihilation, and pair production. At high plasma densities, the cross sections are found to change significantly. Such high p...
Positron annihilation with core and valence electrons
Green, D G
2015-01-01
$\gamma$-ray spectra for positron annihilation with the core and valence electrons of the noble gas atoms Ar, Kr and Xe are calculated within the framework of diagrammatic many-body theory. The effect of positron-atom and short-range positron-electron correlations on the annihilation process is examined in detail. Short-range correlations, which are described through non-local corrections to the vertex of the annihilation amplitude, are found to significantly enhance the spectra for annihilation on the core orbitals. For Ar, Kr and Xe, the core contributions to the annihilation rate are found to be 0.55%, 1.5% and 2.2% respectively, their small values reflecting the difficulty for the positron to probe distances close to the nucleus. Importantly however, the core subshells have a broad momentum distribution and markedly contribute to the annihilation spectra at Doppler energy shifts ≳ 3 keV, and even dominate the spectra of Kr and Xe at shifts ≳ 5 keV. Their inclusion brings the theoretical ...
The dark matter annihilation boost from low-temperature reheating
Erickcek, Adrienne L.
2015-11-01
The evolution of the Universe between inflation and the onset of big bang nucleosynthesis is difficult to probe and largely unconstrained. This ignorance profoundly limits our understanding of dark matter: we cannot calculate its thermal relic abundance without knowing when the Universe became radiation dominated. Fortunately, small-scale density perturbations provide a probe of the early Universe that could break this degeneracy. If dark matter is a thermal relic, density perturbations that enter the horizon during an early matter-dominated era grow linearly with the scale factor prior to reheating. The resulting abundance of substructure boosts the annihilation rate by several orders of magnitude, which can compensate for the smaller annihilation cross sections that are required to generate the observed dark matter density in these scenarios. In particular, thermal relics with masses less than a TeV that thermally and kinetically decouple prior to reheating may already be ruled out by Fermi-LAT observations of dwarf spheroidal galaxies. Although these constraints are subject to uncertainties regarding the internal structure of the microhalos that form from the enhanced perturbations, they open up the possibility of using gamma-ray observations to learn about the reheating of the Universe.
Collision Probability Analysis
Hansen, Peter Friis; Pedersen, Preben Terndrup
1998-01-01
It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look-out, etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds ... probability, i.e. a study of the navigator's role in resolving critical situations; a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of the energy released for crushing of structures, giving a...
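The two-step structure of this kind of analysis (a geometric number of collision candidates multiplied by a causation probability) can be sketched as follows. All numerical values are invented placeholders, not the Dextra route data:

```python
# Two-step collision probability estimate:
#   N_geom : expected number of geometric collision candidates per year,
#            i.e. ships on collision course if no evasive action is taken
#   P_c    : causation probability, the chance the navigators fail to
#            resolve a critical situation
# Annual collision probability ~ N_geom * P_c (valid while the product is small).

def annual_collision_probability(encounters_per_year, geometric_fraction, causation_factor):
    """All three inputs are illustrative modelling quantities, not measured data."""
    n_geom = encounters_per_year * geometric_fraction
    return n_geom * causation_factor

# hypothetical numbers: 5000 crossings/year, 1% on collision course, P_c = 2e-4
p = annual_collision_probability(5000, 0.01, 2.0e-4)
print(f"estimated annual collision probability: {p:.4f}")
```

The split matters because the two factors are estimated from different sources: traffic geometry gives N_geom, while human-factors analysis gives P_c.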
Joshua Olusegun Okeniyi; Idemudia Joshua Ambrose; Stanley Okechukwu Okpala; Oluwafemi Michael Omoniyi; Isaac Oluwaseun Oladele; Cleophas Akintoye Loto; Patricia Abimbola Idowu Popoola
2014-06-01
In this study, corrosion test-data of steel-rebar in concrete were subjected to fittings of the Normal, the Gumbel and the Weibull probability distribution functions. This was done to investigate the suitability of the fitted test-data, by these distributions, for modelling the effectiveness of C6H15NO3, triethanolamine (TEA), admixtures on the corrosion of steel-rebar in concrete in NaCl and in H2SO4 test-media. For this, six different concentrations of TEA were admixed in replicates of steel-reinforced concrete samples which were immersed in the saline/marine and the microbial/industrial simulating test-environments for seventy-five days. From these, distribution fittings of the non-destructive electrochemical measurements were subjected to the Kolmogorov–Smirnov goodness-of-fit statistics and to analyses of variance modelling for studying test-data compatibility with the fittings and testing significance. Although all fittings of test-data followed similar trends of significance testing, the fittings of the corrosion-rate test-data followed the Weibull more closely than the Normal and the Gumbel distribution fittings, thus supporting use of the Weibull fittings for modelling effectiveness. The effectiveness models on rebar corrosion, based on these, identified 0.083% TEA with optimal inhibition efficiency, $\eta =$ 72.17 ± 10.68%, in the NaCl medium, while 0.667% TEA was the only admixture with positive effectiveness, $\eta =$ 56.45 ± 15.85%, in the H2SO4 medium. These results bear implications for the concentrations of TEA needed for effective corrosion protection of concrete steel-rebar in saline/marine and in industrial/microbial environments.
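The inhibition efficiency η reported above is conventionally computed from the corrosion rates of inhibited versus control samples. A minimal sketch; the corrosion-rate numbers are made up for illustration, not the study's measurements:

```python
def inhibition_efficiency(cr_control, cr_inhibited):
    """eta (%) = (CR_control - CR_inhibited) / CR_control * 100."""
    return (cr_control - cr_inhibited) / cr_control * 100.0

# hypothetical corrosion rates (e.g. in mm/yr) for a control sample
# and a TEA-admixed sample
print(f"eta = {inhibition_efficiency(2.0, 0.5):.2f}%")
```

A positive η means the admixture slowed corrosion relative to the control; a negative value, as seen for most TEA concentrations in the H2SO4 medium, means it accelerated it.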