WorldWideScience

Sample records for energy probability function

  1. Psychophysics of the probability weighting function

    Science.gov (United States)

    Takahashi, Taiki

    2011-03-01

A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics have widely utilized probability weighting functions, the psychophysical foundations of these functions have remained unknown. Notably, the behavioral economist Prelec (1998) [4] axiomatically derived the probability weighting function w(p) = exp(-(-ln p)^α) (0 < α < 1; w(0) = 0, w(1/e) = 1/e, w(1) = 1), which has been studied extensively in behavioral neuroeconomics. The present study utilizes psychophysical theory to derive Prelec's probability weighting function from psychophysical laws of perceived waiting time in probabilistic choices. The relations between the parameters of the probability weighting function and the probability discounting function in behavioral psychology are also derived. Future directions for applying this psychophysical theory of the probability weighting function in econophysics and neuroeconomics are discussed.
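Prelec's weighting function is simple to compute directly. A minimal Python sketch (the default value α = 0.65 is an illustrative assumption, not taken from the paper):

```python
import math

def prelec_w(p: float, alpha: float = 0.65) -> float:
    """Prelec's one-parameter probability weighting function
    w(p) = exp(-(-ln p)**alpha) for 0 < alpha < 1, with the
    fixed points w(0) = 0, w(1/e) = 1/e and w(1) = 1."""
    if p <= 0.0:
        return 0.0
    return math.exp(-((-math.log(p)) ** alpha))

# Characteristic inverse-S shape: small probabilities are
# overweighted, large probabilities underweighted.
```

The fixed point at p = 1/e holds for every α, which is one convenient way to check an implementation of the one-parameter Prelec form.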

  2. Probability density function evolution of power systems subject to stochastic variation of renewable energy

    Science.gov (United States)

    Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.

    2018-05-01

As renewable energies are increasingly integrated into power systems, there is growing interest in the stochastic analysis of power systems. Better techniques are needed to account for the uncertainty caused by the penetration of renewables and, consequently, to analyse its impact on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subject to such random excitation, the Joint Probability Density Function (JPDF) solution for the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. A numerical method is adopted to solve this equation, with special care taken so that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered for a single-machine infinite-bus power system. The numerical analysis agrees with the result given by Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.

  3. Invariant probabilities of transition functions

    CERN Document Server

    Zaharopol, Radu

    2014-01-01

    The structure of the set of all the invariant probabilities and the structure of various types of individual invariant probabilities of a transition function are two topics of significant interest in the theory of transition functions, and are studied in this book. The results obtained are useful in ergodic theory and the theory of dynamical systems, which, in turn, can be applied in various other areas (like number theory). They are illustrated using transition functions defined by flows, semiflows, and one-parameter convolution semigroups of probability measures. In this book, all results on transition probabilities that have been published by the author between 2004 and 2008 are extended to transition functions. The proofs of the results obtained are new. For transition functions that satisfy very general conditions the book describes an ergodic decomposition that provides relevant information on the structure of the corresponding set of invariant probabilities. Ergodic decomposition means a splitting of t...

  4. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2010-01-01

This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models.

  5. COVAL, Compound Probability Distribution for Function of Probability Distribution

    International Nuclear Information System (INIS)

    Astolfi, M.; Elbaz, J.

    1979-01-01

    1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
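COVAL's task, computing the distribution of a function of random variables from the distributions of the variables themselves, can be sketched by Monte Carlo sampling. This illustrates only the problem statement, not COVAL's numerical-transformation method; all names and the load/resistance example are hypothetical:

```python
import random

def propagate(func, samplers, n=100_000, seed=1):
    """Sample the distribution of func(X1, ..., Xk), given one
    sampler per input variable (each taking a random.Random)."""
    rng = random.Random(seed)
    return [func(*(draw(rng) for draw in samplers)) for _ in range(n)]

# Hypothetical reliability example: stress ratio S / R with
# load S ~ Normal(10, 1) and resistance factor R ~ Uniform(1.8, 2.2).
ratios = propagate(lambda s, r: s / r,
                   [lambda g: g.gauss(10.0, 1.0),
                    lambda g: g.uniform(1.8, 2.2)])
```

Any quantile or failure probability can then be read off the sampled `ratios`, in the spirit of the reliability application the record mentions.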

  6. Choice Probability Generating Functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications.

  7. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2013-01-01

This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...

  8. Generalized Probability Functions

    Directory of Open Access Journals (Sweden)

    Alexandre Souto Martinez

    2009-01-01

From the integration of non-symmetrical hyperbolae, a one-parameter generalization of the logarithmic function is obtained. Inverting this function yields the generalized exponential function. Motivated by mathematical curiosity, we show that these generalized functions are suitable for generalizing some probability density functions (pdfs). A very reliable rank distribution can be conveniently described by the generalized exponential function. Finally, we turn our attention to the generalization of one- and two-tail stretched exponential functions. We obtain, as particular cases, the generalized error function, the Zipf-Mandelbrot pdf, and the generalized Gaussian and Laplace pdfs. Their cumulative functions and moments are also obtained analytically.
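The one-parameter generalized logarithm and its inverse can be sketched as follows (written here with a parameter `lam`; the paper's own notation may differ):

```python
import math

def gen_log(x: float, lam: float) -> float:
    """One-parameter generalized logarithm (x**lam - 1) / lam;
    the limit lam -> 0 recovers the ordinary natural logarithm."""
    if lam == 0.0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def gen_exp(y: float, lam: float) -> float:
    """Inverse of gen_log: (1 + lam*y)**(1/lam);
    lam -> 0 gives the ordinary exponential."""
    if lam == 0.0:
        return math.exp(y)
    return (1.0 + lam * y) ** (1.0 / lam)
```

The round trip gen_exp(gen_log(x, λ), λ) = x, and the recovery of ln and exp as λ → 0, are the defining checks for this pair.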

  9. Energy-level scheme and transition probabilities of Si-like ions

    International Nuclear Information System (INIS)

    Huang, K.N.

    1984-01-01

Theoretical energy levels and transition probabilities are presented for the 27 low-lying levels of silicon-like ions from Z = 15 to Z = 106. The multiconfiguration Dirac-Fock technique is used to calculate the energy levels and wave functions. The Breit interaction and Lamb-shift contributions are calculated perturbatively as corrections to the Dirac-Fock energy. The M1 and E2 transitions between the first nine levels, and the E1 transitions between the excited levels and the ground level, are presented.

  10. Computation of the Complex Probability Function

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledwith, Patrick John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-22

The complex probability function is important in many areas of physics, and many techniques have been developed to compute it quickly and efficiently for a given argument z. Most prominent are methods based on Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and the corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements of Gauss-Hermite quadrature for the complex probability function.
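The Gauss-Hermite approach can be sketched directly: the complex probability (Faddeeva) function w(z) = (i/π) ∫ e^(-t²)/(z − t) dt (for Im z > 0) is approximated by the quadrature sum (i/π) Σ_k w_k/(z − t_k). The bisection root-finding below is a deliberately simple stand-in for the polished node generators used in practice:

```python
import math

def hermite(n, x):
    # physicists' Hermite polynomial H_n(x) via the three-term recurrence
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def gauss_hermite(n, lo=-10.0, hi=10.0, grid=20001):
    """Nodes (roots of H_n) located by sign change + bisection; weights
    w_k = 2**(n-1) * n! * sqrt(pi) / (n**2 * H_{n-1}(x_k)**2)."""
    xs = [lo + (hi - lo) * i / (grid - 1) for i in range(grid)]
    vals = [hermite(n, x) for x in xs]
    roots = []
    for i in range(grid - 1):
        if vals[i] * vals[i + 1] < 0.0:
            a, b = xs[i], xs[i + 1]
            for _ in range(60):
                m = 0.5 * (a + b)
                if hermite(n, a) * hermite(n, m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    c = 2.0 ** (n - 1) * math.factorial(n) * math.sqrt(math.pi) / n ** 2
    weights = [c / hermite(n - 1, x) ** 2 for x in roots]
    return roots, weights

def faddeeva_gh(z, n=24):
    """Gauss-Hermite approximation of w(z), valid for Im z > 0."""
    xs, ws = gauss_hermite(n)
    return (1j / math.pi) * sum(w / (z - x) for x, w in zip(xs, ws))
```

On the real axis the pole t = z sits on the integration path and this simple scheme degrades, which is the kind of shortcoming the report discusses.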

  11. Examining barrier distributions and, in extension, energy derivative of probabilities for surrogate experiments

    International Nuclear Information System (INIS)

    Romain, P.; Duarte, H.; Morillon, B.

    2012-01-01

The energy derivatives of probabilities are functions suited to a better understanding of certain mechanisms. Applied to compound nuclear reactions, they can provide information on fusion barrier distributions, as originally introduced, and also, as presented here, on fission barrier distributions and heights. By extension, they give access to the compound-nucleus spin-parity states preferentially populated through a given entrance channel at a given energy. (authors)

  12. Survival probability in small angle scattering of low energy alkali ions from alkali covered metal surfaces

    International Nuclear Information System (INIS)

    Neskovic, N.; Ciric, D.; Perovic, B.

    1982-01-01

The survival probability in small-angle scattering of low-energy alkali ions from alkali-covered metal surfaces is considered. The model is based on the momentum approximation. The projectiles are K⁺ ions and the target is the (001)Ni+K surface. The incident energy is 100 eV and the incident angle 5°. The interaction potential between the projectile and the target consists of the Born-Mayer, dipole and image-charge potentials. The transition probability function corresponds to the resonant electron transition to the 4s projectile energy level. (orig.)

  13. A joint probability density function of wind speed and direction for wind energy analysis

    International Nuclear Information System (INIS)

    Carta, Jose A.; Ramirez, Penelope; Bueno, Celia

    2008-01-01

A very flexible joint probability density function of wind speed and direction is presented in this paper for use in wind energy analysis. A method that enables angular-linear distributions to be obtained with specified marginal distributions has been used for this purpose. For the marginal distribution of wind speed we use a Normal-Weibull mixture distribution, singly truncated from below. The marginal distribution of wind direction comprises a finite mixture of von Mises distributions. The proposed model is applied to hourly wind direction and wind speed data recorded at several weather stations located in the Canary Islands (Spain). The suitability of the distributions is judged from the coefficient of determination R². The conclusions reached are that the joint distribution proposed in this paper: (a) can represent unimodal, bimodal and bitangential wind speed frequency distributions; (b) takes into account the frequency of null winds; (c) represents the wind direction regimes in zones with several modes or prevailing wind directions; and (d) takes into account the correlation between wind speed and direction. It can therefore be used in several tasks involved in evaluating the wind resources available at a potential site. We also conclude that, in the case of the Canary Islands, the proposed model provides better fits in all the cases analysed than the models used in the specialised wind energy literature.

  14. Path probability of stochastic motion: A functional approach

    Science.gov (United States)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

The path probability of a particle undergoing stochastic motion is studied by use of a functional technique, and a general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock prices in finance.
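The tube probability studied here can be illustrated numerically. The sketch below estimates, by direct sampling rather than by the paper's functional formula, the probability that a standard Brownian path on [0, 1] stays inside a band of given half-width around a reference path; all parameter values are arbitrary:

```python
import math
import random

def tube_probability(reference, half_width, n_steps=50, n_paths=2000, seed=7):
    """Monte Carlo estimate of P(path stays inside the tube), where the
    tube center at time t is reference(t) and the path is a standard
    Brownian motion sampled at n_steps points on [0, 1]."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    inside = 0
    for _ in range(n_paths):
        w, stayed = 0.0, True
        for k in range(1, n_steps + 1):
            w += rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
            if abs(w - reference(k * dt)) > half_width:
                stayed = False
                break
        inside += stayed
    return inside / n_paths
```

A very wide band captures essentially every path, while a band much narrower than one diffusion step captures essentially none, which brackets the interesting intermediate regime the paper evaluates analytically.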

  15. A note on iterated function systems with discontinuous probabilities

    International Nuclear Information System (INIS)

    Jaroszewska, Joanna

    2013-01-01

Highlights: ► A certain iterated function system with discontinuous probabilities is discussed. ► Existence of an invariant measure via the Schauder–Tychonov theorem is established. ► Asymptotic stability of the system under examination is proved. -- Abstract: We consider an example of an iterated function system with discontinuous probabilities. We prove that it possesses an invariant probability measure. We also prove that it is asymptotically stable provided the probabilities are positive.

  16. Non-Maxwellian electron energy probability functions in the plume of a SPT-100 Hall thruster

    Science.gov (United States)

    Giono, G.; Gudmundsson, J. T.; Ivchenko, N.; Mazouffre, S.; Dannenmayer, K.; Loubère, D.; Popelier, L.; Merino, M.; Olentšenko, G.

    2018-01-01

We present measurements of the electron density, the effective electron temperature, the plasma potential, and the electron energy probability function (EEPF) in the plume of a 1.5 kW-class SPT-100 Hall thruster, derived from cylindrical Langmuir probe measurements. The measurements were taken on the plume axis at distances between 550 and 1550 mm from the thruster exit plane, and at different angles from the plume axis at 550 mm, for three operating points of the thruster characterized by different discharge voltages and mass flow rates. The bulk of the electron population can be approximated as a Maxwellian distribution, but the measured distributions were seen to decline faster at higher energy. The measured EEPFs were best modelled with a general EEPF with an exponent α between 1.2 and 1.5, and their axial and angular characteristics were studied for the different operating points of the thruster. The exponent α of the fitted distribution was seen to be almost constant as a function of the axial distance along the plume, as well as across the angles. However, the exponent α was affected by the mass flow rate, suggesting a possible relationship with the collision rate, especially close to the thruster exit. The ratio of specific heats, the γ factor, derived from the measured plasma parameters was found to be lower than the adiabatic value of 5/3 for each of the thruster settings, indicating the existence of non-trivial kinetic heat fluxes in the near-collisionless plume. These results are intended to be used as input and/or testing properties for plume expansion models in further work.

  17. A fluctuation relation for the probability of energy backscatter

    Science.gov (United States)

    Vela-Martin, Alberto; Jimenez, Javier

    2017-11-01

We simulate the large scales of an inviscid turbulent flow in a triply periodic box using a dynamic Smagorinsky model for the sub-grid stresses. The flow, which is forced to constant kinetic energy, is fully reversible and can develop a sustained inverse energy cascade. However, due to the large number of degrees of freedom, the probability of a spontaneous mean inverse energy flux is negligible. In order to quantify the probability of inverse energy cascades, we test a local fluctuation relation of the form log P(A) = -c(V, t) A, where P(A) = p(⟨Cs⟩_{V,t} = A) / p(⟨Cs⟩_{V,t} = -A), p is probability, and ⟨Cs⟩_{V,t} is the average of the least-squares dynamic model coefficient over volume V and time t. This is confirmed when Cs is averaged over sufficiently large domains and long times, and c is found to depend linearly on V and t. In the limit in which V^{1/3} is of the order of the integral scale and t is of the order of the eddy-turnover time, we recover a global fluctuation relation that predicts a negligible probability of a sustained inverse energy cascade. For smaller V and t, the local fluctuation relation provides useful predictions on the occurrence of local energy backscatter. Funded by the ERC COTURB project.
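The linear form of such a fluctuation relation can be made concrete with a Gaussian toy model: if a coarse-grained quantity is normally distributed with nonzero mean, the log-ratio of its density at +A and -A is exactly linear in A. This is only an analogy to the relation tested in the abstract, with hypothetical parameter values:

```python
import math

def log_density_ratio(a, mu, sigma):
    """For X ~ N(mu, sigma**2), log[p(X = a) / p(X = -a)] equals
    2*mu*a/sigma**2, i.e. a fluctuation relation log P(A) = -c*A
    with c = -2*mu/sigma**2."""
    def pdf(x):
        z = (x - mu) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    return math.log(pdf(a) / pdf(-a))
```

Linearity in A, with a slope proportional to the mean over the variance, is the Gaussian analogue of the dependence of c on the averaging volume and time reported in the abstract.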

  18. Pairwise contact energy statistical potentials can help to find probability of point mutations.

    Science.gov (United States)

    Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S

    2017-01-01

To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high-resolution structures. Several methods based on statistical potentials extracted from unrelated proteins have been found to give better predictions of the probability of point mutations. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool for examining the probability of point mutation. With this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triose phosphate isomerase enzyme, for which experimental results have already been reported. We also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energy between the wildtype and various point mutations reveals the probability of mutation at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better than I-Mutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials, together with molecular dynamics simulations of functionally important folds, could help predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.

  19. Proposal for Modified Damage Probability Distribution Functions

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup; Hansen, Peter Friis

    1996-01-01

Immediately following the Estonia disaster, the Nordic countries established a project entitled "Safety of Passenger/RoRo Vessels". As part of this project, the present proposal for modified damage stability probability distribution functions has been developed and submitted to the "Sub-committee on St...".

  20. Probability of K atomic shell ionization by heavy-particle impact, as a function of the scattering angle

    International Nuclear Information System (INIS)

    Oliveira, P.M.C. de.

    1976-12-01

A method for calculating the K atomic shell ionization probability by heavy-particle impact in the semi-classical approximation is presented. In this approximation, the projectile follows a classical trajectory. The potential energy due to the projectile is taken as a perturbation of the Hamiltonian of the neutral atom. We use scaled Thomas-Fermi wave functions for the atomic electrons. The method is valid for elements of intermediate atomic number and particle energies of a few MeV. Probabilities are calculated for the case of Ag (Z = 47) and protons of 1 and 2 MeV. Results are given as a function of scattering angle; they agree well with known experimental data and also improve on older calculations. (Author)

  1. Decision making generalized by a cumulative probability weighting function

    Science.gov (United States)

    dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto

    2018-01-01

Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller but more immediate reward and a larger one delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in the probability with which they are received. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are the traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments have confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk-seeking and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed into decision weights by means of a (cumulative) probability weighting function, w(p). In this article we obtain a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already well established in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays, and is supported by phenomenological models.

  2. Probability functions in the context of signed involutive meadows

    NARCIS (Netherlands)

    Bergstra, J.A.; Ponse, A.

    2016-01-01

    The Kolmogorov axioms for probability functions are placed in the context of signed meadows. A completeness theorem is stated and proven for the resulting equational theory of probability calculus. Elementary definitions of probability theory are restated in this framework.

  3. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-03-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, S_st, p_st) for stochastic uncertainty, a probability space (S_su, S_su, p_su) for subjective uncertainty, and a function (i.e., a random variable) defined on the product space associated with (S_st, S_st, p_st) and (S_su, S_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems

  4. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-01-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, P_st) for stochastic uncertainty, a probability space (S_su, L_su, P_su) for subjective uncertainty, and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, P_st) and (S_su, L_su, P_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the US Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems

  5. A method for ion distribution function evaluation using escaping neutral atom kinetic energy samples

    International Nuclear Information System (INIS)

    Goncharov, P.R.; Ozaki, T.; Veshchev, E.A.; Sudo, S.

    2008-01-01

A reliable method to evaluate the probability density function of escaping-atom kinetic energies is required for the analysis of neutral particle diagnostic data used to study the fast ion distribution function in fusion plasmas. Digital processing of solid-state detector signals is proposed in this paper as an improvement over the simple histogram approach. The probability density function for the kinetic energies of neutral particles escaping from the plasma has been derived in a general form, taking into account the plasma ion energy distribution, electron capture and loss rates, superposition along the diagnostic sight line, and the magnetic surface geometry. A pseudorandom number generator has been realized that enables a sample of escaping neutral particle energies to be simulated for given plasma parameters and experimental conditions. An empirical probability density estimation code has been developed and tested to reconstruct the probability density function from simulated samples, assuming Maxwellian and classical slowing-down plasma ion energy distribution shapes for different temperatures and different slowing-down times. The application of the developed probability density estimation code to the analysis of experimental data obtained by the novel Angular-Resolved Multi-Sightline Neutral Particle Analyzer has been studied to obtain the suprathermal particle distributions. An optimum bandwidth parameter selection algorithm has also been realized. (author)
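The empirical probability density estimation step can be sketched with a plain Gaussian kernel density estimator; Silverman's rule of thumb stands in here for the bandwidth selection algorithm mentioned in the abstract:

```python
import math

def gaussian_kde(samples, bandwidth=None):
    """Return a Gaussian kernel density estimate built from the samples.
    If bandwidth is None, apply Silverman's rule of thumb
    h = 1.06 * std * n**(-1/5)."""
    n = len(samples)
    if bandwidth is None:
        mean = sum(samples) / n
        std = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
        bandwidth = 1.06 * std * n ** (-0.2)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))

    def density(x):
        # sum of Gaussian kernels centered on the samples
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density
```

By construction the estimate is smooth and integrates to one, which is what makes it preferable to a raw histogram of detector pulse heights.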

  6. On Farmer's line, probability density functions, and overall risk

    International Nuclear Information System (INIS)

    Munera, H.A.; Yadigaroglu, G.

    1986-01-01

    Limit lines used to define quantitative probabilistic safety goals can be categorized according to whether they are based on discrete pairs of event sequences and associated probabilities, on probability density functions (pdf's), or on complementary cumulative density functions (CCDFs). In particular, the concept of the well-known Farmer's line and its subsequent reinterpretations is clarified. It is shown that Farmer's lines are pdf's and, therefore, the overall risk (defined as the expected value of the pdf) that they represent can be easily calculated. It is also shown that the area under Farmer's line is proportional to probability, while the areas under CCDFs are generally proportional to expected value

  7. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States)

    1996-03-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, S_st, p_st) for stochastic uncertainty, a probability space (S_su, S_su, p_su) for subjective uncertainty, and a function (i.e., a random variable) defined on the product space associated with (S_st, S_st, p_st) and (S_su, S_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  8. Energy density functional analysis of shape coexistence in 44S

    International Nuclear Information System (INIS)

    Li, Z. P.; Yao, J. M.; Vretenar, D.; Nikšić, T.; Meng, J.

    2012-01-01

The structure of low-energy collective states in the neutron-rich nucleus 44S is analyzed using a microscopic collective Hamiltonian model based on energy density functionals (EDFs). The calculated triaxial energy map, low-energy spectrum and corresponding probability distributions indicate a coexistence of prolate and oblate shapes in this nucleus.

  9. A generalized electron energy probability function for inductively coupled plasmas under conditions of nonlocal electron kinetics

    Science.gov (United States)

    Mouchtouris, S.; Kokkoris, G.

    2018-01-01

A generalized equation for the electron energy probability function (EEPF) of inductively coupled Ar plasmas is proposed under conditions of nonlocal electron kinetics and diffusive cooling. The proposed equation describes the local EEPF in a discharge, with the kinetic energy of the electrons as the independent variable. The EEPF consists of a bulk and a depleted tail part and incorporates the effects of the plasma potential, Vp, and of the pressure. Due to diffusive cooling, the break point of the EEPF is at eVp. The pressure alters the shape of the bulk and the slope of the tail part. The parameters of the proposed EEPF are extracted by fitting to measured EEPFs (at one point in the reactor) at different pressures. By coupling the proposed EEPF with a hybrid plasma model, measurements in the Gaseous Electronics Conference reference reactor concerning (a) the electron density and temperature and the plasma potential, either spatially resolved or at different pressures (10-50 mTorr) and powers, and (b) the ion current density at the electrode, are well reproduced. The effect of the choice of the EEPF on the results is investigated by comparison to an EEPF coming from the Boltzmann equation (local electron kinetics approach) and to a Maxwellian EEPF. The accuracy of the results, and the fact that the proposed EEPF is predefined, render its use a reliable alternative with low computational cost compared to stochastic electron kinetic models at low-pressure conditions, and the approach can be extended to other gases and/or different electron heating mechanisms.
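The role of such an exponent can be illustrated with the common one-parameter family g_p(ε) ∝ exp[-(ε/ε0)^α], where α = 1 is Maxwellian and α = 2 is Druyvesteyn; the paper's actual EEPF (bulk plus depleted tail with a break at eVp) is more elaborate than this sketch:

```python
import math

def eepf_mean_energy(alpha, eps0, eps_max=150.0, n=100_000):
    """Mean electron energy for an EEPF g_p(e) ~ exp(-(e/eps0)**alpha).
    The EEDF is sqrt(e)*g_p(e), so
    <e> = Int e**1.5 g_p de / Int e**0.5 g_p de (midpoint rule)."""
    de = eps_max / n
    num = den = 0.0
    for k in range(n):
        e = (k + 0.5) * de
        g = math.exp(-((e / eps0) ** alpha))
        num += e ** 1.5 * g * de
        den += e ** 0.5 * g * de
    return num / den
```

For α = 1 this reproduces the familiar ⟨ε⟩ = (3/2)ε0 of a Maxwellian, while α = 2 gives the smaller Druyvesteyn value ε0·Γ(5/4)/Γ(3/4), showing how a larger exponent depletes the tail.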

  10. Wigner function and the probability representation of quantum states

    Directory of Open Access Journals (Sweden)

    Man’ko Margarita A.

    2014-01-01

    The relation of the Wigner function to the fair probability distribution, called the tomographic distribution or quantum tomogram, associated with a quantum state is reviewed. The connection of the tomographic picture of quantum mechanics with the integral Radon transform of the Wigner quasidistribution is discussed. The Wigner-Moyal equation for the Wigner function is presented in the form of a kinetic equation for the tomographic probability distribution, both in quantum mechanics and in the classical limit of the Liouville equation. The calculation of moments of physical observables in terms of integrals with the state tomographic probability distributions is constructed, having the standard form of averaging in probability theory. New uncertainty relations for position and momentum are written in terms of optical tomograms suitable for direct experimental check. Some recent experiments on checking the uncertainty relations, including the entropic uncertainty relations, are discussed.

  11. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    Science.gov (United States)

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
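    The two best-fitting forms named above have simple closed forms; the sketch below implements both (Prelec-2: w(p) = exp(-δ(-ln p)^γ); linear in log odds: w(p) = δp^γ/(δp^γ + (1-p)^γ)), with illustrative parameter values.

    ```python
    import math

    def prelec2(p, gamma, delta):
        """Two-parameter Prelec weighting function: w(p) = exp(-delta*(-ln p)**gamma)."""
        return math.exp(-delta * (-math.log(p)) ** gamma)

    def lin_log_odds(p, gamma, delta):
        """Linear-in-log-odds form: w(p) = delta*p**gamma / (delta*p**gamma + (1-p)**gamma)."""
        num = delta * p ** gamma
        return num / (num + (1.0 - p) ** gamma)

    # typical inverse-S behaviour: small probabilities overweighted, large underweighted
    w_small = prelec2(0.01, gamma=0.65, delta=1.0)   # > 0.01
    w_large = prelec2(0.99, gamma=0.65, delta=1.0)   # < 0.99
    ```

    Their qualitative similarity over most of [0, 1] is exactly why adaptive design optimization is needed to find the probabilities at which the forms disagree most.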

  12. Escape probabilities for fluorescent x-rays

    International Nuclear Information System (INIS)

    Dance, D.R.; Day, G.J.

    1985-01-01

    Computation of the energy absorption efficiency of an x-ray photon detector involves consideration of the histories of the secondary particles produced in any initial or secondary interaction which may occur within the detector. In particular, the K or higher shell fluorescent x-rays which may be emitted following a photoelectric interaction can carry away a large fraction of the energy of the incident photon, especially if this energy is just above an absorption edge. The effects of such photons cannot be ignored and a correction term, depending upon the probability that the fluorescent x-rays will escape from the detector, must be applied to the energy absorption efficiency. For detectors such as x-ray intensifying screens, it has been usual to calculate this probability by numerical integration. In this note analytic expressions are derived for the escape probability of fluorescent photons from planar detectors in terms of exponential integral functions. Rational approximations for these functions are readily available and these analytic expressions therefore facilitate the computation of photon absorption efficiencies. A table is presented which should obviate the need for calculating the escape probability for most cases of interest. (author)
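    For orientation, a standard slab result of this kind (not necessarily the authors' exact analytic expressions) gives the escape probability of a photon emitted isotropically at a given depth as an average of exponential integral terms for the two faces; here E2 is evaluated by simple numerical quadrature.

    ```python
    import math

    def _E2(t, n=2000):
        """Exponential integral E2(t) = integral of exp(-t/u) du over u in (0, 1],
        evaluated with the midpoint rule."""
        if t == 0.0:
            return 1.0
        h = 1.0 / n
        return h * sum(math.exp(-t / ((i + 0.5) * h)) for i in range(n))

    def escape_probability(depth, thickness, mu):
        """Probability that a fluorescent photon emitted isotropically at `depth`
        inside a planar detector of `thickness` escapes through either face; mu is
        the linear attenuation coefficient for the fluorescent photon.  Textbook
        slab form: P = 0.5*(E2(mu*depth) + E2(mu*(thickness - depth)))."""
        return 0.5 * (_E2(mu * depth) + _E2(mu * (thickness - depth)))
    ```

    A photon created at the surface (depth 0) escapes with probability at least 1/2, while one created mid-slab in an optically thick detector rarely escapes.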

  13. Research on Energy-Saving Design of Overhead Travelling Crane Camber Based on Probability Load Distribution

    Directory of Open Access Journals (Sweden)

    Tong Yifei

    2014-01-01

    Crane is a mechanical device used widely to move materials in modern production. It is reported that the energy consumption of China is at least 5–8 times that of other developing countries; thus, energy consumption becomes an unavoidable topic. Several factors influence the energy loss, and the camber of the girder is one not to be neglected. In this paper, the problem of the deflections induced by the moving payload in the girder of an overhead travelling crane is examined. The evaluation of a camber giving a counterdeflection of the girder is proposed in order to obtain minimum energy consumption for the trolley moving along a nonstraight support. To this aim, probabilistic payload distributions are considered instead of the fixed or rated loads used in other studies. Taking a 50/10 t bridge crane as the research object, the probability loads are determined by analysis of load distribution density functions. According to the load distribution, camber design under different probability loads is discussed in detail, as well as the distribution of energy consumption. The research results provide a design reference for a reasonable camber that obtains the least energy consumption for climbing corresponding to different P0; thus an energy-saving design can be achieved.

  14. Transition probabilities and dissociation energies of MnH and MnD molecules

    International Nuclear Information System (INIS)

    Nagarajan, K.; Rajamanickam, N.

    1997-01-01

    The Franck-Condon factors (vibrational transition probabilities) and r-centroids have been evaluated by the more reliable numerical integration procedure for the bands of the A-X system of MnH and MnD molecules, using a suitable potential. By fitting the Hulburt-Hirschfelder function to the experimental potential curve using the correlation coefficient, the dissociation energies for the electronic ground states of MnH and MnD have been estimated as D₀⁰ = 251±5 kJ·mol⁻¹ and D₀⁰ = 312±6 kJ·mol⁻¹, respectively. (authors)

  15. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    Science.gov (United States)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking zeta sub b(t) is first related to the original wave elevation zeta(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for zeta(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of zeta sub b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  16. Assembly for the measurement of the most probable energy of directed electron radiation

    International Nuclear Information System (INIS)

    Geske, G.

    1987-01-01

    This invention relates to a setup for the measurement of the most probable energy of directed electron radiation up to 50 MeV. The known energy-range relationship with regard to the absorption of electron radiation in matter is utilized by an absorber with two groups of interconnected radiation detectors embedded in it. The most probable electron beam energy is derived from the quotient of both groups' signals

  17. Systematics of the breakup probability function for {sup 6}Li and {sup 7}Li projectiles

    Energy Technology Data Exchange (ETDEWEB)

    Capurro, O.A., E-mail: capurro@tandar.cnea.gov.ar [Laboratorio TANDAR, Comisión Nacional de Energía Atómica, Av. General Paz 1499, B1650KNA San Martín, Buenos Aires (Argentina); Pacheco, A.J.; Arazi, A. [Laboratorio TANDAR, Comisión Nacional de Energía Atómica, Av. General Paz 1499, B1650KNA San Martín, Buenos Aires (Argentina); CONICET, Av. Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Carnelli, P.F.F. [CONICET, Av. Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Instituto de Investigación e Ingeniería Ambiental, Universidad Nacional de San Martín, 25 de Mayo y Francia, B1650BWA San Martín, Buenos Aires (Argentina); Fernández Niello, J.O. [Laboratorio TANDAR, Comisión Nacional de Energía Atómica, Av. General Paz 1499, B1650KNA San Martín, Buenos Aires (Argentina); CONICET, Av. Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Instituto de Investigación e Ingeniería Ambiental, Universidad Nacional de San Martín, 25 de Mayo y Francia, B1650BWA San Martín, Buenos Aires (Argentina); and others

    2016-01-15

    Experimental non-capture breakup cross sections can be used to determine the probability of projectile and ejectile fragmentation in nuclear reactions involving weakly bound nuclei. Recently, the probability of both types of dissociation has been analyzed in nuclear reactions of {sup 9}Be projectiles on various heavy targets at sub-barrier energies. In the present work we extend this kind of systematic analysis to {sup 6}Li and {sup 7}Li projectiles, with the purpose of investigating general features of projectile-like breakup probabilities in reactions induced by stable weakly bound nuclei. To that end we have obtained the probabilities of projectile and ejectile breakup for a large number of systems, starting from a compilation of the corresponding reported non-capture breakup cross sections. We parametrize the results in accordance with the previous studies for beryllium projectiles, and we discuss their systematic behavior as a function of the projectile, the target mass and the reaction Q-value.

  18. The force distribution probability function for simple fluids by density functional theory.

    Science.gov (United States)

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and the probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere fluids at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low) for the two potentials, but with a smaller value for the constant A than that predicted by the DFT theory.
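    A quick Monte Carlo check of the Gaussian force law: if P(F) ∝ exp(-AF²), each Cartesian force component is normal with variance 1/(2A), and the mean squared force magnitude is 3/(2A). The sampling below is a self-contained sketch, not the paper's DFT calculation.

    ```python
    import math, random

    def sample_force_magnitudes(A, n, seed=1):
        """Draw n force vectors whose law is P(F) ∝ exp(-A*F^2), i.e. each
        Cartesian component is N(0, 1/(2A)), and return the magnitudes |F|."""
        rng = random.Random(seed)
        sigma = math.sqrt(1.0 / (2.0 * A))
        return [math.sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)))
                for _ in range(n)]

    # the magnitude then follows W(F) = 4*pi*F^2*(A/pi)**1.5*exp(-A*F^2),
    # with mean squared magnitude <F^2> = 3/(2A)
    mags = sample_force_magnitudes(A=2.0, n=20000)
    mean_F2 = sum(F * F for F in mags) / len(mags)
    ```

    A histogram of `mags` against the Maxwell-like W(F) above is the kind of comparison the paper makes against molecular dynamics data.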

  19. Probability-density-function characterization of multipartite entanglement

    International Nuclear Information System (INIS)

    Facchi, P.; Florio, G.; Pascazio, S.

    2006-01-01

    We propose a method to characterize and quantify multipartite entanglement for pure states. The method hinges upon the study of the probability density function of bipartite entanglement and is tested on an ensemble of qubits in a variety of situations. This characterization is also compared to several measures of multipartite entanglement
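    The idea can be sketched numerically for qubits: draw one Haar-random pure state and compute a bipartite entanglement indicator over all balanced bipartitions, whose histogram approximates the probability density being studied. Purity Tr(ρ_A²) is used here as the bipartite measure, an illustrative choice.

    ```python
    import numpy as np
    from itertools import combinations

    def purity(state, subset, n):
        """Tr(rho_A^2) for the qubits listed in `subset` of an n-qubit pure state."""
        psi = state.reshape([2] * n)
        rest = [q for q in range(n) if q not in subset]
        psi = np.transpose(psi, list(subset) + rest).reshape(2 ** len(subset), -1)
        rho = psi @ psi.conj().T
        return float(np.real(np.trace(rho @ rho)))

    def purity_samples(n, seed=0):
        """Purity of every balanced bipartition of one Haar-random n-qubit state;
        a histogram of these values approximates the studied probability density."""
        rng = np.random.default_rng(seed)
        v = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
        v /= np.linalg.norm(v)
        return [purity(v, s, n) for s in combinations(range(n), n // 2)]

    samples = purity_samples(8)   # 70 balanced bipartitions of 4 vs 4 qubits
    ```

    A narrow distribution concentrated at low purity signals entanglement shared uniformly across bipartitions; a wide one signals uneven multipartite structure.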

  20. Optimizing an objective function under a bivariate probability model

    NARCIS (Netherlands)

    X. Brusset; N.M. Temme (Nico)

    2007-01-01

    The motivation of this paper is to obtain an analytical closed form of a quadratic objective function arising from a stochastic decision process with bivariate exponential probability distribution functions that may be dependent. This method is applicable when results need to be

  1. Structure functions are not parton probabilities

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.; Hoyer, Paul; Sannino, Francesco; Marchal, Nils; Peigne, Stephane

    2002-01-01

    The common view that structure functions measured in deep inelastic lepton scattering are determined by the probability of finding quarks and gluons in the target is not correct in gauge theory. We show that gluon exchange between the fast, outgoing partons and target spectators, which is usually assumed to be an irrelevant gauge artifact, affects the leading twist structure functions in a profound way. This observation removes the apparent contradiction between the projectile (eikonal) and target (parton model) views of diffractive and small x_B phenomena. The diffractive scattering of the fast outgoing quarks on spectators in the target causes shadowing in the DIS cross section. Thus the depletion of the nuclear structure functions is not intrinsic to the wave function of the nucleus, but is a coherent effect arising from the destructive interference of diffractive channels induced by final state interactions. This is consistent with the Glauber-Gribov interpretation of shadowing as a rescattering effect

  2. Prospect evaluation as a function of numeracy and probability denominator.

    Science.gov (United States)

    Millroth, Philip; Juslin, Peter

    2015-05-01

    This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Visualization techniques for spatial probability density function data

    Directory of Open Access Journals (Sweden)

    Udeepta D Bordoloi

    2006-01-01

    Novel visualization methods are presented for spatial probability density function data. These are spatial datasets in which each pixel is a random variable that has multiple samples, the results of experiments on that random variable. We use clustering as a means to reduce the information contained in these datasets, and present two different ways of interpreting and clustering the data. The clustering methods are applied to two datasets, and the results are discussed with the help of visualization techniques designed for spatial probability data.

  4. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects, and error detection codes allow such errors to be detected. There are two classes of error detecting codes: classical codes and security-oriented codes. Classical codes detect a high percentage of errors, but they have a high probability of missing an error introduced by algebraic manipulation. Security-oriented codes, in turn, are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes: a detailed study of this parameter allows analyzing the behavior of the code when errors are injected into the encoding device. The complexity of the encoding function also plays an important role in security-oriented codes: encoding functions with low computational complexity and a low probability of masking offer the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution; in particular, it decreases the difference between the maximum and average values of the error masking probability. Our results have shown that functions with greater complexity have smoothed maximums of the error masking probability, which significantly complicates the analysis of the code by an attacker. As a result, with a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach how to measure the error masking
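    The error masking probability can be studied by brute force for toy codes: an error pattern e is masked for message x when enc(x) XOR e is again a valid codeword. The tiny nonlinear encoding below is hypothetical, chosen only to produce a nontrivial masking distribution.

    ```python
    def masking_profile(encode, k, n):
        """Brute-force error-masking probability Q(e) for every nonzero n-bit error
        pattern e: the fraction of k-bit messages x for which enc(x) XOR e is again
        a valid codeword, so the error goes undetected."""
        codewords = {encode(x) for x in range(2 ** k)}
        Q = {}
        for e in range(1, 2 ** n):
            masked = sum((encode(x) ^ e) in codewords for x in range(2 ** k))
            Q[e] = masked / 2 ** k
        return Q

    def enc(x):
        """Hypothetical toy encoding: 3 data bits plus one nonlinear check bit."""
        b = [(x >> i) & 1 for i in range(3)]
        r = (b[0] & b[1]) ^ b[2]
        return (x << 1) | r

    Q = masking_profile(enc, k=3, n=4)
    max_Q = max(Q.values())
    ```

    For a linear code every Q(e) is 0 or 1; the nonlinear check bit produces intermediate masking probabilities, and it is the shape of this distribution, in particular its maximum, that the paper relates to encoding-function complexity.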

  5. Low-lying electronic states of the OH radical: potential energy curves, dipole moment functions, and transition probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Qin, X.; Zhang, S. D. [Qufu Normal University, Qufu (China)

    2014-12-15

    The six doublet and two quartet electronic states ({sup 2}Σ{sup +}(2), {sup 2}Σ{sup -}, {sup 2}Π(2), {sup 2}Δ, {sup 4}Σ{sup -}, and {sup 4}Π) of the OH radical have been studied using the multi-reference configuration interaction (MRCI) method, where the Davidson correction, core-valence interaction and relativistic effects are considered with the large basis sets aug-cc-pv5z, aug-cc-pcv5z, and cc-pv5z-DK, respectively. Potential energy curves (PECs) and dipole moment functions are also calculated for these states for internuclear distances ranging from 0.05 nm to 0.80 nm. All possible vibrational levels and rotational constants for the bound states X{sup 2}Π and A{sup 2}Σ{sup +} of OH are predicted by numerically solving the radial Schroedinger equation with the Level program, and spectroscopic parameters, which are in good agreement with experimental results, are obtained. Transition dipole moments between the ground state X{sup 2}Π and the other excited states are also computed using MRCI, and the transition probabilities, lifetimes, and Franck-Condon factors for the A{sup 2}Σ{sup +} - X{sup 2}Π transition are discussed and compared with existing experimental values.

  6. Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.

    Science.gov (United States)

    Fennell, John; Baddeley, Roland

    2012-10-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects; overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  7. Entanglement probabilities of polymers: a white noise functional approach

    International Nuclear Information System (INIS)

    Bernido, Christopher C; Carpio-Bernido, M Victoria

    2003-01-01

    The entanglement probabilities for a highly flexible polymer to wind n times around a straight polymer are evaluated using white noise analysis. To introduce the white noise functional approach, the one-dimensional random walk problem is taken as an example. The polymer entanglement scenario, viewed as a random walk on a plane, is then treated and the entanglement probabilities are obtained for a magnetic flux confined along the straight polymer, and a case where an entangled polymer is subjected to the potential V = f-dot(s)θ. In the absence of the magnetic flux and the potential V, the entanglement probabilities reduce to a result obtained by Wiegel

  8. INTERACTIVE VISUALIZATION OF PROBABILITY AND CUMULATIVE DENSITY FUNCTIONS

    KAUST Repository

    Potter, Kristin; Kirby, Robert Michael; Xiu, Dongbin; Johnson, Chris R.

    2012-01-01

    The probability density function (PDF), and its corresponding cumulative density function (CDF), provide direct statistical insight into the characterization of a random process or field. Typically displayed as a histogram, one can infer probabilities of the occurrence of particular events. When examining a field over some two-dimensional domain in which at each point a PDF of the function values is available, it is challenging to assess the global (stochastic) features present within the field. In this paper, we present a visualization system that allows the user to examine two-dimensional data sets in which PDF (or CDF) information is available at any position within the domain. The tool provides a contour display showing the normed difference between the PDFs and an ansatz PDF selected by the user and, furthermore, allows the user to interactively examine the PDF at any particular position. Canonical examples of the tool are provided to help guide the reader into the mapping of stochastic information to visual cues along with a description of the use of the tool for examining data generated from an uncertainty quantification exercise accomplished within the field of electrophysiology.

  9. Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function

    Science.gov (United States)

    Fennell, John; Baddeley, Roland

    2012-01-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…

  10. Modulation Based on Probability Density Functions

    Science.gov (United States)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
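    The PDF of a pure sinusoidal carrier sampled uniformly in time is the arcsine density p(x) = 1/(π√(1-x²)), which a histogram over one cycle reproduces; the sketch below builds such a histogram and illustrates only the sampling idea, not the proposed modulation scheme itself.

    ```python
    import math

    def sine_sample_histogram(n_samples=100000, n_bins=20):
        """Histogram (normalized to probabilities) of uniform-in-time samples of one
        cycle of a unit sinusoid; it approximates the arcsine density
        p(x) = 1/(pi*sqrt(1 - x^2)) that characterizes an unmodulated carrier."""
        counts = [0] * n_bins
        for i in range(n_samples):
            x = math.sin(2.0 * math.pi * i / n_samples)
            b = min(int((x + 1.0) / 2.0 * n_bins), n_bins - 1)
            counts[b] += 1
        return [c / n_samples for c in counts]

    pdf = sine_sample_histogram()
    ```

    The U shape of the histogram (mass concentrated near ±1) is the baseline signature that PDF-based modulation would perturb to convey information.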

  11. 77 FR 5711 - Guidelines for Determining Probability of Causation Under the Energy Employees Occupational...

    Science.gov (United States)

    2012-02-06

    ... Guidelines for Determining Probability of Causation Under the Energy Employees Occupational Illness... provide a technical review of a proposed amendment to the probability of causation guidelines.\\2\\ All of..., and hence had required DOL to assign a probability of causation value of ``zero.'' There were two...

  12. Numerical Loading of a Maxwellian Probability Distribution Function

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    A renormalization procedure for the numerical loading of a Maxwellian probability distribution function (PDF) is formulated. The procedure, which involves the solution of three coupled nonlinear equations, yields a numerically loaded PDF with improved properties for higher velocity moments. This method is particularly useful for low-noise particle-in-cell simulations with electron dynamics
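    A minimal sketch of the loading idea: draw velocities from a Gaussian and then renormalize the sample so that selected moments are satisfied exactly. Only the first two moments are enforced here, a simplification of the paper's three coupled nonlinear equations for higher velocity moments.

    ```python
    import math, random

    def load_maxwellian(n, vth, seed=7):
        """Load n velocities from a Maxwellian of thermal speed vth, then shift and
        rescale the sample so that <v> = 0 and <v^2> = vth^2 hold exactly (a
        two-constraint simplification of the renormalization procedure)."""
        rng = random.Random(seed)
        v = [rng.gauss(0.0, vth) for _ in range(n)]
        mean = sum(v) / n
        v = [x - mean for x in v]                       # enforce <v> = 0
        scale = vth / math.sqrt(sum(x * x for x in v) / n)
        return [x * scale for x in v]                   # enforce <v^2> = vth^2

    v = load_maxwellian(10000, vth=1.0)
    ```

    Removing the sampling noise from the low-order moments in this way is what suppresses start-up noise in particle-in-cell runs.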

  13. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the magnitude of x (the largest-argument region being x .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
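    A single rational approximation of this kind, combined with the identities mentioned, can be sketched as follows, using the well-known Abramowitz & Stegun 7.1.26 form (absolute error below 1.5e-7); the routine's actual three-region approximations are not reproduced in the record.

    ```python
    import math

    def erf_approx(x):
        """erf via the Abramowitz & Stegun 7.1.26 rational approximation
        (|error| < 1.5e-7), using erf(-x) = -erf(x) for negative arguments."""
        sign = 1.0 if x >= 0.0 else -1.0
        x = abs(x)
        t = 1.0 / (1.0 + 0.3275911 * x)
        poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
               + t * (-1.453152027 + t * 1.061405429))))
        return sign * (1.0 - poly * math.exp(-x * x))

    def erfc_approx(x):
        """erfc via erfc(x) = 1 - erf(x); subject to the cancellation caveat
        noted in the record for large x."""
        return 1.0 - erf_approx(x)
    ```

    The cancellation warning is visible here: for large x, erf_approx(x) is so close to 1 that the subtraction in erfc_approx loses most significant digits, which is why production routines evaluate erfc directly in that region.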

  14. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew...... characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look-out etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds...... probability, i.e. a study of the navigator's role in resolving critical situations; a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of the energy released for crushing of structures giving

  15. Blue functions: probability and current density propagators in non-relativistic quantum mechanics

    International Nuclear Information System (INIS)

    Withers, L P Jr

    2011-01-01

    Like a Green function to propagate a particle's wavefunction in time, a Blue function is introduced to propagate the particle's probability and current density. Accordingly, the complete Blue function has four components. They are constructed from path integrals involving a quantity like the action that we call the motion. The Blue function acts on the displaced probability density as the kernel of an integral operator. As a result, we find that the Wigner density occurs as an expression for physical propagation. We also show that, in quantum mechanics, the displaced current density is conserved bilocally (in two places at one time), as expressed by a generalized continuity equation. (paper)

  16. Most probable degree distribution at fixed structural entropy

    Indian Academy of Sciences (India)

    Here we derive the most probable degree distribution emerging ... the structural entropy of power-law networks is an increasing function of the exponent ... partition function Z of the network as the sum over all degree distributions, with given energy.

  17. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm, subset simulation based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered, and the probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computing efficiency and excellent computing accuracy compared with traditional probability analysis methods. (authors)
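    The idea of expressing a small failure probability as a product of larger conditional ones can be sketched on a one-dimensional toy problem (estimating P[X ≥ 3.5] for standard normal X, exact value about 2.3e-4); the level probability p0 = 0.1 and the Metropolis proposal below are illustrative choices, not the paper's settings.

    ```python
    import math, random

    def subset_simulation(h, b, n=1000, p0=0.1, seed=3):
        """Subset simulation for P[h(X) >= b] with X ~ N(0,1): intermediate
        thresholds are set at the p0-quantile of each level, and new conditional
        samples are generated with a Metropolis chain started from the survivors."""
        rng = random.Random(seed)
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        prob = 1.0
        for _ in range(20):                              # cap on the number of levels
            hs = sorted((h(x), x) for x in xs)
            n_keep = int(n * p0)
            n_fail = sum(1 for hv, _ in hs if hv >= b)
            if n_fail >= n_keep:                         # failure region reached
                return prob * n_fail / n
            c = hs[-n_keep][0]                           # intermediate threshold
            prob *= p0
            xs = []
            for _, s in hs[-n_keep:]:                    # MCMC conditional on h >= c
                x = s
                for _ in range(n // n_keep):
                    cand = x + rng.uniform(-1.0, 1.0)
                    accept = rng.random() < math.exp(0.5 * (x * x - cand * cand))
                    if accept and h(cand) >= c:
                        x = cand
                    xs.append(x)
        return prob * sum(1 for x in xs if h(x) >= b) / n

    # toy limit state: failure when X >= 3.5 (exact probability ~2.3e-4)
    p_est = subset_simulation(lambda x: x, b=3.5)
    ```

    With N = 1000 samples per level, the product of roughly 0.1-sized conditional probabilities reaches the 1e-4 range that plain Monte Carlo would need millions of samples to resolve.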

  18. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as of Particulate Matter with an aerodynamic diameter < 10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is estimated. The standard EAQLV limits for TSP and PM10 concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions combined with seven plotting-position formulas (empirical cumulative distribution functions) are compared in fitting the averages of daily TSP and PM10 concentrations in the year 2014 for Ain Sokhna city. The Quantile-Quantile plot (Q-Q plot) is used to assess how closely a data set fits a particular distribution. A proper probability distribution representing TSP and PM10 is chosen based on the statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM10 concentrations, with the Burr distribution under the same plotting position coming second. The exceedance probability and the number of days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM10 concentration is found to exceed the threshold limit on 174 days.
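    Once the Frechet parameters are fitted, the exceedance probability and the expected number of exceedance days follow directly from the survival function; the parameter values below are hypothetical placeholders, not the fitted values of the study.

    ```python
    import math

    def frechet_exceedance(threshold, shape, loc, scale):
        """P(X > threshold) for a Frechet distribution with CDF
        F(x) = exp(-((x - loc)/scale)**(-shape)) for x > loc."""
        if threshold <= loc:
            return 1.0
        return 1.0 - math.exp(-((threshold - loc) / scale) ** (-shape))

    # hypothetical fitted parameters for TSP, against the 230 ug/m3 EAQLV threshold
    p_exceed = frechet_exceedance(230.0, shape=2.5, loc=0.0, scale=90.0)
    expected_days = 365 * p_exceed
    ```

    The study's reported 0.052 exceedance probability and 19 exceedance days for TSP correspond to exactly this survival-function calculation with the actual fitted parameters.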

  19. Impact of proof test interval and coverage on probability of failure of safety instrumented function

    International Nuclear Information System (INIS)

    Jin, Jianghong; Pang, Lei; Hu, Bin; Wang, Xiaodong

    2016-01-01

    Highlights: • Introducing proof test coverage makes the calculation of the probability of failure for a SIF more accurate. • The probability of failure undetected by the proof test is independently defined as PTIF and calculated. • PTIF is quantified using a reliability block diagram and the simple formula for PFDavg. • Improving proof test coverage and adopting a reasonable test period can reduce the probability of failure for a SIF. - Abstract: Imperfection of the proof test can result in safety function failure of a safety instrumented system (SIS) at any time in its life period. IEC 61508 and other references have ignored or only superficially analyzed this imperfection. To further study the impact of proof test imperfection on the probability of failure of a safety instrumented function (SIF), the necessity of the proof test and the influence of its imperfection on system performance are first analyzed theoretically. The probability of failure of a SIF resulting from proof test imperfection is defined as the probability of test-independent failures (PTIF), and PTIF is calculated separately by introducing proof test coverage and adopting a reliability block diagram, with reference to the simplified calculation formula for the average probability of failure on demand (PFDavg). The results show that a shorter proof test period and higher proof test coverage yield a smaller probability of failure for the SIF, and that calculations which account for proof test coverage are more accurate.
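The role of proof test coverage can be illustrated with the common simplified PFDavg formula for a single-channel (1oo1) function. This is an assumption here, not necessarily the authors' exact model: the covered fraction of dangerous undetected failures accumulates over one test interval, while the remainder (the abstract's PTIF) accumulates over the whole lifetime.

```python
def pfd_avg(lambda_du, coverage, test_interval, lifetime):
    """Simplified average probability of failure on demand for a 1oo1
    safety function with imperfect proof testing: the fraction 'coverage'
    of dangerous undetected failures is revealed by the periodic test and
    accumulates over one test interval, while the test-independent part
    (P_TIF) accumulates over the whole mission lifetime."""
    detected = coverage * lambda_du * test_interval / 2.0
    p_tif = (1.0 - coverage) * lambda_du * lifetime / 2.0
    return detected + p_tif
```

With λDU = 2e-6/h, 90% coverage, a one-year test interval and a ten-year lifetime, the test-independent term is already as large as the detected term, which is the abstract's point: coverage matters as much as the test interval.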

  20. Quantum probability measures and tomographic probability densities

    NARCIS (Netherlands)

    Amosov, GG; Man'ko, [No Value

    2004-01-01

    Using a simple relation of the Dirac delta-function to the generalized theta-function, the relationship between the tomographic probability approach and the quantum probability measure approach to the description of quantum states is discussed. The quantum state tomogram expressed in terms of the

  1. Diagnostic probability function for acute coronary heart disease garnered from experts' tacit knowledge.

    Science.gov (United States)

    Steurer, Johann; Held, Ulrike; Miettinen, Olli S

    2013-11-01

    Knowing about a diagnostic probability requires general knowledge about the way in which the probability depends on the diagnostic indicators involved in the specification of the case at issue. Diagnostic probability functions (DPFs) are generally unavailable at present. Our objective was to illustrate how diagnostic experts' case-specific tacit knowledge about diagnostic probabilities could be garnered in the form of DPFs. Focusing on diagnosis of acute coronary heart disease (ACHD), we presented doctors with extensive experience in hospitals' emergency departments a set of hypothetical cases specified in terms of an inclusive set of diagnostic indicators. We translated the medians of these experts' case-specific probabilities into a logistic DPF for ACHD. The principal result was the experts' typical diagnostic probability for ACHD as a joint function of the set of diagnostic indicators. A related result of note was the finding that the experts' probabilities in any given case had a surprising degree of variability. Garnering diagnostic experts' case-specific tacit knowledge about diagnostic probabilities in the form of DPFs is feasible to accomplish. Thus, once the methodology of this type of work has been "perfected," practice-guiding diagnostic expert systems can be developed. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Theoretical Study of Energy Levels and Transition Probabilities of Boron Atom

    Science.gov (United States)

    Tian Yi, Zhang; Neng Wu, Zheng

    2009-08-01

    Although the electron configuration of the boron atom is simple and boron has long been of interest to many researchers, theoretical studies of the properties of B I are not systematic: few results have been reported on the energy levels of highly excited states of boron, and transition measurements are generally restricted to transitions involving the ground state and low excited states, without considering fine-structure effects, providing only multiplet results; values for transitions between highly excited states are seldom reported. In this article, calculations for the energy levels of five series are performed using the weakest-bound-electron potential model theory, and with the same method we give the transition probabilities between excited states, taking fine-structure effects into account. The comprehensive set of calculations attempted in this paper could be of some value to workers in the field because of the lack of published calculations for the B I system. The perturbations coming from foreign perturbers are taken into account in studying the energy levels. Good agreement between our results and the accepted values taken from NIST has been obtained. We also report some energy levels and transition probabilities not present in the NIST databases.

  3. Modelling the Probability Density Function of IPTV Traffic Packet Delay Variation

    Directory of Open Access Journals (Sweden)

    Michal Halas

    2012-01-01

    Full Text Available This article deals with modelling the probability density function of IPTV traffic packet delay variation. The use of this modelling is in efficient de-jitter buffer estimation. When an IP packet travels across a network, it experiences delay and delay variation. This variation is caused by routing, queueing systems and other influences such as the processing delay of the network nodes. To separate these (at least three) types of delay variation, we need a way to measure each type separately. This work focuses on the delay variation caused by queueing systems, which has the main influence on the form of the probability density function.

  4. 76 FR 36891 - Guidelines for Determining Probability of Causation Under the Energy Employees Occupational...

    Science.gov (United States)

    2011-06-23

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES 42 CFR Part 81 [Docket Number NIOSH-0209] RIN 0920-AA39 Guidelines for Determining Probability of Causation Under the Energy Employees Occupational Illness...: HHS published a proposed rule entitled ``Guidelines for Determining Probability of Causation Under the...

  5. Relationship between the Wigner function and the probability density function in quantum phase space representation

    International Nuclear Information System (INIS)

    Li Qianshu; Lue Liqiang; Wei Gongmin

    2004-01-01

    This paper discusses the relationship between the Wigner function, along with other related quasiprobability distribution functions, and the probability density distribution function constructed from the wave function of the Schroedinger equation in quantum phase space, as formulated by Torres-Vega and Frederick (TF). At the same time, a general approach in solving the wave function of the Schroedinger equation of TF quantum phase space theory is proposed. The relationship of the wave functions between the TF quantum phase space representation and the coordinate or momentum representation is thus revealed

  6. Total reflection coefficients of low-energy photons presented as universal functions

    Directory of Open Access Journals (Sweden)

    Ljubenov Vladan

    2010-01-01

    Full Text Available The possibility of expressing the total particle and energy reflection coefficients of low-energy photons in the form of universal functions valid for different shielding materials is investigated in this paper. The analysis is based on the results of Monte Carlo simulations of photon reflection using the MCNP, FOTELP, and PENELOPE codes. Normal incidence of a narrow monoenergetic photon beam of unit intensity and initial energies from 20 keV up to 100 keV is considered, and the particle and energy reflection coefficients from plane homogeneous targets of water, aluminum, and iron are determined and compared. Representations of the albedo coefficients as functions of the initial photon energy, of the probability of large-angle photon scattering, and of the mean number of photon scatterings are examined. It is found that only the rescaled albedo coefficients, dependent on the mean number of photon scatterings, have the form of universal functions, and these functions are determined by applying the least-squares method.

  7. Probability laws related to the Jacobi theta and Riemann zeta function and Brownian excursions

    OpenAIRE

    Biane, P.; Pitman, J.; Yor, M.

    1999-01-01

    This paper reviews known results which connect Riemann's integral representations of his zeta function, involving Jacobi's theta function and its derivatives, to some particular probability laws governing sums of independent exponential variables. These laws are related to one-dimensional Brownian motion and to higher dimensional Bessel processes. We present some characterizations of these probability laws, and some approximations of Riemann's zeta function which are related to these laws.

  8. Charged-particle thermonuclear reaction rates: II. Tables and graphs of reaction rates and probability density functions

    International Nuclear Information System (INIS)

    Iliadis, C.; Longland, R.; Champagne, A.E.; Coc, A.; Fitzgerald, R.

    2010-01-01

    Numerical values of charged-particle thermonuclear reaction rates for nuclei in the A=14 to 40 region are tabulated. The results are obtained using a method, based on Monte Carlo techniques, that has been described in the preceding paper of this issue (Paper I). We present a low rate, median rate and high rate which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. The meaning of these quantities is in general different from the commonly reported, but statistically meaningless expressions, 'lower limit', 'nominal value' and 'upper limit' of the total reaction rate. In addition, we approximate the Monte Carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the lognormal parameters μ and σ at each temperature. We also provide a quantitative measure (Anderson-Darling test statistic) for the reliability of the lognormal approximation. The user can implement the approximate lognormal reaction rate probability density functions directly in a stellar model code for studies of stellar energy generation and nucleosynthesis. For each reaction, the Monte Carlo reaction rate probability density functions, together with their lognormal approximations, are displayed graphically for selected temperatures in order to provide a visual impression. Our new reaction rates are appropriate for bare nuclei in the laboratory. The nuclear physics input used to derive our reaction rates is presented in the subsequent paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
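The lognormal parameterization described above lends itself to a one-line recovery of the tabulated low, median and high rates. A brief illustrative sketch, not from the paper: the quoted 0.16 and 0.84 quantiles correspond (to two decimals) to one standard normal deviation either side of the median, so they are exp(μ − σ) and exp(μ + σ).

```python
import math

def rate_quantiles(mu, sigma):
    """Low/median/high reaction rates from a lognormal approximation:
    the 0.16, 0.50 and 0.84 quantiles are exp(mu - sigma), exp(mu) and
    exp(mu + sigma), since these quantiles sit one standard normal
    deviation either side of the median."""
    return math.exp(mu - sigma), math.exp(mu), math.exp(mu + sigma)
```

Note the geometric symmetry: low × high = median², a quick consistency check when implementing tabulated (μ, σ) pairs in a stellar model code.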

  9. Probability of Interference-Optimal and Energy-Efficient Analysis for Topology Control in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ning Li

    2016-11-01

    Full Text Available Because wireless sensor networks (WSNs have been widely used in recent years, how to reduce their energy consumption and interference has become a major issue. Topology control is a common and effective approach to improve network performance, such as reducing the energy consumption and network interference, improving the network connectivity, etc. Many topology control algorithms reduce network interference by dynamically adjusting the node transmission range. However, reducing the network interference by adjusting the transmission range is probabilistic. Therefore, in this paper, we analyze the probability of interference-optimality for the WSNs and prove that the probability of interference-optimality increases with the increasing of the original transmission range. Under a specific transmission range, the probability reaches the maximum value when the transmission range is 0.85r in homogeneous networks and 0.84r in heterogeneous networks. In addition, we also prove that when the network is energy-efficient, the network is also interference-optimal with probability 1 both in the homogeneous and heterogeneous networks.

  10. Probability of spin flipping of proton with energy 6.9 MeV at inelastic scattering with 54,56Fe nuclei

    International Nuclear Information System (INIS)

    Prokopenko, V.S.; Sklyarenko, V.; Chernievskij, V.K.; Shustov, A.V.

    1980-01-01

    Spin-orbital effects in the inelastic scattering of protons by medium-weight nuclei are investigated, along with the reaction mechanisms, by measuring proton spin flip. The experiment consists of measuring proton-gamma coincidences in mutually perpendicular planes using the fast-slow coincidence technique. The excitation function of the 56Fe(p,p') reaction is measured in the 3.5-6.2 MeV energy range. Angular dependences of the probability of proton spin flip (2+ level, 0.847 MeV) are measured at incident proton energies of 4.96, 5.58 and 5.88 MeV. Measurements of the probability of proton spin flip in inelastic scattering by 54,56Fe nuclei are performed as part of the study of spin-orbital effects and reaction mechanisms. It is concluded that the inelastic scattering process in the energy range under investigation proceeds mainly through two equivalent mechanisms: direct interaction and formation of a compound nucleus. The angular dependences for 54Fe and 56Fe differ noticeably in the values of the spin-flip probability in the angular range of 50-150 deg

  11. Decision making with consonant belief functions: Discrepancy resulting with the probability transformation method used

    Directory of Open Access Journals (Sweden)

    Cinicioglu Esma Nur

    2014-01-01

    Full Text Available Dempster-Shafer belief function theory can address a wider class of uncertainty than standard probability theory does, a fact that appeals to researchers in the operations research community seeking potential application areas. However, the lack of a decision theory for belief functions gives rise to the need to use probability transformation methods for decision making. For the representation of statistical evidence, the class of consonant belief functions is used, which is not closed under Dempster's rule of combination but is closed under Walley's rule of combination. In this research, it is shown that the outcomes obtained using Dempster's and Walley's rules result in different probability distributions when the pignistic transformation is used; however, when the plausibility transformation is used, they result in the same probability distribution. This result shows that the choice of combination rule and probability transformation method may have a significant effect on decision making, since it may change which decision alternative is selected. The result is illustrated via an example of missile type identification.
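The two transformations compared in the abstract are easy to state concretely. The sketch below is illustrative only (the mass values are invented): it computes the pignistic and plausibility transforms of a consonant mass function and shows that they generally disagree.

```python
def pignistic(m):
    """Pignistic transformation: each focal set's mass is split equally
    among the elements of that set."""
    bet = {}
    for focal, mass in m.items():
        for x in focal:
            bet[x] = bet.get(x, 0.0) + mass / len(focal)
    return bet

def plausibility_transform(m):
    """Plausibility transformation: singleton plausibilities (sum of the
    masses of all focal sets containing the element), renormalized."""
    pl = {}
    for focal, mass in m.items():
        for x in focal:
            pl[x] = pl.get(x, 0.0) + mass
    total = sum(pl.values())
    return {x: v / total for x, v in pl.items()}

# A consonant body of evidence (nested focal sets), e.g. missile types:
m = {frozenset({'a'}): 0.5,
     frozenset({'a', 'b'}): 0.3,
     frozenset({'a', 'b', 'c'}): 0.2}
```

Because consonant focal sets are nested, the two transforms can rank alternatives differently, which is exactly the decision-making discrepancy the paper studies.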

  12. Continuation of probability density functions using a generalized Lyapunov approach

    NARCIS (Netherlands)

    Baars, S.; Viebahn, J. P.; Mulder, T. E.; Kuehn, C.; Wubs, F. W.; Dijkstra, H. A.

    2017-01-01

    Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial

  13. Three-dimensional analytic probabilities of coupled vibrational-rotational-translational energy transfer for DSMC modeling of nonequilibrium flows

    International Nuclear Information System (INIS)

    Adamovich, Igor V.

    2014-01-01

    A three-dimensional, nonperturbative, semiclassical analytic model of vibrational energy transfer in collisions between a rotating diatomic molecule and an atom, and between two rotating diatomic molecules (Forced Harmonic Oscillator–Free Rotation model) has been extended to incorporate rotational relaxation and coupling between vibrational, translational, and rotational energy transfer. The model is based on analysis of semiclassical trajectories of rotating molecules interacting by a repulsive exponential atom-to-atom potential. The model predictions are compared with the results of three-dimensional close-coupled semiclassical trajectory calculations using the same potential energy surface. The comparison demonstrates good agreement between analytic and numerical probabilities of rotational and vibrational energy transfer processes, over a wide range of total collision energies, rotational energies, and impact parameter. The model predicts probabilities of single-quantum and multi-quantum vibrational-rotational transitions and is applicable up to very high collision energies and quantum numbers. Closed-form analytic expressions for these transition probabilities lend themselves to straightforward incorporation into DSMC nonequilibrium flow codes

  14. Influence of the level of fit of a density probability function to wind-speed data on the WECS mean power output estimation

    International Nuclear Information System (INIS)

    Carta, Jose A.; Ramirez, Penelope; Velazquez, Sergio

    2008-01-01

    Static methods based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature on wind energy. In the static method used in this paper, for a given wind-regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral of the product of these two functions, usually with numerical evaluation techniques. This paper analyses the influence of the level of fit between the empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error ε made in estimating the mean annual power output of a WECS. The mean power output calculated with a quasi-dynamic or chronological method, that is to say using time series of wind-speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R² statistic (Ra²). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models, and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) the Ra² statistic might be useful as an initial gross indicator of the relative error made in estimating the mean annual power output of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as Ra² increases
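The static method described above reduces to a one-dimensional integral of the wind-speed density times the power curve. The following is a hedged sketch, not the paper's code: it assumes a Weibull wind regime and an idealized cubic-ramp power curve for a hypothetical 330 kW machine.

```python
import math

def weibull_pdf(v, k, c):
    """Weibull wind-speed probability density with shape k and scale c (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def power_curve(v, cut_in=3.0, rated_v=13.0, cut_out=25.0, rated_p=330.0):
    """Idealized power curve (kW): cubic ramp between cut-in and rated
    speed, constant at rated power up to cut-out (hypothetical 330 kW machine)."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_v:
        return rated_p
    return rated_p * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3)

def mean_power(k, c, dv=0.01, v_max=30.0):
    """Static estimate of mean power: Riemann sum of pdf(v) * P(v) dv."""
    n = int(v_max / dv)
    return sum(weibull_pdf(i * dv, k, c) * power_curve(i * dv) * dv
               for i in range(1, n))
```

The chronological reference value would instead average `power_curve(v_t)` over the recorded hourly wind-speed series; the paper's relative error ε compares the two.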

  15. The distribution function of a probability measure on a space with a fractal structure

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Granero, M.A.; Galvez-Rodriguez, J.F.

    2017-07-01

    In this work we show how to define a probability measure with the help of a fractal structure. One of the keys of this approach is to use the completion of the fractal structure. Then we use the theory of a cumulative distribution function on a Polish ultrametric space and describe it in this context. Finally, with the help of fractal structures, we prove that a function satisfying the properties of a cumulative distribution function on a Polish ultrametric space is a cumulative distribution function with respect to some probability measure on the space. (Author)

  16. Theoretical determination of gamma spectrometry systems efficiency based on probability functions. Application to self-attenuation correction factors

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, Manuel, E-mail: manuel.barrera@uca.es [Escuela Superior de Ingeniería, University of Cadiz, Avda, Universidad de Cadiz 10, 11519 Puerto Real, Cadiz (Spain); Suarez-Llorens, Alfonso [Facultad de Ciencias, University of Cadiz, Avda, Rep. Saharaui s/n, 11510 Puerto Real, Cadiz (Spain); Casas-Ruiz, Melquiades; Alonso, José J.; Vidal, Juan [CEIMAR, University of Cadiz, Avda, Rep. Saharaui s/n, 11510 Puerto Real, Cádiz (Spain)

    2017-05-11

    A generic theoretical methodology for calculating the efficiency of gamma spectrometry systems is introduced in this work. The procedure is valid for any type of source and detector and can be applied to determine the full-energy-peak and total efficiency of any source-detector system. The methodology is based on the idea of an underlying probability of detection, which describes the physical model for the detection of gamma radiation in the particular situation studied. This probability depends explicitly on the direction of the gamma radiation, and this dependence allows the development of more realistic and complex models than the traditional ones based on point-source integration. The probability function employed in practice must reproduce the relevant characteristics of the detection process occurring in the particular situation studied. Once the probability is defined, the efficiency calculations can in general be performed using numerical methods; the Monte Carlo integration procedure is especially useful when complex probability functions are used. The methodology can be used for the direct determination of the efficiency and also for the calculation of corrections that require this determination, as is the case for coincidence-summing, geometric or self-attenuation corrections. In particular, we have applied the procedure to obtain some of the classical self-attenuation correction factors usually employed to correct for sample attenuation of cylindrical-geometry sources. The methodology clarifies the theoretical basis and approximations associated with each factor, by making explicit the probability which is generally hidden and implicit in each model.
It has been shown that most of these self-attenuation correction factors can be derived by using a common underlying probability, this probability having a growing level of complexity as it reproduces more precisely

  17. Computing exact bundle compliance control charts via probability generating functions.

    Science.gov (United States)

    Chen, Binchao; Matis, Timothy; Benneyan, James

    2016-06-01

    Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
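The convolution the abstract refers to is the Poisson-binomial distribution (the number of compliant items among independent Bernoulli trials with unequal probabilities), and the PGF approach amounts to multiplying the per-item factors (1 − pᵢ + pᵢz) and reading off the coefficients. A minimal sketch, not the authors' implementation:

```python
def poisson_binomial_pmf(probs):
    """Exact distribution of the number of compliant bundle elements,
    obtained by expanding the probability generating function
    G(z) = prod_i (1 - p_i + p_i * z); the coefficient of z**k is P[K = k].
    Each loop iteration multiplies the running polynomial by one factor."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, c in enumerate(pmf):
            nxt[k] += c * (1.0 - p)      # item non-compliant: degree unchanged
            nxt[k + 1] += c * p          # item compliant: degree + 1
        pmf = nxt
    return pmf
```

This runs in O(n²) for n items and is exact in the tails, which is precisely where series approximations degrade for control-chart limits.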

  18. Exact calculation of loop formation probability identifies folding motifs in RNA secondary structures

    Science.gov (United States)

    Sloma, Michael F.; Mathews, David H.

    2016-01-01

    RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. PMID:27852924

  19. Probability distributions in conservative energy exchange models of multiple interacting agents

    International Nuclear Information System (INIS)

    Scafetta, Nicola; West, Bruce J

    2007-01-01

    Herein we study energy exchange models of multiple interacting agents that conserve energy in each interaction. The models differ regarding the rules that regulate the energy exchange and boundary effects. We find a variety of stochastic behaviours that manifest energy equilibrium probability distributions of different types and interaction rules that yield not only the exponential distributions such as the familiar Maxwell-Boltzmann-Gibbs distribution of an elastically colliding ideal particle gas, but also uniform distributions, truncated exponential distributions, Gaussian distributions, Gamma distributions, inverse power law distributions, mixed exponential and inverse power law distributions, and evolving distributions. This wide variety of distributions should be of value in determining the underlying mechanisms generating the statistical properties of complex phenomena including those to be found in complex chemical reactions
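A minimal simulation of one such conservative exchange rule (illustrative only, not any specific model from the paper): a randomly chosen pair pools its energy and splits the pool uniformly, which drives the ensemble toward the exponential Boltzmann-Gibbs distribution.

```python
import random

def simulate_exchange(n_agents=1000, steps=200000, seed=42):
    """Conservative pairwise exchange: a randomly chosen pair pools its
    energy and splits the pool uniformly. Total energy is conserved at
    every step, and the ensemble relaxes to the exponential
    (Boltzmann-Gibbs) distribution with mean equal to the average energy."""
    rng = random.Random(seed)
    energy = [1.0] * n_agents
    for _ in range(steps):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        if i == j:
            continue
        pool = energy[i] + energy[j]
        share = rng.random() * pool
        energy[i], energy[j] = share, pool - share
    return energy
```

For an exponential equilibrium with unit mean, the fraction of agents below the mean should approach 1 − e⁻¹ ≈ 0.63; changing the split rule (e.g. a fixed saving fraction) produces the Gamma-like distributions mentioned in the abstract.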

  20. Sharp Bounds by Probability-Generating Functions and Variable Drift

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Fouz, Mahmoud; Witt, Carsten

    2011-01-01

    We introduce to the runtime analysis of evolutionary algorithms two powerful techniques: probability-generating functions and variable drift analysis. They are shown to provide a clean framework for proving sharp upper and lower bounds. As an application, we improve the results by Doerr et al. (GECCO 2010) in several respects. First, the upper bound on the expected running time of the most successful quasirandom evolutionary algorithm for the OneMax function is improved from 1.28 n ln n to 0.982 n ln n, which breaks the barrier of n ln n posed by coupon-collector processes. Compared to the classical...

  1. Using the probability method for multigroup calculations of reactor cells in a thermal energy range

    International Nuclear Information System (INIS)

    Rubin, I.E.; Pustoshilova, V.S.

    1984-01-01

    The possibility of using the transmission probability method with performance interpolation for determining the spatial-energy neutron flux distribution in cells of thermal heterogeneous reactors is considered. The results of multigroup calculations of several uranium-water plane and cylindrical cells with different fuel enrichment in the thermal energy range are given. High accuracy of the results is obtained with low computer time consumption. The use of the transmission probability method is particularly reasonable in programme algorithms for computers with a significant reserve of internal memory

  2. Outage Probability Analysis in Power-Beacon Assisted Energy Harvesting Cognitive Relay Wireless Networks

    Directory of Open Access Journals (Sweden)

    Ngoc Phuc Le

    2017-01-01

    Full Text Available We study the performance of the secondary relay system in a power-beacon (PB assisted energy harvesting cognitive relay wireless network. In our system model, a secondary source node and a relay node first harvest energy from distributed PBs. Then, the source node transmits its data to the destination node with the help of the relay node. Also, fading coefficients of the links from the PBs to the source node and relay node are assumed independent but not necessarily identically distributed (i.n.i.d Nakagami-m random variables. We derive exact expressions for the power outage probability and the channel outage probability. Based on that, we analyze the total outage probability of the secondary relay system. Asymptotic analysis is also performed, which provides insights into the system behavior. Moreover, we evaluate impacts of the primary network on the performance of the secondary network with respect to the tolerant interference threshold at the primary receiver as well as the interference introduced by the primary transmitter at the secondary source and relay nodes. Simulation results are provided to validate the analysis.
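The fading assumptions can be checked numerically. The sketch below is a generic Monte Carlo estimate of a single Nakagami-m link's channel outage, not the paper's closed-form analysis, and all parameter values are illustrative.

```python
import random

def channel_outage_mc(m, omega, snr, rate, n=200000, seed=1):
    """Monte Carlo channel-outage probability of one Nakagami-m link:
    the channel power gain is Gamma(m, omega/m) distributed, and an
    outage occurs when log2(1 + snr * gain) falls below the target
    rate, i.e. when gain < (2**rate - 1) / snr."""
    rng = random.Random(seed)
    thresh = (2.0 ** rate - 1.0) / snr
    hits = sum(rng.gammavariate(m, omega / m) < thresh for _ in range(n))
    return hits / n
```

For m = 1 this reduces to Rayleigh fading with the closed form 1 − exp(−thresh/Ω), a handy validation target; the paper's total outage combines such channel outages with the power outage of the harvesting phase.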

  3. Implementation of the probability table method in a continuous-energy Monte Carlo code system

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.

    1998-10-01

    RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5
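The run-time lookup that a probability-table treatment implies can be sketched generically. The code below is illustrative only (the table values are invented, and RACER's actual data layout is not described in the abstract): a probability table at one incident energy is a list of cumulative-probability/cross-section band pairs, sampled by inverse transform.

```python
import bisect
import random

def sample_cross_section(table, rng):
    """Sample a cross section from an unresolved-resonance-range
    probability table: 'table' lists (cumulative_probability,
    cross_section) band pairs valid at one incident energy; a band is
    selected by inverse-transform sampling on the cumulative column."""
    cdf = [c for c, _ in table]
    i = bisect.bisect_left(cdf, rng.random())
    return table[i][1]

# Invented three-band table at a single energy point (barns):
table = [(0.2, 10.0), (0.7, 15.0), (1.0, 30.0)]
rng = random.Random(0)
sigma = sample_cross_section(table, rng)
```

Averaging many such samples recovers the dilute-average cross section, while individual histories see the band-to-band fluctuations that the dilute-average treatment suppresses.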

  4. Elements of a function analytic approach to probability.

    Energy Technology Data Exchange (ETDEWEB)

    Ghanem, Roger Georges (University of Southern California, Los Angeles, CA); Red-Horse, John Robert

    2008-02-01

    We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.

  5. Updated greenhouse gas and criteria air pollutant emission factors and their probability distribution functions for electricity generating units

    International Nuclear Information System (INIS)

    Cai, H.; Wang, M.; Elgowainy, A.; Han, J.

    2012-01-01

Greenhouse gas (CO2, CH4 and N2O, hereinafter GHG) and criteria air pollutant (CO, NOx, VOC, PM10, PM2.5 and SOx, hereinafter CAP) emission factors for various types of power plants burning various fuels with different technologies are important upstream parameters for estimating life-cycle emissions associated with alternative vehicle/fuel systems in the transportation sector, especially electric vehicles. The emission factors are typically expressed in grams of GHG or CAP per kWh of electricity generated by a specific power generation technology. This document describes our approach for updating and expanding GHG and CAP emission factors in the GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model developed at Argonne National Laboratory (see Wang 1999 and the GREET website at http://greet.es.anl.gov/main) for various power generation technologies. These GHG and CAP emissions are used to estimate the impact of electricity use by stationary and transportation applications on their fuel-cycle emissions. The electricity generation mixes and the fuel shares attributable to various combustion technologies at the national, regional and state levels are also updated in this document. The energy conversion efficiencies of electric generating units (EGUs) by fuel type and combustion technology are calculated on the basis of the lower heating values of each fuel, to be consistent with the basis used in GREET for transportation fuels. On the basis of the updated GHG and CAP emission factors and energy efficiencies of EGUs, the probability distribution functions (PDFs), which describe the relative likelihood that the emission factors and energy efficiencies, treated as random variables, take on given values, are updated using best-fit statistical curves to characterize the uncertainties associated with GHG and CAP emissions in life-cycle modeling with GREET.

  6. Assumed Probability Density Functions for Shallow and Deep Convection

    OpenAIRE

    Steven K Krueger; Peter A Bogenschutz; Marat Khairoutdinov

    2010-01-01

    The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PD...

  7. Energies and transition probabilities from the full solution of nuclear quadrupole-octupole model

    International Nuclear Information System (INIS)

    Strecker, M.; Lenske, H.; Minkov, N.

    2013-01-01

    A collective model of nuclear quadrupole-octupole vibrations and rotations, originally restricted to a coherent interplay between quadrupole and octupole modes, is now developed for application beyond this restriction. The eigenvalue problem is solved by diagonalizing the unrestricted Hamiltonian in the basis of the analytic solution obtained in the case of the coherent-mode assumption. Within this scheme the yrast alternating-parity band is constructed by the lowest eigenvalues having the appropriate parity at given angular momentum. Additionally we include the calculation of transition probabilities which are fitted with the energies simultaneously. As a result we obtain a unique set of parameters. The obtained model parameters unambiguously determine the shape of the quadrupole-octupole potential. From the resulting wave functions quadrupole deformation expectation values are calculated which are found to be in agreement with experimental values. (author)

  8. Impact parameter dependence of inner-shell ionization probabilities

    International Nuclear Information System (INIS)

    Cocke, C.L.

    1974-01-01

    The probability for ionization of an inner shell of a target atom by a heavy charged projectile is a sensitive function of the impact parameter characterizing the collision. This probability can be measured experimentally by detecting the x-ray resulting from radiative filling of the inner shell in coincidence with the projectile scattered at a determined angle, and by using the scattering angle to deduce the impact parameter. It is conjectured that the functional dependence of the ionization probability may be a more sensitive probe of the ionization mechanism than is a total cross section measurement. Experimental results for the K-shell ionization of both solid and gas targets by oxygen, carbon and fluorine projectiles in the MeV/amu energy range will be presented, and their use in illuminating the inelastic collision process discussed

  9. Exact probability distribution function for the volatility of cumulative production

    Science.gov (United States)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

In this paper we study the volatility and its probability distribution function for cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.

  10. Compact baby universe model in ten dimension and probability function of quantum gravity

    International Nuclear Information System (INIS)

    Yan Jun; Hu Shike

    1991-01-01

    The quantum probability functions are calculated for ten-dimensional compact baby universe model. The authors find that the probability for the Yang-Mills baby universe to undergo a spontaneous compactification down to a four-dimensional spacetime is greater than that to remain in the original homogeneous multidimensional state. Some questions about large-wormhole catastrophe are also discussed

  11. Influence of the level of fit of a density probability function to wind-speed data on the WECS mean power output estimation

    Energy Technology Data Exchange (ETDEWEB)

    Carta, Jose A. [Department of Mechanical Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Ramirez, Penelope; Velazquez, Sergio [Department of Renewable Energies, Technological Institute of the Canary Islands, Pozo Izquierdo Beach s/n, 35119 Santa Lucia, Gran Canaria, Canary Islands (Spain)

    2008-10-15

    Static methods which are based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature related to wind energy. In the static method which we use in this paper, for a given wind regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral, usually using numerical evaluation techniques, of the product of these two functions. In this paper an analysis is made of the influence of the level of fit between an empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error {epsilon} made in the estimation of the mean annual power output of a WECS. The mean power output calculated through the use of a quasi-dynamic or chronological method, that is to say using time-series of wind speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R{sup 2} statistic (R{sub a}{sup 2}). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) that the R{sub a}{sup 2} statistic might be useful as an initial gross indicator of the relative error made in the mean annual power output estimation of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as R{sub a}{sup 2} increases. (author)
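
The static method described above, i.e. integrating the product of the wind-speed probability density function and the turbine power curve, can be sketched as follows. The Weibull parameters and the power-curve speeds below are invented for illustration; they are not the Canarian data or the actual 800 kW turbine characteristic.

```python
import math

# Illustrative Weibull wind-speed model (shape k, scale c in m/s).
k, c = 2.0, 8.0

def weibull_pdf(v):
    return (k / c) * (v / c) ** (k - 1) * math.exp(-(v / c) ** k)

# Simplified power curve of a hypothetical 800 kW turbine.
def power_kw(v):
    v_in, v_rated, v_out, p_rated = 3.0, 13.0, 25.0, 800.0
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    # cubic rise between cut-in and rated speed
    return p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)

# Mean power output: trapezoidal evaluation of the integral of f(v) * P(v).
def mean_power(n=5000, v_max=30.0):
    h = v_max / n
    total = 0.0
    for i in range(n + 1):
        v = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * weibull_pdf(v) * power_kw(v)
    return total * h
```

The quasi-dynamic reference described in the abstract would instead average `power_kw` over a recorded time series of hourly wind speeds; the relative error between the two is what the paper studies as a function of goodness of fit.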

  12. The trapping of potassium atoms by a polycrystalline tungsten surface as a function of energy and angle of incidence. ch. 1

    International Nuclear Information System (INIS)

    Hurkmans, A.; Overbosch, E.G.; Olander, D.R.; Los, J.

    1976-01-01

    The trapping probability of potassium atoms on a polycrystalline tungsten surface has been measured as a function of the angle of incidence and as a function of the energy of the incoming atoms. Below an energy of 1 eV the trapping was complete; above 20 eV only reflection occurred. The trapping probability increased with increasing angle of incidence. The measurements are compared with a simple model of the fraction of atoms initially trapped. The model, a one-dimensional cube model including a Boltzmann distribution of the velocities of oscillating surface atoms, partially explains the data. The trapping probability as a function of incoming energy is well described for normal incidence, justifying the inclusion of thermal motion of the surface atoms in the model. The angular dependence can be explained in a qualitative way, although there is a substantial discrepancy for large angles of incidence, showing the presence of surface structure. (Auth.)

  13. On the magnetization process and the associated probability in anisotropic cubic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Khedr, D.M., E-mail: doaamohammed88@gmail.com [Department of Basic Science, Modern Academy of Engineering and Technology at Maadi, Cairo (Egypt); Aly, Samy H.; Shabara, Reham M. [Department of Physics, Faculty of Science at Damietta, University of Damietta, Damietta (Egypt); Yehia, Sherif [Department of Physics, Faculty of Science at Helwan, University of Helwan, Helwan (Egypt)

    2017-05-15

    We present a theoretical method to calculate specific magnetic properties, e.g. magnetization curves, magnetic susceptibility and probability landscapes along the [100], [110] and [111] crystallographic directions of a crystal of cubic symmetry. The probability landscape displays the evolution of the most probable angular orientation of the magnetization vector, for selected temperatures and magnetic fields. Our method is based on the premises of classical statistical mechanics. The energy density, used in the partition function, is the sum of magnetic anisotropy and Zeeman energies, however no other energies e.g. elastic or magnetoelastic terms are considered in the present work. Model cubic systems of diverse anisotropies are analyzed first, and subsequently material magnetic systems of cubic symmetry; namely iron, nickel and Co{sub x} Fe{sub 100−x} compounds, are discussed. We highlight a correlation between magnetization curves and the associated probability landscapes. In addition, determination of easiest axes of magnetization, using energy consideration, is done and compared with the results of the present method.
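
The probability-landscape construction can be sketched numerically. The parameters below are rough iron-like placeholders (not values fitted in the paper), and, as in the text, the Boltzmann weight uses only the cubic anisotropy and Zeeman energy densities.

```python
import math

# Illustrative parameters for a cubic ferromagnet (roughly iron-like).
K1 = 4.8e4            # first cubic anisotropy constant, J/m^3
Ms = 1.7e6            # saturation magnetization, A/m
mu0 = 4e-7 * math.pi
V = 1e-24             # particle volume, m^3
kT = 1.380649e-23 * 300.0
H = 1e4               # applied field along [100], A/m

def energy_density(theta, phi):
    # direction cosines of the magnetization vector
    a1 = math.sin(theta) * math.cos(phi)
    a2 = math.sin(theta) * math.sin(phi)
    a3 = math.cos(theta)
    anis = K1 * (a1*a1*a2*a2 + a2*a2*a3*a3 + a3*a3*a1*a1)
    zeeman = -mu0 * Ms * H * a1          # field along [100]
    return anis + zeeman

# Discretized partition function and probability landscape over (theta, phi).
n = 60
grid, Z = [], 0.0
for i in range(1, n):
    theta = math.pi * i / n
    for j in range(2 * n):
        phi = math.pi * j / n
        w = math.exp(-energy_density(theta, phi) * V / kT) * math.sin(theta)
        grid.append(((theta, phi), w))
        Z += w
landscape = {angles: w / Z for angles, w in grid}
best = max(landscape, key=landscape.get)
```

With K1 > 0 and the field along [100], the most probable orientation sits on the [100] axis (theta = pi/2, phi = 0), consistent with the easiest-axis determination discussed in the abstract.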

  14. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  15. Theoretical derivation of wind power probability distribution function and applications

    International Nuclear Information System (INIS)

    Altunkaynak, Abdüsselam; Erdik, Tarkan; Dabanlı, İsmail; Şen, Zekai

    2012-01-01

Highlights: ► Wind power stochastic characteristics, namely the standard deviation and the dimensionless skewness, are derived. ► Perturbation expressions are obtained for the wind power statistics from the Weibull probability distribution function (PDF). ► The derived characteristics are compared with the corresponding characteristics of the wind speed PDF. ► The wind power abides by the Weibull PDF. -- Abstract: The instantaneous wind power contained in the air current is directly proportional to the cube of the wind speed. In practice, there is a record of wind speeds in the form of a time series. It is, therefore, necessary to develop a formulation that takes into consideration the statistical parameters of such a time series. The purpose of this paper is to derive the general wind power formulation in terms of the statistical parameters by using the perturbation theory, which leads to a general formulation of the wind power expectation and other statistical parameter expressions such as the standard deviation and the coefficient of variation. The formulation is very general and can be applied specifically for any wind speed probability distribution function. Its application to the two-parameter Weibull probability distribution of wind speeds is presented in full detail. It is concluded that, provided wind speed is distributed according to a Weibull distribution, the wind power can be derived from wind speed data. It is possible to determine wind power at any desired risk level; however, in practical studies 5% or 10% risk levels are most often preferred, and the necessary simple procedure for this purpose is presented in this paper.
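
The cube-law relation underlying the derivation can be checked numerically. Assuming, for illustration only, a Weibull wind regime with shape k = 2 and scale c = 8 m/s, the mean of the instantaneous power density P = (1/2)ρv³ follows from the closed-form third moment E[v³] = c³Γ(1 + 3/k):

```python
import math

rho = 1.225          # air density, kg/m^3 (illustrative)
k, c = 2.0, 8.0      # illustrative Weibull shape and scale parameters

# Closed-form mean power density from the Weibull moment formula.
mean_power_density = 0.5 * rho * c**3 * math.gamma(1.0 + 3.0 / k)

# Cross-check: numerically integrate v^3 * pdf(v) with the trapezoidal rule.
def weibull_pdf(v):
    return (k / c) * (v / c) ** (k - 1) * math.exp(-(v / c) ** k)

n, v_max = 20000, 80.0
h = v_max / n
third_moment = sum(
    (0.5 if i in (0, n) else 1.0) * (i * h) ** 3 * weibull_pdf(i * h)
    for i in range(n + 1)
) * h
```

The same moment machinery yields the higher-order statistics (standard deviation, skewness) that the paper derives via perturbation expansions.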

  16. Evaluation of probability and hazard in nuclear energy

    International Nuclear Information System (INIS)

    Novikov, V.Ya.; Romanov, N.L.

    1979-01-01

Various methods for evaluating accident probability at nuclear power plants (NPPs) are proposed, because statistical evaluation of NPP safety is unreliable. The concept of subjective probability for quantitative analysis of safety and hazard is described. Interpretation of probability as the actual belief of an expert is assumed as the basis of this concept. It is suggested that event uncertainty be studied in the framework of subjective probability theory, which not only permits but demands that expert opinions be taken into account when evaluating probabilities. These subjective expert evaluations affect to a certain extent the calculation of the usual mathematical event probability. The above technique is advantageous for the consideration of a separate experiment or random event.

  17. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information on the variables is extracted by pre-sampling points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
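
A minimal sketch of the adaptive importance-sampling idea, on a toy limit-state function rather than the AP1000 model: pre-sample to locate failure points, fit a sampling density to them, then estimate the failure probability with likelihood-ratio weights. All numbers are illustrative.

```python
import math
import random

random.seed(7)

# Toy problem: failure when g(x) = x1 + x2 > 6 for independent standard
# normal inputs, so the exact failure probability is Phi(-6/sqrt(2)).
def failed(x1, x2):
    return x1 + x2 > 6.0

# Step 1: pre-sample with an inflated spread to locate failure points.
pre = [(random.gauss(0, 1.5), random.gauss(0, 1.5)) for _ in range(200_000)]
fail_pts = [p for p in pre if failed(*p)]

# Step 2: center an importance density on the failure-sample means.
m1 = sum(p[0] for p in fail_pts) / len(fail_pts)
m2 = sum(p[1] for p in fail_pts) / len(fail_pts)

def phi(x, mu=0.0, s=1.0):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Step 3: estimate P_f with weights f(x)/q(x).
n, total = 100_000, 0.0
for _ in range(n):
    x1, x2 = random.gauss(m1, 1.0), random.gauss(m2, 1.0)
    if failed(x1, x2):
        w = (phi(x1) * phi(x2)) / (phi(x1, m1) * phi(x2, m2))
        total += w
p_f = total / n
```

Because the sampling density concentrates on the failure region, far fewer samples are needed than with crude Monte Carlo for a failure probability of order 1e-5.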

  18. Non-monotonic probability of thermal reversal in thin-film biaxial nanomagnets with small energy barriers

    Directory of Open Access Journals (Sweden)

    N. Kani

    2017-05-01

Full Text Available The goal of this paper is to investigate the short time-scale, thermally-induced probability of magnetization reversal for a nanomagnet characterized by a biaxial magnetic anisotropy. For the first time, we clearly show that for a given energy barrier of the nanomagnet, the magnetization reversal probability of a biaxial nanomagnet exhibits a non-monotonic dependence on its saturation magnetization. Specifically, there are two reasons for this non-monotonic behavior in rectangular thin-film nanomagnets that have a large perpendicular magnetic anisotropy. First, a large perpendicular anisotropy lowers the precessional period of the magnetization, making it more likely to precess across the x̂=0 plane if the magnetization energy exceeds the energy barrier. Second, the thermal-field torque at a particular energy increases as the magnitude of the perpendicular anisotropy increases during the magnetization precession. This non-monotonic behavior is most noticeable when analyzing magnetization reversals on time-scales of up to several tens of ns. In light of several proposals of spintronic devices that require data retention on time-scales of tens of ns, understanding the probability of magnetization reversal on short time-scales is important. As such, the results presented in this paper will be helpful in quantifying the reliability and noise sensitivity of spintronic devices in which thermal noise is inevitably present.

  19. Outage Probability Minimization for Energy Harvesting Cognitive Radio Sensor Networks

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2017-01-01

Full Text Available The incorporation of cognitive radio (CR) capability in wireless sensor networks yields a promising network paradigm known as CR sensor networks (CRSNs), which are able to provide spectrum-efficient data communication. However, due to the high energy consumption resulting from spectrum sensing, as well as from subsequent data transmission, the energy supply of conventional battery-powered sensor nodes is regarded as a severe bottleneck for sustainable operation. The energy harvesting (EH) technique, which gathers energy from the ambient environment, is regarded as a promising solution for perpetually powering energy-limited devices with a continual source of energy. Therefore, applying the EH technique in CRSNs can facilitate the self-sustainability of the energy-limited sensors. The primary concern of this study is to design sensing-transmission policies that minimize the long-term outage probability of EH-powered CR sensor nodes. We formulate this problem as an infinite-horizon discounted Markov decision process and propose an ϵ-optimal sensing-transmission (ST) policy obtained through the value iteration algorithm. Here ϵ is the error bound between the ST policy and the optimal policy, which can be pre-defined according to actual needs. Moreover, for the special case in which the signal-to-noise (SNR) power ratio is sufficiently high, we present an efficient transmission (ET) policy and prove that the ET policy achieves the same performance as the ST policy. Finally, extensive simulations are conducted to evaluate the performance of the proposed policies and the impact of various network parameters.
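
The value-iteration machinery behind an ϵ-optimal policy can be sketched on a toy two-state MDP. The states, actions, transition probabilities and rewards below are invented stand-ins, not the EH-CRSN sensing/transmission model.

```python
# Toy MDP: action 0 = stay idle (harvest), action 1 = sense and transmit.
# Rewards stand in for negative outage indicators; all numbers illustrative.
gamma = 0.9          # discount factor
eps = 1e-6           # target error bound on the value function

# P[s][a] = list of (next_state, probability); R[s][a] = expected reward.
P = {0: {0: [(0, 0.7), (1, 0.3)], 1: [(0, 0.9), (1, 0.1)]},
     1: {0: [(0, 0.4), (1, 0.6)], 1: [(1, 1.0)]}}
R = {0: {0: 0.0, 1: -1.0}, 1: {0: 0.2, 1: 1.0}}

V = {0: 0.0, 1: 0.0}
while True:
    newV = {}
    for s in V:
        newV[s] = max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                      for a in P[s])
    delta = max(abs(newV[s] - V[s]) for s in V)
    V = newV
    # standard stopping rule: contraction guarantees an eps-accurate V
    if delta < eps * (1 - gamma) / (2 * gamma):
        break

# Greedy policy extracted from the converged value function.
policy = {s: max(P[s], key=lambda a: R[s][a] +
                 gamma * sum(p * V[t] for t, p in P[s][a])) for s in V}
```

The stopping threshold is the standard one for discounted value iteration; shrinking ϵ trades extra iterations for a tighter bound on the gap to the optimal policy, mirroring the pre-defined error bound in the abstract.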

  20. Updated greenhouse gas and criteria air pollutant emission factors and their probability distribution functions for electricity generating units

    Energy Technology Data Exchange (ETDEWEB)

    Cai, H.; Wang, M.; Elgowainy, A.; Han, J. (Energy Systems)

    2012-07-06

Greenhouse gas (CO{sub 2}, CH{sub 4} and N{sub 2}O, hereinafter GHG) and criteria air pollutant (CO, NO{sub x}, VOC, PM{sub 10}, PM{sub 2.5} and SO{sub x}, hereinafter CAP) emission factors for various types of power plants burning various fuels with different technologies are important upstream parameters for estimating life-cycle emissions associated with alternative vehicle/fuel systems in the transportation sector, especially electric vehicles. The emission factors are typically expressed in grams of GHG or CAP per kWh of electricity generated by a specific power generation technology. This document describes our approach for updating and expanding GHG and CAP emission factors in the GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model developed at Argonne National Laboratory (see Wang 1999 and the GREET website at http://greet.es.anl.gov/main) for various power generation technologies. These GHG and CAP emissions are used to estimate the impact of electricity use by stationary and transportation applications on their fuel-cycle emissions. The electricity generation mixes and the fuel shares attributable to various combustion technologies at the national, regional and state levels are also updated in this document. The energy conversion efficiencies of electric generating units (EGUs) by fuel type and combustion technology are calculated on the basis of the lower heating values of each fuel, to be consistent with the basis used in GREET for transportation fuels. On the basis of the updated GHG and CAP emission factors and energy efficiencies of EGUs, the probability distribution functions (PDFs), which describe the relative likelihood that the emission factors and energy efficiencies, treated as random variables, take on given values, are updated using best-fit statistical curves to characterize the uncertainties associated with GHG and CAP emissions in life-cycle modeling with GREET.

  1. Probability density of wave function of excited photoelectron: understanding XANES features

    Czech Academy of Sciences Publication Activity Database

    Šipr, Ondřej

    2001-01-01

    Roč. 8, - (2001), s. 232-234 ISSN 0909-0495 R&D Projects: GA ČR GA202/99/0404 Institutional research plan: CEZ:A02/98:Z1-010-914 Keywords : XANES * PED - probability density of wave function Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.519, year: 2001

  2. Bounds for the probability distribution function of the linear ACD process

    OpenAIRE

    Fernandes, Marcelo

    2003-01-01

This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.

  3. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    Science.gov (United States)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  4. Quantum Probabilities as Behavioral Probabilities

    Directory of Open Access Journals (Sweden)

    Vyacheslav I. Yukalov

    2017-03-01

    Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.

  5. Meteorological evaluation of multiple reactor contamination probabilities for a Hanford Nuclear Energy Center

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Diebel, D.I.

    1978-03-01

    The conceptual Hanford energy center is composed of nuclear power plants, hence the name Hanford Nuclear Energy Center (HNEC). Previous topical reports have covered a variety of subjects related to the HNEC including: electric power transmission, fuel cycle, and heat disposal. This report discusses the probability that a radiation release from a single reactor in the HNEC would contaminate other facilities in the center. The risks, in terms of reliability of generation, of this potential contamination are examined by Clark and Dowis

  6. Energy levels and transition probabilities for Fe XXV ions

    Energy Technology Data Exchange (ETDEWEB)

    Norrington, P.H.; Kingston, A.E.; Boone, A.W. [Department of Applied Maths and Theoretical Physics, Queen's University, Belfast BT7 1NN (United Kingdom)

    2000-05-14

    The energy levels of the 1s{sup 2}, 1s2l and 1s3l states of helium-like iron Fe XXV have been calculated using two sets of configuration-interaction wavefunctions. One set of wavefunctions was generated using the fully relativistic GRASP code and the other was obtained using CIV3, in which relativistic effects are introduced using the Breit-Pauli approximation. For transitions from the ground state to the n=2 and 3 states and for transitions between the n=2 and 3 states, the calculated excitation energies obtained by these two independent methods are in very good agreement and there is good agreement between these results and recent theoretical and experimental results. However, there is considerable disagreement between the various excitation energies for the transitions among the n=2 and also among the n=3 states. The two sets of wavefunctions are also used to calculate the E1, E2, M1 and M2 transition probabilities between all of the 1s{sup 2}, 1s2l and 1s3l states of helium-like iron Fe XXV. The results from the two calculations are found to be similar and to compare very well with other recent results for {delta}n=1 or 2 transitions. For {delta}n=0 transitions the agreement is much less satisfactory; this is mainly due to differences in the excitation energies. (author)

  7. Use of probability tables for propagating uncertainties in neutronics

    International Nuclear Information System (INIS)

    Coste-Delclaux, M.; Diop, C.M.; Lahaye, S.

    2017-01-01

    Highlights: • Moment-based probability table formalism is described. • Representation by probability tables of any uncertainty distribution is established. • Multiband equations for two kinds of uncertainty propagation problems are solved. • Numerical examples are provided and validated against Monte Carlo simulations. - Abstract: Probability tables are a generic tool that allows representing any random variable whose probability density function is known. In the field of nuclear reactor physics, this tool is currently used to represent the variation of cross-sections versus energy (neutron transport codes TRIPOLI4®, MCNP, APOLLO2, APOLLO3®, ECCO/ERANOS…). In the present article we show how we can propagate uncertainties, thanks to a probability table representation, through two simple physical problems: an eigenvalue problem (neutron multiplication factor) and a depletion problem.
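
The idea of representing a random variable by a probability table and propagating it through a model can be sketched in a few lines. The two-band table below matches the first two moments of a uniform variable; it is a toy stand-in, not the moment-based cross-section tables of the cited codes.

```python
import math
import random

# Two-band table matching the mean and variance of a uniform(0, 1) variable:
# Gauss-Legendre style abscissas 0.5 +/- 1/(2*sqrt(3)) with equal weights.
# Each entry is (band probability, band value).
table = [(0.5, 0.5 - 1 / (2 * math.sqrt(3))),
         (0.5, 0.5 + 1 / (2 * math.sqrt(3)))]

def f(x):
    """Nonlinear response to propagate the uncertainty through."""
    return math.exp(x)

# Propagated mean from the table: sum of p_i * f(x_i).
table_mean = sum(p * f(x) for p, x in table)

# Reference: Monte Carlo over the true uniform distribution.
random.seed(0)
mc_mean = sum(f(random.random()) for _ in range(200_000)) / 200_000
```

Two bands already reproduce the propagated mean to well under one percent here; in the multiband formalism more bands capture higher moments of the input distribution.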

  8. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    Science.gov (United States)

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  9. Probability for statisticians

    CERN Document Server

    Shorack, Galen R

    2017-01-01

    This 2nd edition textbook offers a rigorous introduction to measure theoretic probability with particular attention to topics of interest to mathematical statisticians—a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For the teaching of probability theory to post graduate statistics students, this is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method either prior to or alternative to a characteristic function presentation. Additionally, there is considerable emphasis placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes coverage of censored data martingales. The text includes measure theoretic...

  10. Probability of Failure in Random Vibration

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    1988-01-01

    Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out......-crossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval and thus for the first-passage probability...

  11. Interactive design of probability density functions for shape grammars

    KAUST Repository

    Dang, Minh

    2015-11-02

    A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density function (pdf) over such a shape space and to sample models according to the designed pdf. First, we propose a user interface that enables a user to quickly provide preference scores for selected shapes and suggest sampling strategies to decide which models to present to the user to evaluate. Second, we propose a novel kernel function to encode the similarity between two procedural models. Third, we propose a framework to interpolate user preference scores by combining multiple techniques: function factorization, Gaussian process regression, autorelevance detection, and l1 regularization. Fourth, we modify the original grammars to generate models with a pdf proportional to the user preference scores. Finally, we provide evaluations of our user interface and framework parameters and a comparison to other exploratory modeling techniques using modeling tasks in five example shape spaces: furniture, low-rise buildings, skyscrapers, airplanes, and vegetation.

  12. ERG review of containment failure probability and repository functional design criteria

    International Nuclear Information System (INIS)

    Gopal, S.

    1986-06-01

    The Engineering Review Group (ERG) was established by the Office of Nuclear Waste Isolation (ONWI) to help evaluate engineering-related issues in the US Department of Energy's nuclear waste repository program. The June 1984 meeting of the ERG considered two topics: (1) statistical probability for containment of nuclides within the waste package and (2) repository design criteria. This report documents the ERG's comments and recommendations on these two subjects and the ONWI response to the specific points raised by ERG

  13. Off-critical local height probabilities on a plane and critical partition functions on a cylinder

    Directory of Open Access Journals (Sweden)

    Omar Foda

    2018-03-01

    Full Text Available We compute off-critical local height probabilities in regime-III restricted solid-on-solid models in a 4N-quadrant spiral geometry, with periodic boundary conditions in the angular direction, and fixed boundary conditions in the radial direction, as a function of N, the winding number of the spiral, and τ, the departure from criticality of the model, and observe that the result depends only on the product Nτ. In the limit N→1, τ→τ0, such that τ0 is finite, we recover the off-critical local height probability on a plane, τ0-away from criticality. In the limit N→∞, τ→0, such that Nτ=τ0 is finite, and following a conformal transformation, we obtain a critical partition function on a cylinder of aspect-ratio τ0. We conclude that the off-critical local height probability on a plane, τ0-away from criticality, is equal to a critical partition function on a cylinder of aspect-ratio τ0, in agreement with a result of Saleur and Bauer.

  14. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It has also recently been used in biometric and multimedia information retrieval systems. This technology builds on successive research into audio feature extraction. The Probability Distribution Function (PDF) is a statistical method that is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of the sampled voice signals, obtained from a number of individuals, are plotted. The experimental results show visually that each individual's voice has comparable PDF values and shapes.
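
As a rough illustration of the idea, the sketch below estimates a PDF curve per signal frame (here with a Gaussian kernel density estimate) and uses the sampled curve as the feature vector. The frame length, evaluation grid, and synthetic "voice" signal are illustrative assumptions, not the authors' pre-processing pipeline.

```python
import numpy as np
from scipy.stats import gaussian_kde

def pdf_features(signal, frame_len=256, grid=np.linspace(-1.0, 1.0, 32)):
    """Estimate a PDF curve per frame and use the sampled curve as features."""
    n_frames = len(signal) // frame_len
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        kde = gaussian_kde(frame)   # smooth PDF estimate for this frame
        feats.append(kde(grid))     # sample the PDF on a common grid
    return np.array(feats)          # shape: (n_frames, len(grid))

rng = np.random.default_rng(0)
voice_like = rng.normal(0.0, 0.2, 1024)  # stand-in for a sampled voice signal
F = pdf_features(voice_like)
print(F.shape)                           # (4, 32)
```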

  15. Dynamic SEP event probability forecasts

    Science.gov (United States)

    Kahler, S. W.; Ling, A.

    2015-10-01

    The forecasting of solar energetic particle (SEP) event probabilities at Earth has been based primarily on the estimates of magnetic free energy in active regions and on the observations of peak fluxes and fluences of large (≥ M2) solar X-ray flares. These forecasts are typically issued for the next 24 h or with no definite expiration time, which can be deficient for time-critical operations when no SEP event appears following a large X-ray flare. It is therefore important to decrease the event probability forecast with time as a SEP event fails to appear. We use the NOAA listing of major (≥10 pfu) SEP events from 1976 to 2014 to plot the delay times from X-ray peaks to SEP threshold onsets as a function of solar source longitude. An algorithm is derived to decrease the SEP event probabilities with time when no event is observed to reach the 10 pfu threshold. In addition, we use known SEP event size distributions to modify probability forecasts when SEP intensity increases occur below the 10 pfu event threshold. An algorithm to provide a dynamic SEP event forecast, Pd, for both situations of SEP intensities following a large flare is derived.
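
The idea of decreasing the event probability as time passes without an SEP onset can be sketched with an empirical survival function over historical flare-to-onset delay times. This is a hedged toy version, not the paper's Pd algorithm; the delay values and the function name are invented for illustration (the real delays come from the NOAA event list and depend on source longitude).

```python
import numpy as np

def dynamic_sep_probability(p0, delays_hr, t_elapsed_hr):
    """Scale the initial forecast p0 by the fraction of historical
    flare-to-SEP-onset delays still pending after t_elapsed_hr hours."""
    delays = np.asarray(delays_hr, dtype=float)
    surviving = np.mean(delays > t_elapsed_hr)  # empirical survival function
    return p0 * surviving

# toy historical onset delays (hours)
delays = [2, 3, 5, 8, 12, 18, 24, 36]
print(dynamic_sep_probability(0.5, delays, 0))   # 0.5  (no decay yet)
print(dynamic_sep_probability(0.5, delays, 10))  # 0.25 (half the delays exceeded)
```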

  16. Critically Evaluated Energy Levels, Spectral Lines, Transition Probabilities, and Intensities of Neutral Vanadium (V I)

    Energy Technology Data Exchange (ETDEWEB)

    Saloman, Edward B. [Dakota Consulting, Inc., 1110 Bonifant Street, Suite 310, Silver Spring, MD 20910 (United States); Kramida, Alexander [National Institute of Standards and Technology, Gaithersburg, MD 20899 (United States)

    2017-08-01

    The energy levels, observed spectral lines, and transition probabilities of the neutral vanadium atom, V I, have been compiled. Also included are values for some forbidden lines that may be of interest to the astrophysical community. Experimental Landé g-factors and leading percentage compositions for the levels are included where available, as well as wavelengths calculated from the energy levels (Ritz wavelengths). Wavelengths are reported for 3985 transitions, and 549 energy levels are determined. The observed relative intensities normalized to a common scale are provided.

  17. Critically Evaluated Energy Levels, Spectral Lines, Transition Probabilities, and Intensities of Singly Ionized Vanadium (V II)

    Energy Technology Data Exchange (ETDEWEB)

    Saloman, Edward B. [Dakota Consulting, Inc., 1110 Bonifant Street, Suite 310, Silver Spring, MD 20910 (United States); Kramida, Alexander [National Institute of Standards and Technology, Gaithersburg, MD 20899 (United States)

    2017-08-01

    The energy levels, observed spectral lines, and transition probabilities of singly ionized vanadium, V II, have been compiled. The experimentally derived energy levels belong to the configurations 3d⁴, 3d³ns (n = 4, 5, 6), 3d³np, 3d³nd (n = 4, 5), 3d³4f, 3d²4s², and 3d²4s4p. Also included are values for some forbidden lines that may be of interest to the astrophysical community. Experimental Landé g-factors and leading percentages for the levels are included when available, as well as Ritz wavelengths calculated from the energy levels. Wavelengths and transition probabilities are reported for 3568 and 1896 transitions, respectively. From the list of observed wavelengths, 407 energy levels are determined. The observed intensities, normalized to a common scale, are provided. From the newly optimized energy levels, a revised value for the ionization energy is derived, 118,030(60) cm⁻¹, corresponding to 14.634(7) eV. This is 130 cm⁻¹ higher than the previously recommended value from Iglesias et al.

  18. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    Science.gov (United States)

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. The small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of a small sample size. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the results obtained, the functional statistics appear more robust than HOS parameters to the small-sample-size effect and more accurate in sEMG PDF shape screening applications.
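
The small-sample fragility of HOS parameters that motivates the functional approach is easy to demonstrate numerically. The sketch below is an illustration under assumed conditions, not the paper's CSM statistics: it compares the spread of the kurtosis estimator over repeated small and large Gaussian samples.

```python
import numpy as np

def kurtosis(x):
    """Pearson kurtosis (equals 3 for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s2 = ((x - m) ** 2).mean()
    return ((x - m) ** 4).mean() / s2 ** 2

rng = np.random.default_rng(1)
small = [kurtosis(rng.normal(size=50)) for _ in range(500)]    # small samples
large = [kurtosis(rng.normal(size=5000)) for _ in range(500)]  # large samples
print(np.std(small) > 5 * np.std(large))  # True: small samples are far noisier
```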

  19. Evidence that multiple genetic variants of MC4R play a functional role in the regulation of energy expenditure and appetite in Hispanic children1234

    Science.gov (United States)

    Cole, Shelley A; Voruganti, V Saroja; Cai, Guowen; Haack, Karin; Kent, Jack W; Blangero, John; Comuzzie, Anthony G; McPherson, John D; Gibbs, Richard A

    2010-01-01

    Background: Melanocortin-4-receptor (MC4R) haploinsufficiency is the most common form of monogenic obesity; however, the frequency of MC4R variants and their functional effects in general populations remain uncertain. Objective: The aim was to identify and characterize the effects of MC4R variants in Hispanic children. Design: MC4R was resequenced in 376 parents, and the identified single nucleotide polymorphisms (SNPs) were genotyped in 613 parents and 1016 children from the Viva la Familia cohort. Measured genotype analysis (MGA) tested associations between SNPs and phenotypes. Bayesian quantitative trait nucleotide (BQTN) analysis was used to infer the most likely functional polymorphisms influencing obesity-related traits. Results: Seven rare SNPs in coding and 18 SNPs in flanking regions of MC4R were identified. MGA showed suggestive associations between MC4R variants and body size, adiposity, glucose, insulin, leptin, ghrelin, energy expenditure, physical activity, and food intake. BQTN analysis identified SNP 1704 in a predicted micro-RNA target sequence in the downstream flanking region of MC4R as a strong, probable functional variant influencing total, sedentary, and moderate activities with posterior probabilities of 1.0. SNP 2132 was identified as a variant with a high probability (1.0) of exerting a functional effect on total energy expenditure and sleeping metabolic rate. SNP rs34114122 was selected as having likely functional effects on the appetite hormone ghrelin, with a posterior probability of 0.81. Conclusion: This comprehensive investigation provides strong evidence that MC4R genetic variants are likely to play a functional role in the regulation of weight, not only through energy intake but through energy expenditure. PMID:19889825

  20. Probability tables and gauss quadrature: application to neutron cross-sections in the unresolved energy range

    International Nuclear Information System (INIS)

    Ribon, P.; Maillard, J.M.

    1986-09-01

    The idea of describing neutron cross-section fluctuations by sets of discrete values, called "probability tables", was formulated some 15 years ago. We propose to define the probability tables from moments by equating the moments of the actual cross-section distribution in a given energy range to the moments of the table. This definition introduces Padé approximants, orthogonal polynomials and Gauss quadrature. This mathematical basis applies very well to the total cross-section. Some difficulties appear when partial cross-sections are taken into account, linked to the ambiguity of the definition of multivariate Padé approximants. Nevertheless we propose solutions and choices which appear to be satisfactory. Comparisons are made with other definitions of probability tables and an example of the calculation of a mixture of nuclei is given. 18 refs
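
The moment-matching construction can be illustrated with a standard Gauss quadrature rule: a k-point table reproduces the first 2k moments of the underlying distribution exactly. The sketch below uses the probabilists' Hermite rule and a standard normal as a stand-in distribution, not an actual cross-section distribution.

```python
import numpy as np

# A k-point "probability table" (x_i, p_i) built by Gauss quadrature
# reproduces the first 2k moments of the underlying distribution exactly.
k = 3
nodes, w = np.polynomial.hermite_e.hermegauss(k)  # probabilists' Hermite rule
probs = w / w.sum()                               # normalize to a probability table

# exact standard-normal moments of orders 0..5: 1, 0, 1, 0, 3, 0
exact = [1, 0, 1, 0, 3, 0]
table = [np.sum(probs * nodes ** m) for m in range(2 * k)]
print(np.allclose(table, exact))                  # True
```

The same principle, applied to cross-section moments instead of Gaussian moments, yields the discrete table values and weights used in unresolved-resonance calculations.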

  1. Probability tables and gauss quadrature: application to neutron cross-sections in the unresolved energy range

    International Nuclear Information System (INIS)

    Ribon, P.; Maillard, J.M.

    1986-01-01

    The idea of describing neutron cross-section fluctuations by sets of discrete values, called probability tables, was formulated some 15 years ago. The authors propose to define the probability tables from moments by equating the moments of the actual cross-section distribution in a given energy range to the moments of the table. This definition introduces Padé approximants, orthogonal polynomials and Gauss quadrature. This mathematical basis applies very well to the total cross-section. Some difficulties appear when partial cross-sections are taken into account, linked to the ambiguity of the definition of multivariate Padé approximants. Nevertheless the authors propose solutions and choices which appear to be satisfactory. Comparisons are made with other definitions of probability tables and an example of the calculation of a mixture of nuclei is given

  2. The probability representation as a new formulation of quantum mechanics

    International Nuclear Information System (INIS)

    Man'ko, Margarita A; Man'ko, Vladimir I

    2012-01-01

    We present a new formulation of conventional quantum mechanics, in which the notion of a quantum state is identified via a fair probability distribution of the position measured in a reference frame of the phase space with rotated axes. In this formulation, the quantum evolution equation as well as the equation for finding energy levels are expressed as linear equations for the probability distributions that determine the quantum states. We also give the integral transforms relating the probability distribution (called the tomographic-probability distribution or the state tomogram) to the density matrix and the Wigner function and discuss their connection with the Radon transform. Qudit states are considered and the invertible map of the state density operators onto the probability vectors is discussed. The tomographic entropies and entropic uncertainty relations are reviewed. We demonstrate the uncertainty relations for the position and momentum and the entropic uncertainty relations in the tomographic-probability representation, which is suitable for an experimental check of the uncertainty relations.

  3. Noise-level determination for discrete spectra with Gaussian or Lorentzian probability density functions

    International Nuclear Information System (INIS)

    Moriya, Netzer

    2010-01-01

    A method, based on binomial filtering, to estimate the noise level of an arbitrary, smoothed pure signal, contaminated with an additive, uncorrelated noise component is presented. If the noise characteristics of the experimental spectrum are known, as for instance the type of the corresponding probability density function (e.g., Gaussian), the noise properties can be extracted. In such cases, both the noise level, as may arbitrarily be defined, and a simulated white noise component can be generated, such that the simulated noise component is statistically indistinguishable from the true noise component present in the original signal. In this paper we present a detailed analysis of the noise level extraction when the additive noise is Gaussian or Lorentzian. We show that the statistical parameters in these cases (mainly the variance and the half width at half maximum, respectively) can directly be obtained from the experimental spectrum even when the pure signal is erratic. Further discussion is given for cases where the noise probability density function is initially unknown.
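
A minimal variant of the residual-based noise estimate can be sketched as follows, assuming Gaussian white noise and a single pass of the binomial kernel [1, 2, 1]/4. The paper's filtering is iterated; the sqrt(8/3) correction below is specific to this one-pass kernel.

```python
import numpy as np

def noise_std_estimate(y):
    """Estimate additive white-noise sigma by subtracting a binomial-filtered
    copy of the signal. For the kernel [1, 2, 1]/4 the residual of pure white
    noise has variance (3/8)*sigma^2, hence the sqrt(8/3) correction."""
    smooth = np.convolve(y, [0.25, 0.5, 0.25], mode="same")
    resid = (y - smooth)[1:-1]  # drop edge samples affected by zero padding
    return resid.std() * np.sqrt(8.0 / 3.0)

rng = np.random.default_rng(2)
x = np.linspace(0, 4 * np.pi, 4000)
pure = np.sin(x)                             # smooth "pure" signal
noisy = pure + rng.normal(0.0, 0.1, x.size)  # contaminated with sigma = 0.1
print(round(noise_std_estimate(noisy), 2))   # 0.1
```

Because the smooth signal contributes almost nothing to the residual, the estimate recovers the true noise level even though the pure signal is unknown.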

  4. Beta-decay rate and beta-delayed neutron emission probability of improved gross theory

    Science.gov (United States)

    Koura, Hiroyuki

    2014-09-01

    A theoretical study has been carried out on the beta-decay rate and the beta-delayed neutron emission probability. The gross theory of beta decay is based on the idea of a sum rule for the beta-decay strength function, and has succeeded in describing beta-decay half-lives of nuclei over the whole nuclear mass region. The gross theory includes not only the allowed transitions (Fermi and Gamow-Teller), but also the first-forbidden transitions. In this work, some improvements are introduced: the nuclear shell correction on nuclear level densities and the nuclear deformation for nuclear strength functions, effects which were not included in the original gross theory. The shell energy and the nuclear deformation for unmeasured nuclei are adopted from the KTUY nuclear mass formula, which is based on the spherical-basis method. Considering the properties of the integrated Fermi function, the energy region of excited states of a daughter nucleus can roughly be categorized into three regions: a highly excited energy region, which fully determines the delayed neutron probability; a middle energy region, which is estimated to contribute to the decay heat; and a region neighboring the ground state, which determines the beta-decay rate. Some results will be given in the presentation.

  5. Nonlocal kinetic energy functionals by functional integration

    Science.gov (United States)

    Mi, Wenhui; Genova, Alessandro; Pavanello, Michele

    2018-05-01

    Since the seminal studies of Thomas and Fermi, researchers in the Density-Functional Theory (DFT) community have been searching for accurate electron density functionals. Arguably, the toughest functional to approximate is the noninteracting kinetic energy, Ts[ρ], the subject of this work. The typical paradigm is to first approximate the energy functional and then take its functional derivative, δTs[ρ]/δρ(r), yielding a potential that can be used in orbital-free DFT or subsystem DFT simulations. Here, this paradigm is challenged by constructing the potential from the second functional derivative via functional integration. A new nonlocal functional for Ts[ρ] is prescribed [which we dub Mi-Genova-Pavanello (MGP)] having a density-independent kernel. MGP is constructed to satisfy three exact conditions: (1) a nonzero "kinetic electron" arising from a nonzero exchange hole; (2) the second functional derivative must reduce to the inverse Lindhard function in the limit of homogeneous densities; (3) the potential is derived from functional integration of the second functional derivative. Pilot calculations show that MGP is capable of reproducing accurate equilibrium volumes, bulk moduli, total energies, and electron densities for metallic (body-centered cubic, face-centered cubic) and semiconducting (crystal diamond) phases of silicon as well as of III-V semiconductors. The MGP functional is found to be numerically stable, typically reaching self-consistency within 12 iterations of a truncated Newton minimization algorithm. MGP's computational cost and memory requirements are low and comparable to the Wang-Teter nonlocal functional or any generalized gradient approximation functional.
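
The nonlocal MGP construction is beyond a short sketch, but the local Thomas-Fermi baseline that nonlocal functionals improve on is compact, and for the uniform electron gas it is exact, which the snippet below verifies numerically. The grid size and density value are arbitrary illustrative choices.

```python
import numpy as np

C_TF = (3.0 / 10.0) * (3.0 * np.pi ** 2) ** (2.0 / 3.0)  # Thomas-Fermi constant (a.u.)

def ts_thomas_fermi(rho, dv):
    """Local Thomas-Fermi kinetic energy: Ts = C_TF * integral of rho^(5/3) dV.
    (Nonlocal functionals such as MGP add a density-independent-kernel term.)"""
    return C_TF * np.sum(rho ** (5.0 / 3.0)) * dv

# Uniform electron gas: Thomas-Fermi is exact, Ts per electron = (3/5) * E_Fermi.
rho0, volume, n_grid = 0.02, 1000.0, 64 ** 3
rho = np.full(n_grid, rho0)
ts = ts_thomas_fermi(rho, volume / n_grid)
e_fermi = 0.5 * (3.0 * np.pi ** 2 * rho0) ** (2.0 / 3.0)
print(np.isclose(ts, 0.6 * e_fermi * rho0 * volume))  # True
```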

  6. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    Science.gov (United States)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, the electric potential and the radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
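
A hedged sketch of the setting: generate a synthetic filtered Poisson process with exponentially distributed amplitudes, then form the empirical characteristic function that the proposed parameter estimation method works with. All numerical parameters are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic filtered Poisson process: exponentially distributed pulse
# amplitudes, one-sided exponential pulse shape, Poisson arrival times.
T, dt, rate, tau = 2000.0, 0.1, 1.0, 1.0
t = np.arange(0.0, T, dt)
n_pulses = rng.poisson(rate * T)
arrivals = rng.uniform(0.0, T, n_pulses)
amps = rng.exponential(1.0, n_pulses)
signal = np.zeros_like(t)
for t_k, a_k in zip(arrivals, amps):
    tail = t >= t_k
    signal[tail] += a_k * np.exp(-(t[tail] - t_k) / tau)

# Empirical characteristic function, usable even when no closed-form PDF exists.
u = np.linspace(-2.0, 2.0, 9)
ecf = np.array([np.exp(1j * ui * signal).mean() for ui in u])
print(abs(ecf[4]))  # 1.0: any characteristic function equals 1 at u = 0
```

Model parameters can then be fitted by minimizing the distance between this empirical curve and the model's analytic characteristic function.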

  7. EDF: Computing electron number probability distribution functions in real space from molecular wave functions

    Science.gov (United States)

    Francisco, E.; Pendás, A. Martín; Blanco, M. A.

    2008-04-01

    Given an N-electron molecule and an exhaustive partition of the real space (R^3) into m arbitrary regions Ω_1, Ω_2, …, Ω_m (⋃_{i=1}^{m} Ω_i = R^3), the edf program computes all the probabilities P(n_1, n_2, …, n_m) of having exactly n_1 electrons in Ω_1, n_2 electrons in Ω_2, …, and n_m electrons (n_1 + n_2 + ⋯ + n_m = N) in Ω_m. Each Ω_i may correspond to a single basin (atomic domain) or several such basins (functional group). In the latter case, each atomic domain must belong to a single Ω_i. The program can manage both single- and multi-determinant wave functions which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinantal wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n_1, n_2, …, n_m) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n_1, n_2, …, n_m) probabilities into α and β spin components. Program summary: Program title: edf; Catalogue identifier: AEAJ_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 5387; No. of bytes in distributed program, including test data, etc.: 52 381; Distribution format: tar.gz; Programming language: Fortran 77
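
For the special case of independent electrons, the electron count distribution in a single region reduces to a Poisson-binomial law, which gives a feel for what edf computes. This is a simplification for illustration only: the real program handles correlated, multi-determinant wave functions via domain overlap integrals, and the occupation probabilities below are invented.

```python
import numpy as np

def electron_count_pdf(p_orbital):
    """Poisson-binomial P(n electrons in a region), assuming each of the N
    orbitals places its electron in the region independently with
    probability p_i. (edf itself treats correlated wave functions.)"""
    probs = np.array([1.0])
    for p in p_orbital:
        # convolve in one electron's two outcomes: outside (1-p) or inside (p)
        probs = np.convolve(probs, [1.0 - p, p])
    return probs  # index n -> P(n electrons in the region)

P = electron_count_pdf([0.9, 0.8, 0.3])
print(round(float(P.sum()), 10))          # 1.0: probabilities over n = 0..3
print(round(float(P @ np.arange(4)), 2))  # 2.0: mean count = sum of the p_i
```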

  8. Quadrupole collective dynamics from energy density functionals: Collective Hamiltonian and the interacting boson model

    International Nuclear Information System (INIS)

    Nomura, K.; Vretenar, D.; Niksic, T.; Otsuka, T.; Shimizu, N.

    2011-01-01

    Microscopic energy density functionals have become a standard tool for nuclear structure calculations, providing an accurate global description of nuclear ground states and collective excitations. For spectroscopic applications, this framework has to be extended to account for collective correlations related to restoration of symmetries broken by the static mean field, and for fluctuations of collective variables. In this paper, we compare two approaches to five-dimensional quadrupole dynamics: the collective Hamiltonian for quadrupole vibrations and rotations and the interacting boson model (IBM). The two models are compared in a study of the evolution of nonaxial shapes in Pt isotopes. Starting from the binding energy surfaces of 192,194,196 Pt, calculated with a microscopic energy density functional, we analyze the resulting low-energy collective spectra obtained from the collective Hamiltonian, and the corresponding IBM Hamiltonian. The calculated excitation spectra and transition probabilities for the ground-state bands and the γ-vibration bands are compared to the corresponding sequences of experimental states.

  9. CGC/saturation approach for soft interactions at high energy: survival probability of central exclusive production

    Energy Technology Data Exchange (ETDEWEB)

    Gotsman, E.; Maor, U. [Tel Aviv University, Department of Particle Physics, Raymond and Beverly Sackler Faculty of Exact Science, School of Physics and Astronomy, Tel Aviv (Israel); Levin, E. [Tel Aviv University, Department of Particle Physics, Raymond and Beverly Sackler Faculty of Exact Science, School of Physics and Astronomy, Tel Aviv (Israel); Universidad Tecnica Federico Santa Maria, Departemento de Fisica, Centro Cientifico-Tecnologico de Valparaiso, Valparaiso (Chile)

    2016-04-15

    We estimate the value of the survival probability for central exclusive production in a model which is based on the CGC/saturation approach. Hard and soft processes are described in the same framework. At LHC energies, we obtain a small value for the survival probability. The source of the small value is the impact parameter dependence of the hard amplitude. Our model has successfully described a large body of soft data: elastic, inelastic and diffractive cross sections, inclusive production and rapidity correlations, as well as the t-dependence of deep inelastic diffractive production of vector mesons. (orig.)

  10. SURFACE SYMMETRY ENERGY OF NUCLEAR ENERGY DENSITY FUNCTIONALS

    Energy Technology Data Exchange (ETDEWEB)

    Nikolov, N; Schunck, N; Nazarewicz, W; Bender, M; Pei, J

    2010-12-20

    We study the bulk deformation properties of the Skyrme nuclear energy density functionals. Following simple arguments based on the leptodermous expansion and liquid drop model, we apply the nuclear density functional theory to assess the role of the surface symmetry energy in nuclei. To this end, we validate the commonly used functional parametrizations against the data on excitation energies of superdeformed band-heads in Hg and Pb isotopes, and fission isomers in actinide nuclei. After subtracting shell effects, the results of our self-consistent calculations are consistent with macroscopic arguments and indicate that experimental data on strongly deformed configurations in neutron-rich nuclei are essential for optimizing future nuclear energy density functionals. The resulting survey provides a useful benchmark for further theoretical improvements. Unlike in nuclei close to the stability valley, whose macroscopic deformability hangs on the balance of surface and Coulomb terms, the deformability of neutron-rich nuclei strongly depends on the surface-symmetry energy; hence, its proper determination is crucial for the stability of deformed phases of the neutron-rich matter and description of fission rates for r-process nucleosynthesis.

  11. Continuation of probability density functions using a generalized Lyapunov approach

    Energy Technology Data Exchange (ETDEWEB)

    Baars, S., E-mail: s.baars@rug.nl [Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen (Netherlands); Viebahn, J.P., E-mail: viebahn@cwi.nl [Centrum Wiskunde & Informatica (CWI), P.O. Box 94079, 1090 GB, Amsterdam (Netherlands); Mulder, T.E., E-mail: t.e.mulder@uu.nl [Institute for Marine and Atmospheric research Utrecht, Department of Physics and Astronomy, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); Kuehn, C., E-mail: ckuehn@ma.tum.de [Technical University of Munich, Faculty of Mathematics, Boltzmannstr. 3, 85748 Garching bei München (Germany); Wubs, F.W., E-mail: f.w.wubs@rug.nl [Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen (Netherlands); Dijkstra, H.A., E-mail: h.a.dijkstra@uu.nl [Institute for Marine and Atmospheric research Utrecht, Department of Physics and Astronomy, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); School of Chemical and Biomolecular Engineering, Cornell University, Ithaca, NY (United States)

    2017-05-01

    Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. Key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.
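
Near a stable fixed point and under the small noise approximation, the stationary covariance of the linearized stochastic system solves a Lyapunov equation. A dense, small-scale stand-in for the paper's iterative low-rank solver can be sketched with SciPy; the matrices below are toy assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Near a stable fixed point, the stationary PDF of the linearized SDE
# dx = A x dt + B dW is Gaussian with covariance C solving A C + C A^T + B B^T = 0.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])   # stable Jacobian (toy example)
B = np.array([[0.5, 0.0],
              [0.0, 0.2]])    # noise matrix
C = solve_continuous_lyapunov(A, -B @ B.T)  # dense stand-in for the low-rank solver

residual = A @ C + C @ A.T + B @ B.T
print(np.allclose(residual, 0.0))  # True: C satisfies the Lyapunov equation
```

Continuation in a model parameter then amounts to tracking the fixed point and re-solving this equation along the branch, which is where the low-rank iterative solver pays off for discretized PDEs.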

  12. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    International Nuclear Information System (INIS)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P; Gorbatenko, B B

    2015-01-01

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of the numerical experiments. (laser applications and other topics in quantum electronics)
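
The dependence of the phase-difference statistics on the field correlation can be reproduced with a toy Monte Carlo model of two jointly circular-Gaussian speckle amplitudes: positive correlation biases the phase difference toward 0, while negative correlation (as in the alternating-sign regions of an oscillating correlation function) biases it toward π. This is an illustrative sketch, not the paper's diffraction simulation; the correlation value 0.7 is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def phase_difference_sample(mu, n=100_000):
    """Sample phase differences of two circular-Gaussian speckle
    amplitudes with real correlation coefficient mu."""
    z1 = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    w = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    z2 = mu * z1 + np.sqrt(1 - mu ** 2) * w  # correlated second field point
    return np.angle(z2 * np.conj(z1))

# Positive correlation: phase difference clusters near 0 (mean cosine > 0).
# Negative correlation: it clusters near pi (mean cosine < 0).
print(np.mean(np.cos(phase_difference_sample(0.7))) > 0.3)    # True
print(np.mean(np.cos(phase_difference_sample(-0.7))) < -0.3)  # True
```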

  13. Expected utility with lower probabilities

    DEFF Research Database (Denmark)

    Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

    1994-01-01

    An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...

  14. Cross-Sectional Relationships of Physical Activity and Sedentary Behavior With Cognitive Function in Older Adults With Probable Mild Cognitive Impairment.

    Science.gov (United States)

    Falck, Ryan S; Landry, Glenn J; Best, John R; Davis, Jennifer C; Chiu, Bryan K; Liu-Ambrose, Teresa

    2017-10-01

    Mild cognitive impairment (MCI) represents a transition between normal cognitive aging and dementia and may represent a critical time frame for promoting cognitive health through behavioral strategies. Current evidence suggests that physical activity (PA) and sedentary behavior are important for cognition. However, it is unclear whether there are differences in PA and sedentary behavior between people with probable MCI and people without MCI or whether the relationships of PA and sedentary behavior with cognitive function differ by MCI status. The aims of this study were to examine differences in PA and sedentary behavior between people with probable MCI and people without MCI and whether associations of PA and sedentary behavior with cognitive function differed by MCI status. This was a cross-sectional study. Physical activity and sedentary behavior in community-dwelling adults (N = 151; at least 55 years old) were measured using a wrist-worn actigraphy unit. The Montreal Cognitive Assessment was used to categorize participants with probable MCI. Cognitive function was indexed using the Alzheimer Disease Assessment Scale-Cognitive-Plus (ADAS-Cog Plus). Physical activity and sedentary behavior were compared based on probable MCI status, and relationships of ADAS-Cog Plus with PA and sedentary behavior were examined by probable MCI status. Participants with probable MCI (n = 82) had lower PA and higher sedentary behavior than participants without MCI (n = 69). Higher PA and lower sedentary behavior were associated with better ADAS-Cog Plus performance in participants without MCI (β = -.022 and β = .012, respectively) but not in participants with probable MCI. The diagnosis of MCI was not confirmed by a physician; therefore, this study could not conclude how many of the participants categorized as having probable MCI would actually have been diagnosed with MCI by a physician. Participants with probable MCI were less active…

  15. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle, and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Lévy processes, Gerber–Shiu functions and dependence.
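For the simplest setting the book covers, the classical compound Poisson model with exponential claims, the infinite-horizon ruin probability has a closed form, and Lundberg's inequality can be checked directly. A minimal sketch with illustrative parameter values (mean claim size and premium loading are assumptions, not taken from the book):

```python
import numpy as np

# Compound Poisson risk process, exponential claims of mean mu, premium
# loading theta. Exact ruin probability: psi(u) = exp(-R*u) / (1 + theta),
# with adjustment coefficient R = theta / ((1 + theta) * mu), so Lundberg's
# bound psi(u) <= exp(-R*u) holds with room to spare.
mu, theta = 1.0, 0.2
R = theta / ((1.0 + theta) * mu)

def ruin_probability(u):
    """Exact infinite-horizon ruin probability for exponential claims."""
    return np.exp(-R * u) / (1.0 + theta)

for u in (0.0, 5.0, 20.0):
    assert ruin_probability(u) <= np.exp(-R * u)   # Lundberg's inequality
    print(u, ruin_probability(u))
```

The closed form is special to exponential claims; for the heavy-tailed claim distributions the book emphasizes, no adjustment coefficient exists and asymptotics or Monte Carlo take over.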

  16. Multi-functional energy plantation; Multifunktionella bioenergiodlingar

    Energy Technology Data Exchange (ETDEWEB)

    Boerjesson, Paal [Lund Univ. (Sweden). Environmental and Energy Systems Studies; Berndes, Goeran; Fredriksson, Fredrik [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept. of Physical Resource Theory; Kaaberger, Tomas [Ecotraffic, Goeteborg (Sweden)

    2002-02-01

    future if this problem will be valued differently. The value of increased carbon accumulation in mineral soils and of reduced carbon dioxide emissions from organic soils is estimated to be equivalent to a few percent and to half of the production cost in conventional Salix plantations, respectively. These values may also change in the future if carbon sinks in agriculture are included as an approved mitigation option within the Kyoto agreement. Based on an analysis of possible combinations of environmental services achieved in specific plantations, it is estimated that biomass can be produced at a negative cost in around 100,000 hectares of multi-functional energy plantations, when the value of the environmental services is included. The production cost in another 250,000 hectares of plantations is estimated to be halved. This is equivalent to around 6 and 11 TWh biomass per year, respectively. Economic incentives also exist for municipal wastewater plants to utilise vegetation filters for wastewater and sewage sludge treatment. Cadmium removal and increased soil fertility will give a minor increase in the farmer's income. However, cadmium removal will result in increased costs later in the Salix fuel chain, due to increased costs of flue gas cleaning during combustion. Thus, to overcome this economic barrier, subsidies will probably be needed for heating plants utilising cadmium-contaminated biomass. The possibility of achieving an income from increased soil carbon accumulation will depend on whether this option becomes an approved mechanism. Today, the Swedish greenhouse gas mitigation policy does not include this option. Some of the potential multi-functional energy plantations (e.g. buffer strips for reducing nutrient leaching and vegetation filters for treatment of polluted drainage water) result in increased cultivation costs for the farmer, and thus higher economic barriers. Examples of measures to overcome such barriers are dedicated subsidies for multi-functional

  17. Effect of energy level sequences and neutron–proton interaction on α-particle preformation probability

    International Nuclear Information System (INIS)

    Ismail, M.; Adel, A.

    2013-01-01

    A realistic density-dependent nucleon–nucleon (NN) interaction with a finite-range exchange part, which reproduces the nuclear matter saturation curve and the energy dependence of the nucleon–nucleus optical model potential, is used to calculate the preformation probability, S_α, of α-decay from different isotones with neutron numbers N = 124, 126, 128, 130 and 132. We studied the variation of S_α with the proton number, Z, for each isotone and found the effect of the neutron and proton energy levels of the parent nuclei on the behavior of the α-particle preformation probability. We found that S_α increases regularly with the proton number when the proton pair in the α-particle is emitted from the same level and the neutron level sequence is not changed during the Z-variation. In this case the neutron–proton (n–p) interaction of the two levels contributing to the emission process is very weak. On the contrary, if the proton or neutron level sequence is changed during the emission process, S_α behaves irregularly; the irregularity increases if both proton and neutron levels are changed. This behavior is accompanied by a change or rapid increase in the strength of the n–p interaction.

  18. Probability density functions for CP-violating rephasing invariants

    Science.gov (United States)

    Fortin, Jean-François; Giasson, Nicolas; Marleau, Luc

    2018-05-01

    The implications of the anarchy principle on CP violation in the lepton sector are investigated. A systematic method is introduced to compute the probability density functions for the CP-violating rephasing invariants of the PMNS matrix from the Haar measure relevant to the anarchy principle. Contrary to the CKM matrix, which is hierarchical, it is shown that the Haar measure, and hence the anarchy principle, are very likely to lead to the observed PMNS matrix. Predictions on the CP-violating Dirac rephasing invariant |j_D| and Majorana rephasing invariant |j_1| are also obtained. They correspond to 〈|j_D|〉_Haar = π/105 ≈ 0.030 and 〈|j_1|〉_Haar = 1/(6π) ≈ 0.053 respectively, in agreement with the experimental hint from T2K of |j_D^exp| ≈ 0.032 ± 0.005 (or ≈ 0.033 ± 0.003) for the normal (or inverted) hierarchy.
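The Haar average quoted for the Dirac invariant can be checked by Monte Carlo: sample unitary matrices from the Haar measure via the standard QR-with-phase-fix recipe (Mezzadri's construction, not the paper's analytic method) and average the Jarlskog-type invariant:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """Haar-random n x n unitary: QR of a complex Gaussian matrix, with the
    diagonal phases of R divided out so the distribution is exactly Haar."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

us = np.stack([haar_unitary(3, rng) for _ in range(50_000)])
# Dirac rephasing (Jarlskog-type) invariant: j_D = Im(U00 U11 U01* U10*)
jd = np.abs(np.imag(us[:, 0, 0] * us[:, 1, 1]
                    * np.conj(us[:, 0, 1]) * np.conj(us[:, 1, 0])))
print(jd.mean())   # should approach pi/105 ~ 0.0299
```

With 50,000 samples the Monte Carlo mean lands within a few 10⁻⁴ of π/105, consistent with the quoted 〈|j_D|〉_Haar.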

  19. Structure of states and reduced probabilities of electromagnetic transitions in 169Yb

    International Nuclear Information System (INIS)

    Bonch-Osmolovskaya, N.A.; Morozov, V.A.; Khudajberdyev, Eh.N.

    1988-01-01

    The effect of accounting for the Pauli principle on the structure and energies of nonrotational states of the deformed nucleus 169 Yb, as well as on the reduced probabilities of E2-transitions B(E2), is studied within the framework of the quasiparticle-phonon model (QPM). The amplitudes of state mixing due to the Coriolis interaction and the reduced probabilities of gamma transitions within the framework of the nonadiabatic rotation model are also calculated. The results are compared with calculations made within the QPM with account of the Coriolis interaction but excluding the Pauli principle in the state wave functions. It is shown that to describe correctly both the level structure and the reduced probabilities B(E2) it is necessary to include all types of interaction: the quasiparticle interaction with phonons, with account of the Pauli principle in the state wave functions, and the Coriolis interaction. At present no unified theoretical approach exists.

  20. Universal Probability Distribution Function for Bursty Transport in Plasma Turbulence

    International Nuclear Information System (INIS)

    Sandberg, I.; Benkadda, S.; Garbet, X.; Ropokis, G.; Hizanidis, K.; Castillo-Negrete, D. del

    2009-01-01

    Bursty transport phenomena associated with convective motion present universal statistical characteristics across different physical systems. In this Letter, a stochastic univariate model and the associated probability distribution function for the description of bursty transport in plasma turbulence are presented. The proposed stochastic process recovers the universal distribution of density fluctuations observed at the plasma edge of several magnetic confinement devices and the remarkable scaling between their skewness S and kurtosis K. Similar statistical characteristics have also been observed in other physical systems characterized by convection, such as the x-ray fluctuations emitted by the Cygnus X-1 accretion disc plasma and sea surface temperature fluctuations.
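One concrete instance of an S–K scaling is the gamma family, which is often used to model bursty density signals (an assumption here, not the Letter's exact stochastic process): for gamma-distributed fluctuations the relation is exactly parabolic, with excess kurtosis K = (3/2)S².

```python
import numpy as np
from scipy import stats

# For a gamma distribution with shape k: skewness S = 2/sqrt(k) and excess
# kurtosis K = 6/k, hence K = 1.5 * S**2 for every member of the family.
for shape in (0.5, 2.0, 8.0):
    S, K = stats.gamma(shape).stats(moments='sk')
    print(float(shape), float(S), float(K), 1.5 * float(S) ** 2)
```

Burstier signals (smaller shape parameter) move up the same parabola, which is the kind of S–K locus the Letter's universal distribution reproduces.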

  1. Foundations of probability

    International Nuclear Information System (INIS)

    Fraassen, B.C. van

    1979-01-01

    The interpretation of probabilities in physical theories is considered, whether quantum or classical. The following points are discussed: 1) the functions P(μ, Q), in terms of which states and propositions can be represented, are classical (Kolmogorov) probabilities, formally speaking; 2) these probabilities are generally interpreted as themselves conditional, and the conditions are mutually incompatible where the observables are maximal; and 3) testing of the theory typically takes the form of confronting the expectation values of an observable Q calculated with probability measures P(μ, Q) for states μ; hence, of comparing the probabilities P(μ, Q)(E) with the frequencies of occurrence of the corresponding events. It seems that even the interpretation of quantum mechanics, in so far as it concerns what the theory says about the empirical (i.e. actual, observable) phenomena, deals with the confrontation of classical probability measures with observable frequencies. This confrontation is studied. (Auth./C.F.)

  2. Exact joint density-current probability function for the asymmetric exclusion process.

    Science.gov (United States)

    Depken, Martin; Stinchcombe, Robin

    2004-07-23

    We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society
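The exact joint distribution is derived algebraically in the paper; a direct stochastic simulation of the open-boundary totally asymmetric case is a useful cross-check. A minimal sketch with assumed rates (α = 0.3 < min(β, 1/2) puts the chain in the low-density phase, where the exact stationary current is α(1 − α) = 0.21):

```python
import numpy as np

rng = np.random.default_rng(5)

# Open-boundary TASEP, random-sequential update: pick a bond at random;
# inject on the left with prob. alpha, hop right in the bulk, exit on the
# right with prob. beta. Rates and size are illustrative assumptions.
n_sites, alpha, beta = 100, 0.3, 0.7
lattice = np.zeros(n_sites, dtype=int)
sweeps, hops = 10_000, 0

for sweep in range(sweeps):
    for _ in range(n_sites):
        i = int(rng.integers(-1, n_sites))
        if i == -1:                                    # injection bond
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif i == n_sites - 1:                         # extraction bond
            if lattice[-1] == 1 and rng.random() < beta:
                lattice[-1] = 0
        elif lattice[i] == 1 and lattice[i + 1] == 0:  # bulk hop to the right
            lattice[i], lattice[i + 1] = 0, 1
            if sweep >= sweeps // 2 and i == n_sites // 2:
                hops += 1                              # current through one bond

current = hops / (sweeps / 2)   # hops per sweep through the middle bond
print(current)
```

The measured current fluctuates around α(1 − α); sampling the occupation and current jointly over many runs would build up the kind of joint histogram whose exact form the paper derives.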

  3. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  4. Fine-structure energy levels, oscillator strengths and transition probabilities in Ni XVI

    International Nuclear Information System (INIS)

    Deb, N.C.; Msezane, A.Z.

    2001-01-01

    Fine-structure energy levels relative to the ground state, oscillator strengths and transition probabilities for transitions among the lowest 40 fine-structure levels belonging to the configurations 3s²3p, 3s3p², 3s²3d, 3p³ and 3s3p3d of Ni XVI are calculated using a large-scale CI calculation with Hibbert's program CIV3. Relativistic effects are included through the Breit-Pauli approximation via spin-orbit, spin-other-orbit, spin-spin, Darwin and mass-correction terms. The existing discrepancies between the calculated and measured values for many of the relative energy positions are resolved in the present calculation, which yields excellent agreement with measurement. Also, many of our oscillator strengths for allowed and intercombination transitions are in very good agreement with the data recommended by the National Institute of Standards and Technology (NIST). (orig.)

  5. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured by just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
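The baseline the paper starts from, the original spectral representation with independent random phases, can be sketched as follows; the one-sided power spectrum is an assumed example, and the paper's actual contribution (generating all the phases from just two elementary random variables through deterministic "random functions") is not reproduced here.

```python
import numpy as np
from scipy.integrate import trapezoid

rng = np.random.default_rng(7)

# OSR: a zero-mean stationary Gaussian process as a sum of cosines whose
# amplitudes follow the target power spectral density S(omega) and whose
# phases are iid uniform on [0, 2*pi).
omegas = np.linspace(0.01, 4.0, 400)
d_omega = omegas[1] - omegas[0]
psd = 1.0 / (1.0 + omegas**4)               # assumed one-sided PSD
amps = np.sqrt(2.0 * psd * d_omega)
t = np.linspace(0.0, 100.0, 2001)

def osr_sample():
    phases = rng.uniform(0.0, 2.0 * np.pi, omegas.size)
    return amps @ np.cos(np.outer(omegas, t) + phases[:, None])

# Sanity check: the process variance should match the integral of the PSD.
target_var = trapezoid(psd, omegas)
mean_var = np.mean([osr_sample().var() for _ in range(50)])
print(target_var, mean_var)
```

In the OSR every frequency carries its own random phase (400 random variables here); the paper's point is that a well-chosen random-function constraint collapses these to two elementary variables without losing the probability information needed by the PDEM.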

  6. High throughput nonparametric probability density estimation.

    Science.gov (United States)

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
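The core of the scoring idea can be sketched simply (this is the underlying order-statistics principle, not the paper's exact sample-size-invariant scoring function): if F is the true CDF, the sorted values F(x₍ᵢ₎) behave like uniform order statistics with expected positions i/(n+1), and atypical deviations flag a poor fit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = rng.normal(size=500)

def order_statistic_deviation(cdf_values):
    """Max deviation of sorted CDF values from the uniform order-statistic
    means i/(n+1): a Kolmogorov-style summary of fit quality."""
    u = np.sort(cdf_values)
    n = len(u)
    expected = np.arange(1, n + 1) / (n + 1)
    return np.max(np.abs(u - expected))

good = order_statistic_deviation(stats.norm.cdf(data))          # correct model
bad = order_statistic_deviation(stats.norm.cdf(data, loc=1.0))  # shifted model
print(good, bad)   # the correct model sits much closer to the uniform line
```

Iteratively adjusting a trial CDF to drive such deviations back into their typical range is, in spirit, the refinement loop the abstract describes.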

  7. Zero field reversal probability in thermally assisted magnetization reversal

    Science.gov (United States)

    Prasetya, E. B.; Utari; Purnama, B.

    2017-11-01

    This paper discusses the zero-field reversal probability in thermally assisted magnetization reversal (TAMR). The appearance of a reversal probability at zero field is investigated through micromagnetic simulation by solving the stochastic Landau-Lifshitz-Gilbert (LLG) equation. A perpendicular-anisotropy magnetic dot of 50×50×20 nm³ is considered as a single-cell storage element of magnetic random access memory (MRAM). Thermally assisted magnetization reversal was performed by cooling from near the Curie point to room temperature during the writing process, over 20 runs with different randomly magnetized initial states. The results show that the reversal probability under zero magnetic field decreases as the energy barrier increases. A zero-field switching probability of 55% was attained for an energy barrier of 60 k_B T, which corresponds to a switching field of 150 Oe, and the reversal probability became zero at an energy barrier of 2348 k_B T.
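The qualitative trend, reversal probability falling as the barrier grows, already follows from a two-state Néel–Arrhenius picture, which is a strong simplification of the stochastic LLG simulations actually used; the attempt frequency and waiting time below are assumptions for illustration.

```python
import numpy as np

# Néel-Arrhenius sketch: thermally activated reversal over a barrier E_b at
# rate f0 * exp(-E_b / k_B T), observed over a waiting time t_wait.
f0 = 1e9          # attempt frequency (Hz), assumed
t_wait = 1.0      # observation time (s), assumed

def switch_probability(barrier_over_kbt):
    """Probability of at least one thermally activated reversal in t_wait."""
    rate = f0 * np.exp(-barrier_over_kbt)
    return 1.0 - np.exp(-rate * t_wait)

for b in (20, 40, 60):
    print(b, switch_probability(b))
```

The monotone drop with barrier height mirrors the simulated behavior, though the quoted 55% at 60 k_BT depends on the cooling protocol, which this two-state sketch does not model.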

  8. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    Science.gov (United States)

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
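The two-step recipe described, coloring a white Gaussian sample in the Fourier domain and then mapping the Gaussian marginal onto the target amplitude distribution via the probability integral transform, can be sketched as follows; the power-law spectrum and exponential target marginal are assumed examples, and the rank-preserving marginal transform slightly distorts the spectrum, which is why the abstract calls the method an engineering approach.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 256

# Step 1: color white Gaussian noise with the square root of a target
# power spectrum (an isotropic power law here, assumed for illustration).
white = rng.standard_normal((n, n))
freqs = np.fft.fftfreq(n)
k = np.sqrt(freqs[:, None] ** 2 + freqs[None, :] ** 2)
k[0, 0] = k[0, 1]                        # avoid division by zero at DC
amplitude = k ** -1.5                    # sqrt of an assumed k^-3 spectrum
field = np.real(np.fft.ifft2(np.fft.fft2(white) * amplitude))

# Step 2: map the colored Gaussian marginal onto the desired non-Gaussian
# marginal via the probability integral transform (rank order preserved).
u = stats.norm.cdf((field - field.mean()) / field.std())
target = stats.expon.ppf(u)              # e.g. an exponential amplitude pdf

print(target.shape, target.min())
```

The resulting field keeps the spatial correlation structure imposed in step 1 while its one-point statistics follow the exponential marginal imposed in step 2.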

  9. Wind energy potential in Bulgaria

    International Nuclear Information System (INIS)

    Shtrakov, Stanko Vl.

    2009-01-01

    In this study, wind characteristics and the wind energy potential in Bulgaria were analyzed using wind speed data. The wind energy potential at different sites in Bulgaria has been investigated by compiling data from different sources and analyzing them using a software tool. The wind speed distribution curves were obtained by using the Weibull and Rayleigh probability density functions. The results relating to wind energy potential are given in terms of the monthly average wind speed, the wind speed probability density function (PDF), the wind speed cumulative density function (CDF), and the wind speed duration curve. A technical and economic assessment has been made of electricity generation from three wind turbines with capacities of 60, 200, and 500 kW. The yearly energy output, the capacity factor, and the cost of electrical energy per kWh produced by the three different turbines were calculated.
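The Weibull-fitting step can be sketched with synthetic data standing in for a measured record; the site parameters below are assumptions, and the Rayleigh PDF is simply the shape = 2 special case of the Weibull.

```python
import numpy as np
from math import gamma
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic hourly wind-speed record (m/s); true parameters assumed.
speeds = stats.weibull_min.rvs(2.0, scale=6.0, size=8760, random_state=rng)

# Two-parameter Weibull fit with the location pinned at zero, as is usual
# for wind speeds; k is the shape and c the scale parameter.
k, _, c = stats.weibull_min.fit(speeds, floc=0)

# Mean wind speed from the fitted parameters: c * Gamma(1 + 1/k).
mean_speed = c * gamma(1.0 + 1.0 / k)
print(k, c, mean_speed)
```

From the fitted PDF and CDF, quantities like the duration curve and the expected turbine output (by integrating the PDF against a power curve) follow directly.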

  10. Probability of satellite collision

    Science.gov (United States)

    Mccarter, J. W.

    1972-01-01

    A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.

  11. Real analysis and probability

    CERN Document Server

    Ash, Robert B; Lukacs, E

    1972-01-01

    Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var

  12. Electron-trapping probability in natural dosemeters as a function of irradiation temperature

    DEFF Research Database (Denmark)

    Wallinga, J.; Murray, A.S.; Wintle, A.G.

    2002-01-01

    The electron-trapping probability in OSL traps as a function of irradiation temperature is investigated for sedimentary quartz and feldspar. A dependency was found for both minerals; this phenomenon could give rise to errors in dose estimation when the irradiation temperature used in laboratory procedures is different from that in the natural environment. No evidence was found for the existence of shallow-trap saturation effects that could give rise to a dose-rate dependency of electron trapping.

  13. Dictionary-Based Stochastic Expectation–Maximization for SAR Amplitude Probability Density Function Estimation

    OpenAIRE

    Moser , Gabriele; Zerubia , Josiane; Serpico , Sebastiano B.

    2006-01-01

    In remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of the pixel intensities. This paper deals with the problem of probability density function (pdf) estimation in the context of synthetic aperture radar (SAR) amplitude data analysis. Several theoretical and heuristic models for the pdfs of SAR data have been proposed in the literature, which have been proved to be effective for different land-cov...

  14. Green's function method with energy-independent vertex functions

    International Nuclear Information System (INIS)

    Tsay Tzeng, S.Y.; Kuo, T.T.; Tzeng, Y.; Geyer, H.B.; Navratil, P.

    1996-01-01

    In conventional Green's function methods the vertex function Γ is generally energy dependent. However, a model-space Green's function method where the vertex function is manifestly energy independent can be formulated using energy-independent effective interaction theories based on folded diagrams and/or similarity transformations. This is discussed in general and then illustrated for a 1p1h model-space Green's function applied to a solvable Lipkin many-fermion model. The poles of the conventional Green's function are obtained by solving a self-consistent Dyson equation, and model-space calculations may lead to unphysical poles. For the energy-independent model-space Green's function only the physical poles of the model problem are reproduced and are in satisfactory agreement with the exact excitation energies. copyright 1996 The American Physical Society

  15. Audio Query by Example Using Similarity Measures between Probability Density Functions of Features

    Directory of Open Access Journals (Sweden)

    Marko Helén

    2010-01-01

    This paper proposes a query-by-example system for generic audio. We estimate the similarity of the example signal and the samples in the queried database by calculating the distance between the probability density functions (pdfs) of their frame-wise acoustic features. Since the features are continuous valued, we propose to model them using Gaussian mixture models (GMMs) or hidden Markov models (HMMs). The models parametrize each sample efficiently and retain sufficient information for similarity measurement. To measure the distance between the models, we apply a novel Euclidean distance, approximations of the Kullback-Leibler divergence, and a cross-likelihood ratio test. The performance of the measures was tested in simulations where audio samples are automatically retrieved from a general audio database, based on the estimated similarity to a user-provided example. The simulations show that the distance between probability density functions is an accurate measure of similarity. Measures based on GMMs or HMMs are shown to produce better results than the existing methods based on simpler statistics or histograms of the features. A good performance with low computational cost is obtained with the proposed Euclidean distance.
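Of the distances mentioned, the Kullback-Leibler divergence between two GMMs has no closed form and is typically approximated by Monte Carlo sampling; a sketch for one-dimensional mixtures with toy parameters (an illustration of the approximation, not the paper's specific system):

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

def gmm_logpdf(x, weights, means, stds):
    """Log-density of a one-dimensional Gaussian mixture at the points x."""
    comp = norm.logpdf(x[:, None], loc=means, scale=stds) + np.log(weights)
    return logsumexp(comp, axis=1)

def mc_kl(p, q, n=50_000, seed=0):
    """Monte Carlo estimate of KL(p || q): sample from p, average the
    log-density ratio. Exact KL between GMMs has no closed form."""
    rng = np.random.default_rng(seed)
    w, m, s = p
    idx = rng.choice(len(w), size=n, p=w)
    x = rng.normal(m[idx], s[idx])
    return float(np.mean(gmm_logpdf(x, *p) - gmm_logpdf(x, *q)))

p = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5]))
q = (np.array([0.5, 0.5]), np.array([0.0, 2.0]), np.array([0.5, 0.5]))
print(mc_kl(p, p), mc_kl(p, q))   # ~0 for identical mixtures, > 0 otherwise
```

Symmetrizing (KL(p‖q) + KL(q‖p)) gives a usable dissimilarity for retrieval ranking, which is one of the roles the paper's approximations play.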

  16. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    Science.gov (United States)

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal-mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: an exponential decline phase, a linear decline phase, an equilibrium phase, and a postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
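The equal-mixing assumption invoked here is the same one underlying the classical Lincoln–Petersen mark-recapture estimator, to which the equilibrium-phase estimate is conceptually related; a minimal sketch with made-up counts (not the paper's data):

```python
# Lincoln-Petersen sketch: once marked individuals are fully mixed into the
# population, the marked fraction of a capture sample estimates m/N, so
# N ~ (marked released) * (captured) / (marked recaptured).
def lincoln_petersen(marked_released, captured, recaptured_marked):
    """Population estimate under the equal-mixing assumption."""
    return marked_released * captured / recaptured_marked

# e.g. 500 marked workers released, 400 captured later, 40 of them marked:
print(lincoln_petersen(500, 400, 40))  # -> 5000.0
```

The paper's contribution is essentially a test of when this equal-mixing premise becomes valid over distance: only once P(x) has flattened to its equilibrium value P(e) does such an estimate project the true N.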

  17. Orthogonal Algorithm of Logic Probability and Syndrome-Testable Analysis

    Institute of Scientific and Technical Information of China (English)

    1990-01-01

    A new method, the orthogonal algorithm, is presented to compute logic probabilities (i.e., signal probabilities) accurately. The transfer properties of logic probabilities are studied first, which are useful for the calculation of the logic probability of a circuit with random independent inputs. Then the orthogonal algorithm is described for computing the logic probability of a Boolean function realized by a combinational circuit. This algorithm makes the Boolean function "orthogonal", so that the logic probability can be easily calculated by summing up the logic probabilities of all orthogonal terms of the Boolean function.
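The orthogonality idea can be illustrated with a brute-force sketch: the minterms of a Boolean function are mutually disjoint (orthogonal) events, so the signal probability is simply the sum of their probabilities. The paper's algorithm orthogonalizes far more economically than full enumeration; this sketch conveys only the principle.

```python
from itertools import product

def signal_probability(f, probs):
    """Exact signal probability of Boolean function f with independent inputs,
    where probs[i] = P(input i is 1). Each satisfying input assignment is an
    orthogonal term, so the term probabilities simply add."""
    total = 0.0
    for bits in product((0, 1), repeat=len(probs)):
        if f(*bits):
            term = 1.0
            for b, p in zip(bits, probs):
                term *= p if b else (1.0 - p)
            total += term
    return total

print(signal_probability(lambda a, b: a ^ b, [0.5, 0.5]))    # -> 0.5
print(signal_probability(lambda a, b: a and b, [0.8, 0.5]))  # -> 0.4
```

Note that without orthogonality, probabilities of overlapping product terms cannot be summed directly; that is exactly the difficulty the orthogonal expansion removes.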

  18. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions: Sample Space and Events; Probabilities; Counting Techniques. Independence and Conditional Probability: Independence; Conditioning; The Borel-Cantelli Theorem. Discrete Random Variables: Random Variables and Vectors; Expected Value; Variance and Other Moments; Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables; The Law of Large Numbers; Conditional Expectation. Generating Functions. Branching Processes. Random Walk Revisited: Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk. Markov Chains: Definitions and Examples; Probability Distributions of Markov Chains; The First Step Analysis; Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity. Continuous Random Variables: Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case; Simulation; Distribution F...

  19. I. Fission Probabilities, Fission Barriers, and Shell Effects. II. Particle Structure Functions

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Kexing [Univ. of California, Berkeley, CA (United States)

    1999-05-01

    In Part I, fission excitation functions of the osmium isotopes 185,186,187,189Os produced in 3He + 182,183,184,186W reactions, and of the polonium isotopes 209,210,211,212Po produced in 3He/4He + 206,207,208Pb reactions, were measured with high precision. These excitation functions have been analyzed in detail based upon the transition-state formalism. The fission barriers and shell effects for the corresponding nuclei are extracted from the detailed analyses. A novel approach has been developed to determine upper limits on the transient time of the fission process. The upper limits are constrained by the fission probabilities of neighboring isotopes, and are 15 × 10^-21 sec and 25 × 10^-21 sec for the Os and Po compound nuclei, respectively. In Part II, we report on a search for evidence of optical modulations in the energy spectra of alpha particles emitted from hot compound nuclei. The optical modulations are expected to arise from the alpha-particle interaction with the rest of the nucleus as the particle prepares to exit. Some evidence for the modulations has been observed in the alpha spectra measured in the 3He-induced reactions, 3He + natAg in particular. The identification of the modulations involves a technique that subtracts the bulk statistical background from the measured alpha spectra, in order for the modulations to become visible in the residuals. Due to insufficient knowledge of the background spectra, however, the presented evidence should only be regarded as preliminary and tentative.

  20. Calculation of probability density functions for temperature and precipitation change under global warming

    International Nuclear Information System (INIS)

    Watterson, Ian G.

    2007-01-01

    The IPCC Fourth Assessment Report (Meehl et al. 2007) presents multi-model means of the CMIP3 simulations as projections of the global climate change over the 21st century under several SRES emission scenarios. To assess the possible range of change for Australia based on the CMIP3 ensemble, we can follow Whetton et al. (2005) and use the 'pattern scaling' approach, which separates the uncertainty in the global mean warming from that in the local change per degree of warming. This study presents several ways of representing these two factors as probability density functions (PDFs). The beta distribution, a smooth, bounded function allowing skewness, is found to provide a useful representation of the range of CMIP3 results. A weighting of models based on their skill in simulating seasonal means in the present climate over Australia is included. Dessai et al. (2005) and others have used Monte Carlo sampling to recombine such global warming and scaled change factors into values of net change. Here, we use a direct integration of the product across the joint probability space defined by the two PDFs. The result is a cumulative distribution function (CDF) for change, for each variable, location, and season. The median of this distribution provides a best estimate of change, while the 10th and 90th percentiles represent a likely range. The probability of exceeding a specified threshold can also be extracted from the CDF. The presentation focuses on changes in Australian temperature and precipitation at 2070 under the A1B scenario. However, the assumption of linearity behind pattern scaling allows results for different scenarios and times to be obtained simply. In the case of precipitation, which must remain non-negative, a simple modification of the calculations (based on decreases being exponential with warming) is used to avoid unrealistic results. These approaches are currently being used for the new CSIRO/Bureau of Meteorology climate projections.
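The direct-integration step, obtaining the CDF of net change by integrating the product across the joint space of the two PDFs rather than by Monte Carlo recombination, can be sketched as follows; the beta-distribution parameters and ranges here are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Assumed illustrative PDFs: global mean warming on [1, 4] degC and local
# change per degree of warming on [0.6, 1.4], both beta-distributed.
g = stats.beta(2, 3, loc=1.0, scale=3.0)   # global warming (degC)
s = stats.beta(4, 4, loc=0.6, scale=0.8)   # local scaling (degC per degC)

def cdf_net_change(x, n=2000):
    """P(g * s <= x): direct integration over the joint probability space,
    conditioning on the global warming value t."""
    t = np.linspace(1.0, 4.0, n)            # support of the warming PDF
    return trapezoid(g.pdf(t) * s.cdf(x / t), t)

print(cdf_net_change(2.2))   # roughly the median net change for these PDFs
```

The median, the 10th/90th percentiles, and threshold-exceedance probabilities then come from inverting or reading off this CDF.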

  1. Risk Probabilities

    DEFF Research Database (Denmark)

    Rojas-Nandayapa, Leonardo

    Tail probabilities of sums of heavy-tailed random variables are of major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory and Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think...... analytic expression for the distribution function of a sum of random variables. The presence of heavy-tailed random variables complicates the problem even more. The objective of this dissertation is to provide better approximations by means of sharp asymptotic expressions and Monte Carlo estimators...

  2. A new formulation of the probability density function in random walk models for atmospheric dispersion

    DEFF Research Database (Denmark)

    Falk, Anne Katrine Vinther; Gryning, Sven-Erik

    1997-01-01

    In this model for atmospheric dispersion particles are simulated by the Langevin Equation, which is a stochastic differential equation. It uses the probability density function (PDF) of the vertical velocity fluctuations as input. The PDF is constructed as an expansion after Hermite polynomials...

  3. Nonlocal kinetic-energy-density functionals

    International Nuclear Information System (INIS)

    Garcia-Gonzalez, P.; Alvarellos, J.E.; Chacon, E.

    1996-01-01

    In this paper we present nonlocal kinetic-energy functionals T[n] within the average density approximation (ADA) framework, which do not require any extra input when applied to any electron system and recover the exact kinetic energy and the linear response function of a homogeneous system. In contrast with previous ADA functionals, these present good behavior of the long-range tail of the exact weight function. The averaging procedure for the kinetic functional (averaging the Fermi momentum of the electron gas, instead of averaging the electron density) leads to a functional without numerical difficulties in the calculation of extended systems, and it gives excellent results when applied to atoms and jellium surfaces. copyright 1996 The American Physical Society

  4. An investigation of student understanding of classical ideas related to quantum mechanics: Potential energy diagrams and spatial probability density

    Science.gov (United States)

    Stephanik, Brian Michael

    This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.

  5. Probability calculus of fractional order and fractional Taylor's series application to Fokker-Planck equation and information of non-random functions

    International Nuclear Information System (INIS)

    Jumarie, Guy

    2009-01-01

    A probability distribution of fractional (or fractal) order is defined by the measure μ(dx) = p(x)(dx)^α, 0 < α < 1. Combined with the fractional Taylor series f(x+h) = E_α(h^α D_x^α) f(x) provided by the modified Riemann-Liouville definition, one can expand a probability calculus parallel to the standard one. A Fourier transform of fractional order using the Mittag-Leffler function is introduced, together with its inversion formula, and it provides a suitable generalization of the characteristic function of fractal random variables. It appears that the state moments of fractional order are especially relevant. The main properties of this fractional probability calculus are outlined; it is shown that it provides a sound approach to Fokker-Planck equations which are fractional in both space and time, and it provides new results in the information theory of non-random functions.

  6. Surprisal analysis and probability matrices for rotational energy transfer

    International Nuclear Information System (INIS)

    Levine, R.D.; Bernstein, R.B.; Kahana, P.; Procaccia, I.; Upchurch, E.T.

    1976-01-01

    The information-theoretic approach is applied to the analysis of state-to-state rotational energy transfer cross sections. The rotational surprisal is evaluated in the usual way, in terms of the deviance of the cross sections from their reference (''prior'') values. The surprisal is found to be an essentially linear function of the energy transferred. This behavior accounts for the experimentally observed exponential gap law for the hydrogen halide systems. The data base here analyzed (taken from the literature) is largely computational in origin: quantal calculations for the hydrogenic systems H2+H, He, Li+; HD+He; D2+H and for the N2+Ar system; and classical trajectory results for H2+Li+; D2+Li+ and N2+Ar. The surprisal analysis not only serves to compact a large body of data but also aids in the interpretation of the results. A single surprisal parameter θR suffices to account for the (relative) magnitude of all state-to-state inelastic cross sections at a given energy

  7. A brief introduction to probability.

    Science.gov (United States)

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", that is, a function providing the probabilities of occurrence of different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied to statistical analysis.
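
    As a minimal sketch of the "probability distribution as a function" idea discussed in this record, the normal CDF returns the probability of an outcome falling below a given value (using SciPy; the 1.96 cutoff is the standard two-sided 95% interval):

    ```python
    from scipy import stats

    # For a standard normal variable X, the CDF gives P(X <= x).
    # Probability that X lies in the central interval [-1.96, 1.96]:
    p = stats.norm.cdf(1.96) - stats.norm.cdf(-1.96)
    print(round(p, 3))  # approximately 0.95
    ```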

  8. Microscopically Based Nuclear Energy Functionals

    International Nuclear Information System (INIS)

    Bogner, S. K.

    2009-01-01

    A major goal of the SciDAC project 'Building a Universal Nuclear Energy Density Functional' is to develop next-generation nuclear energy density functionals that give controlled extrapolations away from stability with improved performance across the mass table. One strategy is to identify missing physics in phenomenological Skyrme functionals based on our understanding of the underlying internucleon interactions and microscopic many-body theory. In this contribution, I describe ongoing efforts to use the density matrix expansion of Negele and Vautherin to incorporate missing finite-range effects from the underlying two- and three-nucleon interactions into phenomenological Skyrme functionals.

  9. Cytoarchitecture, probability maps and functions of the human frontal pole.

    Science.gov (United States)

    Bludau, S; Eickhoff, S B; Mohlberg, H; Caspers, S; Laird, A R; Fox, P T; Schleicher, A; Zilles, K; Amunts, K

    2014-06-01

    The frontal pole has expanded more than any other part of the human brain as compared to our ancestors. It plays an important role in specifically human behavior and cognitive abilities, e.g. action selection (Kovach et al., 2012). Evidence of divergent functions of its medial and lateral parts has been provided, both in the healthy brain and in psychiatric disorders. The anatomical correlates of such functional segregation, however, are still unknown due to a lack of stereotaxic, microstructural maps obtained in a representative sample of brains. Here we show that the human frontopolar cortex consists of two cytoarchitectonically and functionally distinct areas: lateral frontopolar area 1 (Fp1) and medial frontopolar area 2 (Fp2). Based on observer-independent mapping in serial, cell-body stained sections of 10 brains, three-dimensional, probabilistic maps of areas Fp1 and Fp2 were created. They show, for each position of the reference space, the probability with which each area was found in a particular voxel. Applying these maps as seed regions for a meta-analysis revealed that Fp1 and Fp2 differentially contribute to functional networks: Fp1 was involved in cognition, working memory and perception, whereas Fp2 was part of brain networks underlying affective processing and social cognition. The present study thus disclosed cortical correlates of a functional segregation of the human frontopolar cortex. The probabilistic maps provide a sound anatomical basis for interpreting neuroimaging data in the living human brain, and open new perspectives for analyzing structure-function relationships in the prefrontal cortex. The new data will also serve as a starting point for further comparative studies between human and non-human primate brains. This allows finding similarities and differences in the organizational principles of the frontal lobe during evolution as a neurobiological basis for our behavior and cognitive abilities. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Wigner distribution function for an oscillator

    International Nuclear Information System (INIS)

    Davies, R.W.; Davies, K.T.R.

    1975-01-01

    We present two new derivations of the Wigner distribution function for a simple harmonic oscillator Hamiltonian. Both methods are facilitated using a formula which expresses the Wigner function as a simple trace. The first method of derivation utilizes a modification of a theorem due to Messiah. An alternative procedure makes use of the coherent state representation of an oscillator. The Wigner distribution function gives a semiclassical joint probability for finding the system with given coordinates and momenta, and the joint probability is factorable for the special case of an oscillator. An important application of this result occurs in the theory of nuclear fission for calculating the probability distributions for the masses, kinetic energies, and vibrational energies of the fission fragments at infinite separation. (U.S.)

  11. Know the risk, take the win: how executive functions and probability processing influence advantageous decision making under risk conditions.

    Science.gov (United States)

    Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete

    2014-01-01

    Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.

  12. Probability of detection as a function of multiple influencing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Pavlovic, Mato

    2014-10-15

    Non-destructive testing is subject to measurement uncertainties. In safety critical applications the reliability assessment of its capability to detect flaws is therefore necessary. In most applications, the flaw size is the single most important parameter that influences the probability of detection (POD) of the flaw. That is why the POD is typically calculated and expressed as a function of the flaw size. The capability of the inspection system to detect flaws is established by comparing the size of reliably detected flaw with the size of the flaw that is critical for the structural integrity. Applications where several factors have an important influence on the POD are investigated in this dissertation. To devise a reliable estimation of the NDT system capability it is necessary to express the POD as a function of all these factors. A multi-parameter POD model is developed. It enables POD to be calculated and expressed as a function of several influencing parameters. The model was tested on the data from the ultrasonic inspection of copper and cast iron components with artificial flaws. Also, a technique to spatially present POD data called the volume POD is developed. The fusion of the POD data coming from multiple inspections of the same component with different sensors is performed to reach the overall POD of the inspection system.

  13. Probability of detection as a function of multiple influencing parameters

    International Nuclear Information System (INIS)

    Pavlovic, Mato

    2014-01-01

    Non-destructive testing is subject to measurement uncertainties. In safety critical applications the reliability assessment of its capability to detect flaws is therefore necessary. In most applications, the flaw size is the single most important parameter that influences the probability of detection (POD) of the flaw. That is why the POD is typically calculated and expressed as a function of the flaw size. The capability of the inspection system to detect flaws is established by comparing the size of reliably detected flaw with the size of the flaw that is critical for the structural integrity. Applications where several factors have an important influence on the POD are investigated in this dissertation. To devise a reliable estimation of the NDT system capability it is necessary to express the POD as a function of all these factors. A multi-parameter POD model is developed. It enables POD to be calculated and expressed as a function of several influencing parameters. The model was tested on the data from the ultrasonic inspection of copper and cast iron components with artificial flaws. Also, a technique to spatially present POD data called the volume POD is developed. The fusion of the POD data coming from multiple inspections of the same component with different sensors is performed to reach the overall POD of the inspection system.
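
    A multi-parameter POD model of the kind described in this dissertation is often built on a logistic link, with flaw size entering on a log scale alongside the other influencing parameters. The sketch below is a hypothetical two-parameter example; the functional form and coefficients are illustrative, not those of the cited work.

    ```python
    import numpy as np

    def pod(size_mm, depth_mm, b0=-6.0, b_size=4.0, b_depth=1.5):
        """Hypothetical two-parameter POD surface: logistic in log flaw size
        with a second influencing parameter (coefficients are illustrative)."""
        z = b0 + b_size * np.log(size_mm) + b_depth * depth_mm
        return 1.0 / (1.0 + np.exp(-z))

    # POD increases with flaw size at a fixed value of the second parameter:
    print(pod(0.5, 1.0), pod(2.0, 1.0), pod(8.0, 1.0))
    ```

    In practice the coefficients would be estimated from hit/miss or signal-response inspection data, and the reliably detected flaw size read off the fitted surface (e.g., the a90/95 value).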

  14. Probabilities and energies to obtain the counting efficiency of electron-capture nuclides, KLMN model

    International Nuclear Information System (INIS)

    Casas Galiano, G.; Grau Malonda, A.

    1994-01-01

    An intelligent computer program has been developed to obtain the mathematical formulae to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of the M-electron-capture in the counting efficiency when the atomic number of the nuclide is high

  15. Probabilities and energies to obtain the counting efficiency of electron-capture nuclides. KLMN model

    International Nuclear Information System (INIS)

    Galiano, G.; Grau, A.

    1994-01-01

    An intelligent computer program has been developed to obtain the mathematical formulae to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of the M-electron capture in the counting efficiency when the atomic number of the nuclide is high. (Author)

  16. Wave functions and two-electron probability distributions of the Hooke's-law atom and helium

    International Nuclear Information System (INIS)

    O'Neill, Darragh P.; Gill, Peter M. W.

    2003-01-01

    The Hooke's-law atom (hookium) provides an exactly soluble model for a two-electron atom in which the nuclear-electron Coulombic attraction has been replaced by a harmonic one. Starting from the known exact position-space wave function for the ground state of hookium, we present the momentum-space wave function. We also look at the intracules, two-electron probability distributions, for hookium in position, momentum, and phase space. These are compared with the Hartree-Fock results and the Coulomb holes (the difference between the exact and Hartree-Fock intracules) in position, momentum, and phase space are examined. We then compare these results with analogous results for the ground state of helium using a simple, explicitly correlated wave function

  17. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    Science.gov (United States)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based

  18. Economic choices reveal probability distortion in macaque monkeys.

    Science.gov (United States)

    Stauffer, William R; Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-02-18

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. Copyright © 2015 Stauffer et al.
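
    The inverted-S weighting functions fitted in this study are commonly parameterized with Prelec's (1998) one-parameter form w(p) = exp(-(-ln p)^α), which for 0 < α < 1 overweights low probabilities and underweights high ones, with a fixed point at 1/e. The sketch below uses an illustrative α, not a value estimated from the monkey data.

    ```python
    import numpy as np

    def prelec_w(p, alpha=0.65):
        """Prelec (1998) weighting function w(p) = exp(-(-ln p)^alpha).
        For 0 < alpha < 1 it is inverted-S shaped, crossing w(p) = p at 1/e."""
        p = np.asarray(p, dtype=float)
        return np.exp(-(-np.log(p)) ** alpha)

    for p in (0.01, 0.10, 0.50, 0.90, 0.99):
        # Low probabilities are overweighted (w(p) > p); high ones underweighted.
        print(p, round(float(prelec_w(p)), 3))
    ```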

  19. Probability elements of the mathematical theory

    CERN Document Server

    Heathcote, C R

    2000-01-01

    Designed for students studying mathematical statistics and probability after completing a course in calculus and real variables, this text deals with basic notions of probability spaces, random variables, distribution functions and generating functions, as well as joint distributions and the convergence properties of sequences of random variables. Includes worked examples and over 250 exercises with solutions.

  20. Wind power statistics and an evaluation of wind energy density

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, M.; Parsa, S.; Majidi, M. [Materials and Energy Research Centre, Tehran (Iran, Islamic Republic of)

    1995-11-01

    In this paper the statistical data of fifty days' wind speed measurements at the MERC solar site are used to find out the wind energy density and other wind characteristics with the help of the Weibull probability distribution function. It is emphasized that the Weibull and Rayleigh probability functions are useful tools for wind energy density estimation but are not quite appropriate for properly fitting the actual wind data of low mean speed, short-time records. One has to use either the actual wind data (histogram) or look for a better fit by other models of the probability function. (Author)
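
    For a Weibull-distributed wind speed with shape k and scale c, the mean power density follows from the third moment of the distribution, E = ½ρc³Γ(1 + 3/k). This is the standard textbook result; the parameter values below are illustrative, not the fifty-day MERC fit (which, as the record cautions, a Weibull may describe poorly).

    ```python
    from scipy.special import gamma

    def wind_power_density(k, c, rho=1.225):
        """Mean wind power density (W/m^2) for a Weibull speed distribution
        with shape k and scale c (m/s); rho is air density in kg/m^3:
        E = 0.5 * rho * c**3 * Gamma(1 + 3/k)."""
        return 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)

    # Illustrative parameters: k = 2 (the Rayleigh special case), c = 6 m/s.
    print(round(wind_power_density(k=2.0, c=6.0), 1))
    ```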

  1. Skin damage probabilities using fixation materials in high-energy photon beams

    International Nuclear Information System (INIS)

    Carl, J.; Vestergaard, A.

    2000-01-01

    Patient fixation, such as thermoplastic masks, carbon-fibre support plates and polystyrene bead vacuum cradles, is used to reproduce patient positioning in radiotherapy. Consequently low-density materials may be introduced in high-energy photon beams. The aim of this study was to measure the increase in skin dose when low-density materials are present and to calculate the radiobiological consequences in terms of probabilities of early and late skin damage. An experimental thin-windowed plane-parallel ion chamber was used. Skin doses were measured using various overlaying low-density fixation materials. A fixed geometry of a 10 x 10 cm field, an SSD = 100 cm and photon energies of 4, 6 and 10 MV on Varian Clinac 2100C accelerators were used for all measurements. Radiobiological consequences of introducing these materials into the high-energy photon beams were evaluated in terms of early and late damage of the skin, based on the measured surface doses and the LQ model. The experimental ion chamber gave results consistent with other studies. A relationship between skin dose and material thickness in mg/cm² was established and used to calculate skin doses in scenarios assuming radiotherapy treatment with opposed fields. Conventional radiotherapy may apply mid-point doses up to 60-66 Gy in daily 2-Gy fractions with opposed fields. Using thermoplastic fixation with photon energies as low as 4 MV does increase the dose to the skin considerably. However, using thermoplastic materials with thickness less than 100 mg/cm², skin doses are comparable with those produced by variation in source-to-skin distance, field size or blocking trays within clinical treatment set-ups. The use of polystyrene cradles and carbon-fibre materials with thickness less than 100 mg/cm² should be avoided at 4 MV at doses above 54-60 Gy. (author)

  2. Determination of probability density functions for parameters in the Munson-Dawson model for creep behavior of salt

    International Nuclear Information System (INIS)

    Pfeifle, T.W.; Mellegard, K.D.; Munson, D.E.

    1992-10-01

    The modified Munson-Dawson (M-D) constitutive model that describes the creep behavior of salt will be used in performance assessment calculations to assess compliance of the Waste Isolation Pilot Plant (WIPP) facility with requirements governing the disposal of nuclear waste. One of these standards requires that the uncertainty of future states of the system, material model parameters, and data be addressed in the performance assessment models. This paper presents a method in which measurement uncertainty and the inherent variability of the material are characterized by treating the M-D model parameters as random variables. The random variables can be described by appropriate probability distribution functions which then can be used in Monte Carlo or structural reliability analyses. Estimates of three random variables in the M-D model were obtained by fitting a scalar form of the model to triaxial compression creep data generated from tests of WIPP salt. Candidate probability distribution functions for each of the variables were then fitted to the estimates and their relative goodness-of-fit tested using the Kolmogorov-Smirnov statistic. A sophisticated statistical software package obtained from BMDP Statistical Software, Inc. was used in the M-D model fitting. A separate software package, STATGRAPHICS, was used in fitting the candidate probability distribution functions to estimates of the variables. Skewed distributions, i.e., lognormal and Weibull, were found to be appropriate for the random variables analyzed
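
    The distribution-fitting and goodness-of-fit workflow described in this record can be sketched with SciPy in place of the BMDP/STATGRAPHICS packages used in the study. The sample below is synthetic, right-skewed stand-in data; the WIPP creep-test estimates themselves are not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Synthetic stand-in for the M-D parameter estimates: positive, right-skewed.
    sample = rng.lognormal(mean=0.5, sigma=0.4, size=200)

    # Fit candidate distributions and compare Kolmogorov-Smirnov statistics;
    # a smaller D statistic indicates a better relative fit.
    results = {}
    for dist in (stats.lognorm, stats.weibull_min, stats.norm):
        params = dist.fit(sample)
        results[dist.name] = stats.kstest(sample, dist.name, args=params).statistic
        print(dist.name, round(results[dist.name], 3))
    ```

    As in the paper, skewed candidates (lognormal, Weibull) would be expected to fit such data better than a symmetric normal.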

  3. Transitional Probabilities Are Prioritized over Stimulus/Pattern Probabilities in Auditory Deviance Detection: Memory Basis for Predictive Sound Processing.

    Science.gov (United States)

    Mittag, Maria; Takegata, Rika; Winkler, István

    2016-09-14

    Representations encoding the probabilities of auditory events do not directly support predictive processing. In contrast, information about the probability with which a given sound follows another (transitional probability) allows predictions of upcoming sounds. We tested whether behavioral and cortical auditory deviance detection (the latter indexed by the mismatch negativity event-related potential) relies on probabilities of sound patterns or on transitional probabilities. We presented healthy adult volunteers with three types of rare tone-triplets among frequent standard triplets of high-low-high (H-L-H) or L-H-L pitch structure: proximity deviant (H-H-H/L-L-L), reversal deviant (L-H-L/H-L-H), and first-tone deviant (L-L-H/H-H-L). If deviance detection was based on pattern probability, reversal and first-tone deviants should be detected with similar latency because both differ from the standard at the first pattern position. If deviance detection was based on transitional probabilities, then reversal deviants should be the most difficult to detect because, unlike the other two deviants, they contain no low-probability pitch transitions. The data clearly showed that both behavioral and cortical auditory deviance detection uses transitional probabilities. Thus, the memory traces underlying cortical deviance detection may provide a link between stimulus probability-based change/novelty detectors operating at lower levels of the auditory system and higher auditory cognitive functions that involve predictive processing. Our research presents the first definite evidence for the auditory system prioritizing transitional probabilities over probabilities of individual sensory events. Forming representations for transitional probabilities paves the way for predictions of upcoming sounds. Several recent theories suggest that predictive processing provides the general basis of human perception, including important auditory functions, such as auditory scene analysis. Our

  4. Parametric Probability Distribution Functions for Axon Diameters of Corpus Callosum

    Directory of Open Access Journals (Sweden)

    Farshid Sepehrband

    2016-05-01

    Full Text Available Axon diameter is an important neuroanatomical characteristic of the nervous system that alters in the course of neurological disorders such as multiple sclerosis. Axon diameters vary, even within a fiber bundle, and are not normally distributed. An accurate distribution function is therefore beneficial, either to describe axon diameters that are obtained from a direct measurement technique (e.g., microscopy), or to infer them indirectly (e.g., using diffusion-weighted MRI). The gamma distribution is a common choice for this purpose (particularly for the inferential approach) because it resembles the distribution profile of measured axon diameters, which has been consistently shown to be non-negative and right-skewed. In this study we compared a wide range of parametric probability distribution functions against empirical data obtained from electron microscopy images. We observed that the gamma distribution fails to accurately describe the main characteristics of the axon diameter distribution, such as the location and scale of the mode and the profile of the distribution tails. We also found that the generalized extreme value distribution consistently fitted the measured distribution better than other distribution functions. This suggests that there may be distinct subpopulations of axons in the corpus callosum, each with their own distribution profiles. In addition, we observed that several other distributions outperformed the gamma distribution, yet had the same number of unknown parameters; these were the inverse Gaussian, log-normal, log-logistic and Birnbaum-Saunders distributions.

  5. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    Science.gov (United States)

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  6. Upgrading Probability via Fractions of Events

    Directory of Open Access Journals (Sweden)

    Frič Roman

    2016-08-01

    Full Text Available The influence of "Grundbegriffe" by A. N. Kolmogorov (published in 1933) on education in the area of probability and its impact on research in stochastics cannot be overestimated. We would like to point out three aspects of the classical probability theory "calling for" an upgrade: (i) classical random events are black-and-white (Boolean); (ii) classical random variables do not model quantum phenomena; (iii) basic maps (probability measures and observables, the dual maps to random variables) have very different "mathematical nature". Accordingly, we propose an upgraded probability theory based on Łukasiewicz operations (multivalued logic) on events, elementary category theory, and covering the classical probability theory as a special case. The upgrade can be compared to replacing calculations with integers by calculations with rational (and real) numbers. Namely, to avoid the three objections, we embed the classical (Boolean) random events (represented by the {0, 1}-valued indicator functions of sets) into upgraded random events (represented by measurable [0, 1]-valued functions), the minimal domain of probability containing "fractions" of classical random events, and we upgrade the notions of probability measure and random variable.
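
    The Łukasiewicz operations mentioned in this record have a standard definition on [0, 1]-valued events: the truncated sum and the truncated (bold) product. A minimal sketch:

    ```python
    def luk_sum(a, b):
        """Łukasiewicz (truncated) sum: min(1, a + b)."""
        return min(1.0, a + b)

    def luk_product(a, b):
        """Łukasiewicz product: max(0, a + b - 1)."""
        return max(0.0, a + b - 1.0)

    # On "fractions of events" the operations stay within [0, 1]:
    print(luk_sum(0.6, 0.7), luk_product(0.6, 0.7))
    # Restricted to crisp {0, 1} events they reduce to Boolean OR and AND.
    ```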

  7. Probability theory a foundational course

    CERN Document Server

    Pakshirajan, R P

    2013-01-01

This book shares the dictum of J. L. Doob in treating Probability Theory as a branch of Measure Theory and establishes this relation early. Probability measures in product spaces are introduced right at the start by way of laying the groundwork to later claim the existence of stochastic processes with prescribed finite-dimensional distributions. Other topics analysed in the book include supports of probability measures, zero-one laws in product measure spaces, the Erdős-Kac invariance principle, the functional central limit theorem and functional law of the iterated logarithm for independent variables, Skorohod embedding, and the use of analytic functions of a complex variable in the study of geometric ergodicity in Markov chains. This book is offered as a textbook for students pursuing graduate programs in Mathematics and/or Statistics. The book aims to help the teacher present the theory with ease, and to help the student sustain his interest and joy in learning the subject.

  8. Relativistic calculation of Kβ hypersatellite energies and transition probabilities for selected atoms with 13 ≤ Z ≤ 80

    International Nuclear Information System (INIS)

    Costa, A M; Martins, M C; Santos, J P; Indelicato, P; Parente, F

    2006-01-01

Energies and transition probabilities of Kβ hypersatellite lines are computed using the Dirac-Fock model for several values of Z throughout the periodic table. The influence of the Breit interaction on the energy shifts from the corresponding diagram lines and on the Kβ1h/Kβ3h intensity ratio is evaluated. The widths of the double-K-hole levels are calculated for Al and Sc. The results are compared to experiment and to other theoretical calculations.

  9. A research on the importance function used in the calculation of the fracture probability through the optimum method

    International Nuclear Information System (INIS)

    Zegong, Zhou; Changhong, Liu

    1995-01-01

Taking the original distribution function, shifted by an appropriate distance, as the importance function, this paper uses the variation of the similarity ratio of the original function to the importance function as the objective function; the optimum shifting distance is then obtained by an optimization method. The resulting optimum importance function reduces the number of Monte Carlo simulations required while still yielding good estimates of the yearly failure probabilities.
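The idea of shifting the original density to serve as an importance function can be sketched on a toy problem (a standard-normal tail probability; the shift value and sample size below are illustrative, not the paper's optimized quantities):

```python
# A minimal sketch of importance sampling by shifting: estimate P(X > c) for
# standard normal X by sampling from a normal shifted toward the failure
# region and reweighting with the likelihood ratio.

import math
import random

random.seed(7)

def phi(x, mu=0.0):
    """Normal density with unit variance, centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

c, shift, n = 4.0, 4.0, 20_000
total = 0.0
for _ in range(n):
    x = random.gauss(shift, 1.0)          # sample from the shifted density
    if x > c:
        total += phi(x) / phi(x, shift)   # likelihood-ratio weight
estimate = total / n

exact = 0.5 * math.erfc(c / math.sqrt(2.0))  # true tail probability
print(estimate, exact)
```

With the shift placed at the failure threshold, roughly half of the samples land in the rare-event region, so a few thousand samples suffice where crude Monte Carlo would need hundreds of millions.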

  10. Construction of energy loss function for low-energy electrons in helium

    Energy Technology Data Exchange (ETDEWEB)

    Dayashankar, [Bhabha Atomic Research Centre, Bombay (India). Div. of Radiation Protection

    1976-02-01

    The energy loss function for electrons in the energy range from 50 eV to 1 keV in helium gas has been constructed by considering separately the energy loss in overcoming the ionization threshold, the loss manifested as kinetic energy of secondary electrons and the loss in the discrete state excitations. This has been done by utilizing recent measurements of Opal et al. on the energy spectrum of secondary electrons and incorporating the experimental data on cross sections for twenty-four excited states. The present results of the energy loss function are in good agreement with the Bethe formula for energies above 500 eV. For lower energies, where the Bethe formula is not applicable, the present results should be particularly useful.

  11. Low energy neutron scattering for energy dependent cross sections. General considerations

    Energy Technology Data Exchange (ETDEWEB)

    Rothenstein, W; Dagan, R [Technion-Israel Inst. of Tech., Haifa (Israel). Dept. of Mechanical Engineering

    1996-12-01

    We consider in this paper some aspects related to neutron scattering at low energies by nuclei which are subject to thermal agitation. The scattering is determined by a temperature dependent joint scattering kernel, or the corresponding joint probability density, which is a function of two variables, the neutron energy after scattering, and the cosine of the angle of scattering, for a specified energy and direction of motion of the neutron, before the interaction takes place. This joint probability density is easy to calculate, when the nucleus which causes the scattering of the neutron is at rest. It can be expressed by a delta function, since there is a one to one correspondence between the neutron energy change, and the cosine of the scattering angle. If the thermal motion of the target nucleus is taken into account, the calculation is rather more complicated. The delta function relation between the cosine of the angle of scattering and the neutron energy change is now averaged over the spectrum of velocities of the target nucleus, and becomes a joint kernel depending on both these variables. This function has a simple form, if the target nucleus behaves as an ideal gas, which has a scattering cross section independent of energy. An energy dependent scattering cross section complicates the treatment further. An analytic expression is no longer obtained for the ideal gas temperature dependent joint scattering kernel as a function of the neutron energy after the interaction and the cosine of the scattering angle. Instead the kernel is expressed by an inverse Fourier Transform of a complex integrand, which is averaged over the velocity spectrum of the target nucleus. (Abstract Truncated)
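For the stationary-target case described above, the one-to-one correspondence between the scattering cosine and the outgoing neutron energy follows from elementary two-body kinematics; a small sketch (the mass ratio and energy below are illustrative):

```python
# A minimal sketch (assuming classical two-body elastic kinematics with a
# target nucleus of mass ratio A at rest) of the one-to-one relation between
# the centre-of-mass scattering cosine and the outgoing neutron energy that
# makes the stationary-target kernel a delta function.

def energy_after_scatter(E, A, mu_cm):
    """Outgoing neutron energy for incident energy E, target-to-neutron mass
    ratio A, and centre-of-mass scattering cosine mu_cm, target at rest."""
    return E * (A * A + 1.0 + 2.0 * A * mu_cm) / (A + 1.0) ** 2

E = 1.0e6  # 1 MeV neutron
A = 12.0   # approximate mass ratio for carbon-12
print(energy_after_scatter(E, A, 1.0))   # forward scattering: no energy loss
print(energy_after_scatter(E, A, -1.0))  # backscatter: maximum energy loss
```

Averaging this relation over a Maxwellian spectrum of target velocities is what turns the delta function into the joint kernel discussed in the record.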

  12. Quantum processes: probability fluxes, transition probabilities in unit time and vacuum vibrations

    International Nuclear Information System (INIS)

    Oleinik, V.P.; Arepjev, Ju D.

    1989-01-01

Transition probabilities in unit time and probability fluxes are compared in studying the elementary quantum processes: the decay of a bound state under the action of time-varying and constant electric fields. It is shown that the difference between these quantities may be considerable, and so the use of transition probabilities W instead of probability fluxes Π, in calculating the particle fluxes, may lead to serious errors. The quantity W represents the rate of change with time of the population of the energy levels, relating partly to the real states and partly to the virtual ones, and it cannot be directly measured in experiment. The vacuum background is shown to be continuously distorted when a perturbation acts on a system. Because of this the viewpoint of an observer on the physical properties of real particles continuously varies with time. This fact is not taken into consideration in the conventional theory of quantum transitions based on using the notion of probability amplitude. As a result, the probability amplitudes lose their physical meaning. All the physical information on the quantum dynamics of a system is contained in the mean values of physical quantities. The existence of considerable differences between the quantities W and Π permits one in principle to make a choice of the correct theory of quantum transitions on the basis of experimental data. (author)

  13. Measurement of Plutonium-240 Angular Momentum Dependent Fission Probabilities Using the Alpha-Alpha' Reaction

    Science.gov (United States)

    Koglin, Johnathon

8.0 MeV and one bin from 4.5 MeV to 5.5 MeV. Across energy bins the fission probability increases approximately linearly with excitation energy. At 90° the fission probability increases from 0.069(6) in the lowest energy bin to 0.59(2) in the highest. Likewise, within a single energy bin the fission probability increases with alpha' scattering angle. Within the 6.5 MeV to 7.0 MeV energy bin, the fission probability increased from 0.41(1) at 60° to 0.81(10) at 140°. Fission fragment angular distributions were also measured, integrated over each energy bin. These distributions were fit to theoretical distributions based on combinations of transitional nuclear vibrational and rotational excitations at the saddle point. Contributions from specific K vibrational states were extracted and combined with fission probability measurements to determine the relative fission probability of each state as a function of nuclear excitation energy. Within a given excitation energy bin, it is found that contributions from K states greater than the minimum K = 0 state tend to increase with increasing alpha' scattering angle. This is attributed to an increase in the transferred angular momentum associated with larger scattering angles. The 90° alpha' scattering angle produced the highest quality results. The relative contributions of K states do not show a discernible trend across the energy spectrum. The energy-binned results confirm existing measurements that place a K = 2 state in the first energy bin, with the opening of K = 1 and K = 4 states at energies above 5.5 MeV. This experiment represents the first of its kind in which fission probabilities and angular distributions are simultaneously measured at a large number of scattering angles. The acquired fission probability, angular distribution, and K state contribution provide a diverse dataset against which microscopic fission models can be constrained and further the understanding of the properties of the 240Pu

  14. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    Science.gov (United States)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.

  15. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    OpenAIRE

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2010-01-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth & Pope with Durbin's method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous ...

  16. Experimental evidence for the reducibility of multifragment emission probabilities

    International Nuclear Information System (INIS)

    Wozniak, G.J.; Tso, K.; Phair, L.

    1995-01-01

Multifragmentation has been studied for 36 Ar-induced reactions on a 197 Au target at E/A = 80 and 110 MeV and for 129 Xe-induced reactions on several targets ( nat Cu, 89 Y, 165 Ho, 197 Au) at E/A = 40, 50 and 60 MeV. The probability of emitting n intermediate-mass fragments is shown to be binomial at each transverse energy and reducible to an elementary binary probability p. For each target and at each bombarding energy, this probability p shows a thermal nature by giving linear Arrhenius plots. For the 129 Xe-induced reactions, a nearly universal linear Arrhenius plot is observed at each bombarding energy, indicating a large degree of target independence
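The reducibility claim, that the n-fragment emission probabilities are binomial in a single elementary probability p, can be sketched numerically (the number of tries m and the value of p below are illustrative, not fitted values from the experiment):

```python
# A minimal sketch of reducibility: if the n-fragment emission probabilities
# P_n are binomial with m tries and elementary probability p, the full set of
# P_n is generated by p alone, and p is recovered from the mean multiplicity.

from math import comb

def binomial_pn(m, p, n):
    """Probability of exactly n fragments out of m tries."""
    return comb(m, n) * p**n * (1.0 - p)**(m - n)

m, p = 10, 0.12
pn = [binomial_pn(m, p, n) for n in range(m + 1)]
mean_n = sum(n * x for n, x in enumerate(pn))

print(round(sum(pn), 10))     # the distribution sums to 1
print(round(mean_n / m, 10))  # recovers p = <n>/m
```

In the analysis described above, a linear Arrhenius plot means that ln(1/p) extracted this way varies linearly with the inverse square root of the transverse energy, which is the signature of its thermal nature.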

  17. Introduction to probability theory with contemporary applications

    CERN Document Server

    Helms, Lester L

    2010-01-01

This introduction to probability theory transforms a highly abstract subject into a series of coherent concepts. Its extensive discussions and clear examples, written in plain language, expose students to the rules and methods of probability. Suitable for an introductory probability course, this volume requires abstract and conceptual thinking skills and a background in calculus. Topics include classical probability, set theory, axioms, probability functions, random and independent random variables, expected values, and covariance and correlations. Additional subjects include stochastic processes.

  18. Joint probability discrimination between stationary tissue and blood velocity signals

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2001-01-01

This study presents a new statistical discriminator. Investigation of the RF-signals reveals that features can be derived that distinguish the segments of the signal which do and do not carry information on the blood flow. In this study four features have been determined, including: (a) the energy content in the segments before and after echo-canceling, and (b) the amplitude variations between samples in consecutive RF-signals before and after echo-canceling. The statistical discriminator was obtained by computing the probability density functions (PDFs) for each feature through histogram analysis of data. The discrimination is performed by determining the joint probability of the features for the segment under investigation and choosing the segment type that is most likely. The method was tested on simulated data resembling RF-signals from the carotid artery.
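The joint-probability discrimination step can be sketched as follows, with synthetic Gaussian features standing in for the RF-signal features (the class means, feature count, and the independence assumption are ours, not the paper's):

```python
# A minimal sketch of discrimination by joint feature probability: estimate
# each class's feature PDFs from histograms, then assign a sample to the class
# with the larger joint likelihood, treating the features as independent.

import random

random.seed(0)
# Two classes ("tissue", "blood") with two synthetic features each.
tissue = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(5000)]
blood  = [(random.gauss(3.0, 1.0), random.gauss(3.0, 1.0)) for _ in range(5000)]

def histogram_pdf(values, lo=-5.0, hi=8.0, bins=26):
    """Return a function evaluating the histogram-estimated PDF."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        counts[min(bins - 1, max(0, int((v - lo) / width)))] += 1
    pdf = [c / len(values) / width for c in counts]
    return lambda v: pdf[min(bins - 1, max(0, int((v - lo) / width)))]

pdfs = {name: [histogram_pdf([s[k] for s in data]) for k in (0, 1)]
        for name, data in [("tissue", tissue), ("blood", blood)]}

def classify(sample):
    def joint(name):
        return pdfs[name][0](sample[0]) * pdfs[name][1](sample[1])
    return max(("tissue", "blood"), key=joint)

print(classify((0.1, -0.2)))  # tissue
print(classify((3.2, 2.9)))   # blood
```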

  19. An enviro-economic function for assessing energy resources for district energy systems

    International Nuclear Information System (INIS)

    Rezaie, Behnaz; Reddy, Bale V.; Rosen, Marc A.

    2014-01-01

District energy (DE) systems provide an important means of mitigating greenhouse gas emissions and the significant related concerns associated with global climate change. DE systems can use fossil fuels, renewable energy and waste heat as energy sources, and facilitate intelligent integration of energy systems. In this study, an enviro-economic function is developed for assessing various energy sources for a district energy system. The DE system is assessed for the considered energy resources by considering two main factors: CO2 emissions and economics. Using renewable energy resources and associated technologies as the energy suppliers for a DE system yields environmental benefits which can lead to financial advantages through such instruments as tax breaks, while fossil fuels are increasingly penalized by a carbon tax. Considering these factors as well as the financial value of the technology, an analysis approach is developed for energy suppliers of the DE system. In addition, the proposed approach is modified for the case when thermal energy storage is integrated into a DE system. - Highlights: • Developed a function to assess various energy sources for a district energy system. • Considered CO2 emissions and economics as two main factors. • Applied renewable energy resource technologies as the suppliers for a DE system. • Environmental benefits can lead to financial benefits through instruments such as tax breaks. • Modified the enviro-economic function for the case of TES integrated into a DE system.

  20. Probability Density Function Method for Observing Reconstructed Attractor Structure

    Institute of Scientific and Technical Information of China (English)

    陆宏伟; 陈亚珠; 卫青

    2004-01-01

A probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, this is the first time that the PDF method has been put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6-6.5 dimensional complex dynamical systems. It is found that the PDF is not symmetrically distributed when the time delay is small, while the PDF satisfies a Gaussian distribution when the time delay is large enough. A cluster effect mechanism is presented to explain this phenomenon. By studying the shape of the PDFs, it is clearly indicated that the time delay plays a more important role in the reconstruction than the embedding dimension. The results demonstrate that the PDF method represents a promising numerical approach for the observation of the reconstructed attractor structure and may provide more information and new diagnostic potential for the analyzed cardiac system.
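The sensitivity of the reconstructed attractor to the time delay can be sketched with a toy delay embedding (the sine signal and the correlation diagnostic below are our illustration; the paper works with RR-interval data and full PDFs of the phase points):

```python
# A minimal sketch of delay-coordinate reconstruction: embed a signal as
# vectors (x[t], x[t + tau]) and inspect how the spatial distribution of the
# phase points depends on the delay tau.

import math

N = 2000
x = [math.sin(0.05 * t) for t in range(N)]

def delay_vectors(x, tau):
    return [(x[t], x[t + tau]) for t in range(len(x) - tau)]

def correlation(pairs):
    """Pearson correlation between the two embedding coordinates."""
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    sxy = sum((a - mx) * (b - my) for a, b in pairs)
    sx = math.sqrt(sum((a - mx) ** 2 for a, _ in pairs))
    sy = math.sqrt(sum((b - my) ** 2 for _, b in pairs))
    return sxy / (sx * sy)

# Small delay: the phase points hug the diagonal (clustered distribution).
print(correlation(delay_vectors(x, 1)))
# Delay near a quarter period (~31 samples here): the attractor unfolds.
print(correlation(delay_vectors(x, 31)))
```

The clustering at small delay is the same effect that makes the PDF of the phase points asymmetric in the record above, while a sufficiently large delay spreads the points out.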

  1. Assumed Probability Density Functions for Shallow and Deep Convection

    Directory of Open Access Journals (Sweden)

    Steven K Krueger

    2010-10-01

Full Text Available The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PDF families are based on the double Gaussian form and the remaining two are the single Gaussian and a Double Delta Function (analogous to a mass flux model). The assumed PDF method is tested for grid sizes from as small as 0.4 km to as large as 204.8 km. In addition, studies are performed for PDF sensitivity to errors in the input moments and for how well the PDFs diagnose some higher-order moments. In general, the double Gaussian PDFs more accurately represent SGS cloud structure and turbulence moments in the boundary layer compared to the single Gaussian and Double Delta Function PDFs for the range of grid sizes tested. This is especially true for small SGS cloud fractions. While the most complex PDF, Lewellen-Yoh, better represents shallow convective cloud properties (cloud fraction and liquid water mixing ratio) compared to the less complex Analytic Double Gaussian 1 PDF, there appears to be no advantage in implementing Lewellen-Yoh for deep convection. However, the Analytic Double Gaussian 1 PDF better represents the liquid water flux, is less sensitive to errors in the input moments, and diagnoses higher order moments more accurately. Between the Lewellen-Yoh and Analytic Double Gaussian 1 PDFs, it appears that neither family is distinctly better at representing cloudy layers. However, due to the reduced computational cost and fairly robust results, it appears that the Analytic Double Gaussian 1 PDF could be an ideal family for SGS cloud and turbulence
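The assumed-PDF diagnosis of cloud fraction can be sketched for a double-Gaussian total-water PDF (the mixture weights, means, widths, and saturation value below are hypothetical, not values from the simulations):

```python
# A minimal sketch of the assumed-PDF idea: with a double-Gaussian subgrid
# PDF of total water, the SGS cloud fraction is the probability mass above
# the saturation value.

import math

def normal_sf(x, mu, sigma):
    """Survival function P(X > x) of a Gaussian."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))

def cloud_fraction(q_sat, components):
    """components: list of (weight, mean, std) of the Gaussian mixture."""
    return sum(w * normal_sf(q_sat, mu, s) for w, mu, s in components)

# Hypothetical moist and dry modes of total water (g/kg) in a grid box.
mixture = [(0.3, 9.0, 0.5), (0.7, 6.0, 0.8)]
q_sat = 8.0

print(round(cloud_fraction(q_sat, mixture), 3))
```

A single Gaussian with the same overall mean and variance would give a noticeably different cloud fraction here, which is the kind of small-cloud-fraction regime where the record notes the double Gaussian families do better.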

  2. On the shake-off probability for atomic systems

    Energy Technology Data Exchange (ETDEWEB)

    Santos, A.C.F., E-mail: toniufrj@gmail.com [Instituto de Física, Universidade Federal do Rio de Janeiro, P.O. Box 68528, 21941-972 Rio de Janeiro, RJ (Brazil); Almeida, D.P. [Departamento de Física, Universidade Federal de Santa Catarina, 88040-900 Florianópolis (Brazil)

    2016-07-15

Highlights: • The scope is to find the relationship among SO probabilities, Z and electron density. • A scaling law is suggested, allowing us to find the SO probabilities for atoms. • SO probabilities have been scaled as a function of target Z and polarizability. - Abstract: The main focus of this work is the relationship between shake-off probabilities, target atomic number and electron density. By comparing the saturation values of measured double-to-single photoionization ratios from the literature, a simple scaling law has been found, which allows us to predict the shake-off probabilities for several elements up to Z = 54 within a factor of 2. The electron shake-off probabilities accompanying valence shell photoionization have been scaled as a function of the target atomic number, Z, and polarizability, α. This behavior is in qualitative agreement with the experimental results.

  3. Probability densities and Lévy densities

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler

For positive Lévy processes (i.e. subordinators) formulae are derived that express the probability density or the distribution function in terms of power series in time t. The applicability of the results to finance and to turbulence is briefly indicated.

  4. Energies, wavelengths, and transition probabilities for Ge-like Kr, Mo, Sn, and Xe ions

    International Nuclear Information System (INIS)

    Nagy, O.; El Sayed, Fatma

    2012-01-01

Energy levels, wavelengths, transition probabilities, and oscillator strengths have been calculated for Ge-like Kr, Mo, Sn, and Xe ions among the fine-structure levels of terms belonging to the ([Ar]3d^10)4s^2 4p^2, ([Ar]3d^10)4s 4p^3, ([Ar]3d^10)4s^2 4p 4d, and ([Ar]3d^10)4p^4 configurations. The fully relativistic multiconfiguration Dirac-Fock method, taking both correlations within the n = 4 complex and the quantum electrodynamic effects into account, has been used in the calculations. The results are compared with the available experimental and other theoretical results.

  5. Application of tests of goodness of fit in determining the probability density function for spacing of steel sets in tunnel support system

    Directory of Open Access Journals (Sweden)

    Farnoosh Basaligheh

    2015-12-01

Full Text Available One of the conventional methods for temporary support of tunnels is to use steel sets with shotcrete. The nature of a temporary support system demands a quick installation of its structures. As a result, the spacing between steel sets is not a fixed amount and it can be considered as a random variable. Hence, in the reliability analysis of these types of structures, the selection of an appropriate probability distribution function for the spacing of steel sets is essential. In the present paper, the distances between steel sets are collected from an under-construction tunnel and the collected data is used to suggest a proper Probability Distribution Function (PDF) for the spacing of steel sets. The tunnel has two different excavation sections. In this regard, different distribution functions were investigated and three common tests of goodness of fit were used for evaluation of each function for each excavation section. Results from all three methods indicate that the Wakeby distribution function can be suggested as the proper PDF for the spacing between the steel sets. It is also noted that, although the probability distribution function is the same for the two different tunnel sections, the parameters of the PDF for the individual sections differ from each other.
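One common goodness-of-fit check of the kind used above, the Kolmogorov-Smirnov statistic, can be sketched on synthetic spacing data (the exponential spacings and the two candidate distributions are our illustration; the paper's data favor a Wakeby distribution):

```python
# A minimal sketch of a goodness-of-fit comparison: the Kolmogorov-Smirnov
# statistic of a sample against candidate distributions fitted by moments.

import math
import random

random.seed(1)
spacings = [random.expovariate(1.0 / 1.2) for _ in range(500)]  # mean ~1.2 m

def ks_statistic(sample, cdf):
    """Max distance between the empirical CDF and a candidate CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, v in enumerate(xs):
        f = cdf(v)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

mean = sum(spacings) / len(spacings)
exp_cdf = lambda v: 1.0 - math.exp(-v / mean)  # fitted exponential
# Normal with matching moments (an exponential's std equals its mean).
norm_cdf = lambda v: 0.5 * (1.0 + math.erf((v - mean) / (mean * math.sqrt(2.0))))

print(round(ks_statistic(spacings, exp_cdf), 3))   # small: plausible fit
print(round(ks_statistic(spacings, norm_cdf), 3))  # larger: poor fit
```

Ranking candidate PDFs by such statistics (KS, Anderson-Darling, chi-squared are the usual trio) is what selects the distribution family for each excavation section.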

  6. Multiple-event probability in general-relativistic quantum mechanics

    International Nuclear Information System (INIS)

    Hellmann, Frank; Mondragon, Mauricio; Perez, Alejandro; Rovelli, Carlo

    2007-01-01

We discuss the definition of quantum probability in the context of 'timeless' general-relativistic quantum mechanics. In particular, we study the probability of sequences of events, or multievent probability. In conventional quantum mechanics this can be obtained by means of the 'wave function collapse' algorithm. We first point out certain difficulties of some natural definitions of multievent probability, including the conditional probability widely considered in the literature. We then observe that multievent probability can be reduced to single-event probability, by taking into account the quantum nature of the measuring apparatus. In fact, by exploiting the von Neumann freedom of moving the quantum/classical boundary, one can always trade a sequence of noncommuting quantum measurements at different times for an ensemble of simultaneous commuting measurements on the joint system + apparatus system. This observation permits a formulation of quantum theory based only on single-event probability, where the results of the wave function collapse algorithm can nevertheless be recovered. The discussion also bears on the nature of the quantum collapse.

  7. Ignition probabilities for Compact Ignition Tokamak designs

    International Nuclear Information System (INIS)

    Stotler, D.P.; Goldston, R.J.

    1989-09-01

A global power balance code employing Monte Carlo techniques has been developed to study the ''probability of ignition'' and has been applied to several different configurations of the Compact Ignition Tokamak (CIT). Probability distributions for the critical physics parameters in the code were estimated using existing experimental data. This included a statistical evaluation of the uncertainty in extrapolating the energy confinement time. A substantial probability of ignition is predicted for CIT if peaked density profiles can be achieved or if one of the two higher plasma current configurations is employed. In other cases, values of the energy multiplication factor Q of order 10 are generally obtained. The Ignitor-U and ARIES designs are also examined briefly. Comparisons of our empirically based confinement assumptions with two theory-based transport models yield conflicting results. 41 refs., 11 figs
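The Monte Carlo "probability of ignition" idea can be sketched with a toy power balance (the gain formula, uncertainty distributions, and threshold below are hypothetical stand-ins, not the CIT code's physics):

```python
# A minimal sketch of a Monte Carlo "probability of ignition": draw uncertain
# physics multipliers from assumed distributions and count the fraction of
# samples whose energy gain exceeds a target.

import random

random.seed(42)

def sample_gain():
    # Hypothetical uncertainties on confinement and density-profile peaking.
    confinement = random.lognormvariate(0.0, 0.3)  # multiplier on tau_E
    peaking = random.uniform(0.8, 1.6)             # density profile factor
    return 8.0 * confinement * peaking             # toy energy gain Q

trials = 50_000
q_values = [sample_gain() for _ in range(trials)]
p_ignition = sum(q >= 20.0 for q in q_values) / trials  # "ignition" threshold

print(round(p_ignition, 3))
```

The point of the method is that the output is a full distribution of Q, so one can report both the ignition probability and typical gains (here the median Q is near the nominal value of 8) from the same ensemble.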

  8. Multiplicities and minijets at Tevatron Collider energies

    International Nuclear Information System (INIS)

    Sarcevic, I.

    1989-01-01

We show that in the parton branching model, the probability distribution does not obey KNO scaling. As energy increases, the gluon contribution to multiplicities increases, resulting in the widening of the probability distribution, in agreement with experimental data. We predict that the widening of the distribution will stop at Tevatron Collider energies due to the dominant role of gluons at these energies. We also find that the gluon contribution to the 'minijet' cross section increases with energy and becomes dominant at the Tevatron Collider. We calculate QCD minijet cross sections for a variety of structure functions, QCD scales and p_T^min values. We compare our theoretical results with the experimental data and find that some of the structure functions and choices of scale are preferred by the experimental data. We give theoretical predictions for the minijet cross section at the Tevatron Collider, indicating the possibility of distinguishing between different sets of structure functions and choices of scale. (orig.)

  9. Fusing probability density function into Dempster-Shafer theory of evidence for the evaluation of water treatment plant.

    Science.gov (United States)

    Chowdhury, Shakhawat

    2013-05-01

    The evaluation of the status of a municipal drinking water treatment plant (WTP) is important. The evaluation depends on several factors, including, human health risks from disinfection by-products (R), disinfection performance (D), and cost (C) of water production and distribution. The Dempster-Shafer theory (DST) of evidence can combine the individual status with respect to R, D, and C to generate a new indicator, from which the overall status of a WTP can be evaluated. In the DST, the ranges of different factors affecting the overall status are divided into several segments. The basic probability assignments (BPA) for each segment of these factors are provided by multiple experts, which are then combined to obtain the overall status. In assigning the BPA, the experts use their individual judgments, which can impart subjective biases in the overall evaluation. In this research, an approach has been introduced to avoid the assignment of subjective BPA. The factors contributing to the overall status were characterized using the probability density functions (PDF). The cumulative probabilities for different segments of these factors were determined from the cumulative density function, which were then assigned as the BPA for these factors. A case study is presented to demonstrate the application of PDF in DST to evaluate a WTP, leading to the selection of the required level of upgradation for the WTP.
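Dempster's rule of combination, which merges the experts' basic probability assignments (BPAs), can be sketched on a two-element frame (the masses below are illustrative, not values from the case study):

```python
# A minimal sketch of Dempster's rule of combination on the frame
# {good, poor}, with "unknown" denoting the mass on the full set {good, poor}.

def combine(m1, m2, frame=("good", "poor", "unknown")):
    """Combine two BPAs with Dempster's rule, renormalizing away conflict."""
    def meet(a, b):
        if a == "unknown":
            return b
        if b == "unknown":
            return a
        return a if a == b else None  # None marks conflicting evidence
    combined = {h: 0.0 for h in frame}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = meet(a, b)
            if c is None:
                conflict += wa * wb
            else:
                combined[c] += wa * wb
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

expert1 = {"good": 0.6, "poor": 0.1, "unknown": 0.3}
expert2 = {"good": 0.5, "poor": 0.2, "unknown": 0.3}

result = combine(expert1, expert2)
print({k: round(v, 3) for k, v in result.items()})
```

The approach in the record replaces the subjectively chosen masses with cumulative probabilities computed from fitted PDFs of R, D, and C; the combination rule itself is unchanged.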

  10. Energy master equation

    DEFF Research Database (Denmark)

    Dyre, Jeppe

    1995-01-01

    energies chosen randomly according to a Gaussian. The random-walk model is here derived from Newton's laws by making a number of simplifying assumptions. In the second part of the paper an approximate low-temperature description of energy fluctuations in the random-walk model—the energy master equation...... (EME)—is arrived at. The EME is one dimensional and involves only energy; it is derived by arguing that percolation dominates the relaxational properties of the random-walk model at low temperatures. The approximate EME description of the random-walk model is expected to be valid at low temperatures...... of the random-walk model. The EME allows a calculation of the energy probability distribution at realistic laboratory time scales for an arbitrarily varying temperature as function of time. The EME is probably the only realistic equation available today with this property that is also explicitly consistent...

  11. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Cieplak, Agnieszka M.; Slosar, Anže, E-mail: acieplak@bnl.gov, E-mail: anze@bnl.gov [Brookhaven National Laboratory, Bldg 510, Upton, NY, 11973 (United States)

    2017-10-01

The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.
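The key algebraic point, that the n-th Legendre coefficient is a linear combination of the first n moments, can be sketched directly (the uniform PDF on [-1, 1] below is our check case, not the paper's pipeline):

```python
# A minimal sketch: on [-1, 1], the n-th Legendre expansion coefficient of a
# PDF f is c_n = (2n+1)/2 * E[P_n(x)], a linear combination of the raw
# moments m_k = E[x^k], written out here for n = 0..3.

def legendre_coeff(n, m):
    """Coefficient c_n from the raw moments m[0..3] (m[0] = 1)."""
    pn = {0: m[0],
          1: m[1],
          2: 0.5 * (3 * m[2] - m[0]),
          3: 0.5 * (5 * m[3] - 3 * m[1])}[n]
    return (2 * n + 1) / 2.0 * pn

# Moments of the uniform PDF f(x) = 1/2 on [-1, 1]:
# E[x^k] = 0 for odd k, 1/(k+1) for even k.
moments = [1.0, 0.0, 1.0 / 3.0, 0.0]
coeffs = [legendre_coeff(n, moments) for n in range(4)]
print(coeffs)  # only c_0 = 1/2 survives, recovering f(x) = (1/2) P_0(x)
```

Because c_n depends only on moments up to order n, noise in the data degrades the high-order coefficients first, which is the sharp transition into noise dominance described above.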

  12. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    Science.gov (United States)

    Cieplak, Agnieszka M.; Slosar, Anže

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  13. Building a universal nuclear energy density functional

    International Nuclear Information System (INIS)

    Bertsch, G F

    2007-01-01

    This talk describes a new project in SciDAC II in the area of low-energy nuclear physics. The motivation and goals of the SciDAC are presented as well as an outline of the theoretical and computational methodology that will be employed. An important motivation is to have more accurate and reliable predictions of nuclear properties including their binding energies and low-energy reaction rates. The theoretical basis is provided by density functional theory, which is the only available theory that can be systematically applied to all nuclei. However, other methodologies based on wave function methods are needed to refine the functionals and to make applications to dynamic processes.

  14. Ground state energy and wave function of an off-centre donor in spherical core/shell nanostructures: Dielectric mismatch and impurity position effects

    Energy Technology Data Exchange (ETDEWEB)

    Ibral, Asmaa [Equipe d’Optique et Electronique du Solide, Département de Physique, Faculté des Sciences, Université Chouaïb Doukkali, B.P. 20 El Jadida Principale, El Jadida 24000 (Morocco); Laboratoire d’Instrumentation, Mesure et Contrôle, Département de Physique, Université Chouaïb Doukkali, B.P. 20 El Jadida Principale, El Jadida (Morocco); Zouitine, Asmae [Département de Physique, Ecole Nationale Supérieure d’Enseignement Technique, Université Mohammed V Souissi, B.P. 6207 Rabat-Instituts, Rabat (Morocco); Assaid, El Mahdi, E-mail: eassaid@yahoo.fr [Equipe d’Optique et Electronique du Solide, Département de Physique, Faculté des Sciences, Université Chouaïb Doukkali, B.P. 20 El Jadida Principale, El Jadida 24000 (Morocco); Laboratoire d’Instrumentation, Mesure et Contrôle, Département de Physique, Université Chouaïb Doukkali, B.P. 20 El Jadida Principale, El Jadida (Morocco); Feddi, El Mustapha [Département de Physique, Ecole Nationale Supérieure d’Enseignement Technique, Université Mohammed V Souissi, B.P. 6207 Rabat-Instituts, Rabat (Morocco); and others

    2014-09-15

    Ground state energy and wave function of a hydrogen-like off-centre donor impurity, confined anywhere in a ZnS/CdSe spherical core/shell nanostructure, are determined in the framework of the envelope function approximation. Conduction band-edge alignment between core and shell of the nanostructure is described by a finite height barrier. Dielectric constant mismatch at the surface where core and shell materials meet is taken into account. Electron effective mass mismatch at the inner surface between core and shell is considered. A trial wave function incorporating the Coulomb attraction between the electron and the off-centre ionized donor is used to calculate the ground state energy via the Ritz variational principle. The numerical approach developed enables access to the dependence of the binding energy, Coulomb correlation parameter, spatial extension and radial probability density on core radius, shell radius and impurity position inside the ZnS/CdSe core/shell nanostructure.

  15. Ground state energy and wave function of an off-centre donor in spherical core/shell nanostructures: Dielectric mismatch and impurity position effects

    International Nuclear Information System (INIS)

    Ibral, Asmaa; Zouitine, Asmae; Assaid, El Mahdi; Feddi, El Mustapha

    2014-01-01

    Ground state energy and wave function of a hydrogen-like off-centre donor impurity, confined anywhere in a ZnS/CdSe spherical core/shell nanostructure, are determined in the framework of the envelope function approximation. Conduction band-edge alignment between core and shell of the nanostructure is described by a finite height barrier. Dielectric constant mismatch at the surface where core and shell materials meet is taken into account. Electron effective mass mismatch at the inner surface between core and shell is considered. A trial wave function incorporating the Coulomb attraction between the electron and the off-centre ionized donor is used to calculate the ground state energy via the Ritz variational principle. The numerical approach developed enables access to the dependence of the binding energy, Coulomb correlation parameter, spatial extension and radial probability density on core radius, shell radius and impurity position inside the ZnS/CdSe core/shell nanostructure.

  16. Calculation of transition probabilities using the multiconfiguration Dirac-Fock method

    International Nuclear Information System (INIS)

    Kim, Yong Ki; Desclaux, Jean Paul; Indelicato, Paul

    1998-01-01

    The performance of the multiconfiguration Dirac-Fock (MCDF) method in calculating transition probabilities of atoms is reviewed. In general, the MCDF wave functions will lead to transition probabilities accurate to ∼ 10% or better for strong, electric-dipole allowed transitions for small atoms. However, it is more difficult to get reliable transition probabilities for weak transitions. Also, some MCDF wave functions for a specific J quantum number may not reduce to the appropriate L and S quantum numbers in the nonrelativistic limit. Transition probabilities calculated from such MCDF wave functions for nonrelativistically forbidden transitions are unreliable. Remedies for such cases are discussed

  17. On the probability of cure for heavy-ion radiotherapy

    International Nuclear Information System (INIS)

    Hanin, Leonid; Zaider, Marco

    2014-01-01

    The probability of a cure in radiation therapy (RT)—viewed as the probability of eventual extinction of all cancer cells—is unobservable, and the only way to compute it is through modeling the dynamics of the cancer cell population during and post-treatment. The conundrum at the heart of biophysical models aimed at such prospective calculations is the absence of information on the initial size of the subpopulation of clonogenic cancer cells (also called stem-like cancer cells) that largely determines the outcome of RT, in both individual and population settings. Other relevant parameters (e.g. potential doubling time, cell loss factor and survival probability as a function of dose) are, at least in principle, amenable to empirical determination. In this article we demonstrate that, for heavy-ion RT, microdosimetric considerations (justifiably ignored in conventional RT) combined with an expression for the clone extinction probability obtained from a mechanistic model of radiation cell survival lead to useful upper bounds on the size of the pre-treatment population of clonogenic cancer cells as well as upper and lower bounds on the cure probability. The main practical impact of these limiting values is the ability to make predictions about the probability of a cure for a given population of patients treated with newer, still unexplored treatment modalities from the empirically determined probability of a cure for the same or similar population resulting from conventional low linear energy transfer (typically photon/electron) RT. We also propose that the current trend to deliver a lower total dose in a smaller number of fractions with larger-than-conventional doses per fraction has physical limits that must be understood before embarking on a particular treatment schedule. (paper)

  18. Electroweak splitting functions and high energy showering

    Science.gov (United States)

    Chen, Junmou; Han, Tao; Tweedie, Brock

    2017-11-01

    We derive the electroweak (EW) collinear splitting functions for the Standard Model, including the massive fermions, gauge bosons and the Higgs boson. We first present the splitting functions in the limit of unbroken SU(2)_L × U(1)_Y and discuss their general features in the collinear and soft-collinear regimes. These are the leading contributions at a splitting scale (k_T) far above the EW scale (v). We then systematically incorporate EW symmetry breaking (EWSB), which leads to the emergence of additional "ultra-collinear" splitting phenomena and naive violations of the Goldstone-boson Equivalence Theorem. We suggest a particularly convenient choice of non-covariant gauge (dubbed "Goldstone Equivalence Gauge") that disentangles the effects of Goldstone bosons and gauge fields in the presence of EWSB, and allows trivial book-keeping of leading power corrections in v/k_T. We implement a comprehensive, practical EW showering scheme based on these splitting functions using a Sudakov evolution formalism. Novel features in the implementation include a complete accounting of ultra-collinear effects, matching between shower and decay, kinematic back-reaction corrections in multi-stage showers, and mixed-state evolution of neutral bosons (γ/Z/h) using density matrices. We employ the EW showering formalism to study a number of important physical processes at O(1-10 TeV) energies. They include (a) electroweak partons in the initial state as the basis for vector-boson-fusion; (b) the emergence of "weak jets" such as those initiated by transverse gauge bosons, with individual splitting probabilities as large as O(35%); (c) EW showers initiated by top quarks, including Higgs bosons in the final state; (d) the occurrence of O(1) interference effects within EW showers involving the neutral bosons; and (e) EW corrections to new physics processes, as illustrated by production of a heavy vector boson (W') and the subsequent showering of its decay products.

  19. Approximation of Measurement Results of “Emergency” Signal Reception Probability

    Directory of Open Access Journals (Sweden)

    Gajda Stanisław

    2017-08-01

    Full Text Available The intended aim of this article is to present approximation results of the exemplary measurements of EMERGENCY signal reception probability. The probability is understood as a function of the distance between the aircraft and a ground-based system under established conditions. The measurements were approximated using the properties of logistic functions. This probability, as a function of distance, enables determination of the range of the EMERGENCY signal for a pre-set confidence level.
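A minimal illustration of such a logistic approximation, with made-up distance/probability measurements standing in for the article's data (all numbers hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(d, d50, s):
    """Reception probability vs distance d: near 1 at short range,
    falling through 0.5 at d50 with slope scale s."""
    return 1.0 / (1.0 + np.exp((d - d50) / s))

# Hypothetical measurements of EMERGENCY signal reception probability
# at increasing aircraft-to-ground distances (km).
dist = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
prob = np.array([0.99, 0.97, 0.90, 0.72, 0.45, 0.20, 0.07, 0.02])

(d50, s), _ = curve_fit(logistic, dist, prob, p0=(50.0, 10.0))

def signal_range(p_conf, d50, s):
    """Invert the fit: distance at which reception probability equals p_conf."""
    return d50 + s * np.log((1.0 - p_conf) / p_conf)

r90 = signal_range(0.90, d50, s)   # range for a 90% confidence level
```

Inverting the fitted curve, as in `signal_range`, is what turns the probability model into a range estimate for a chosen confidence level.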

  20. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning.

    Science.gov (United States)

    Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji

    2016-12-01

    Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only works well with binary, or close to binary, state input, where the number of active states is fewer than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and robustness of the FERL method can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL) instead of the negative free energy, while also handling continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
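For an RBM with binary hidden units, both value-function approximations have closed forms: the negative free energy contains a softplus term per hidden unit, while the negative expected energy replaces it with sigmoid(z)·z, dropping the hidden-unit entropy. A self-contained sketch with random, purely illustrative weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_free_energy(v, W, b, c):
    """FERL value estimate: -F(v) = b.v + sum_j log(1 + exp(z_j)),
    with z_j = c_j + (v W)_j for an RBM with binary hidden units."""
    z = c + v @ W
    return b @ v + np.sum(np.logaddexp(0.0, z))

def neg_expected_energy(v, W, b, c):
    """EERL value estimate: -<E(v,h)> under p(h|v),
    i.e. b.v + sum_j sigmoid(z_j) * z_j (hidden-unit entropy dropped)."""
    z = c + v @ W
    return b @ v + np.sum(sigmoid(z) * z)

rng = np.random.default_rng(1)
n_vis, n_hid = 16, 8
W = rng.normal(scale=0.1, size=(n_vis, n_hid))  # illustrative weights
b = rng.normal(scale=0.1, size=n_vis)           # visible biases
c = rng.normal(scale=0.1, size=n_hid)           # hidden biases
v = rng.integers(0, 2, size=n_vis).astype(float)

v_ferl = neg_free_energy(v, W, b, c)
v_eerl = neg_expected_energy(v, W, b, c)
```

Since -F = -⟨E⟩ + H with H ≥ 0 the entropy of the hidden units, the FERL estimate always upper-bounds the EERL estimate; the two differ by exactly that entropy term.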

  1. Functional materials discovery using energy-structure-function maps.

    Science.gov (United States)

    Pulido, Angeles; Chen, Linjiang; Kaczorowski, Tomasz; Holden, Daniel; Little, Marc A; Chong, Samantha Y; Slater, Benjamin J; McMahon, David P; Bonillo, Baltasar; Stackhouse, Chloe J; Stephenson, Andrew; Kane, Christopher M; Clowes, Rob; Hasell, Tom; Cooper, Andrew I; Day, Graeme M

    2017-03-30

    Molecular crystals cannot be designed in the same manner as macroscopic objects, because they do not assemble according to simple, intuitive rules. Their structures result from the balance of many weak interactions, rather than from the strong and predictable bonding patterns found in metal-organic frameworks and covalent organic frameworks. Hence, design strategies that assume a topology or other structural blueprint will often fail. Here we combine computational crystal structure prediction and property prediction to build energy-structure-function maps that describe the possible structures and properties that are available to a candidate molecule. Using these maps, we identify a highly porous solid, which has the lowest density reported for a molecular crystal so far. Both the structure of the crystal and its physical properties, such as methane storage capacity and guest-molecule selectivity, are predicted using the molecular structure as the only input. More generally, energy-structure-function maps could be used to guide the experimental discovery of materials with any target function that can be calculated from predicted crystal structures, such as electronic structure or mechanical properties.

  2. Uncertainty of Hydrological Drought Characteristics with Copula Functions and Probability Distributions: A Case Study of Weihe River, China

    Directory of Open Access Journals (Sweden)

    Panpan Zhao

    2017-05-01

    Full Text Available This study investigates the sensitivity and uncertainty of hydrological drought frequency and severity in the Weihe Basin, China during 1960–2012, by using six commonly used univariate probability distributions and three Archimedean copulas to fit the marginal and joint distributions of drought characteristics. The Anderson-Darling method is used for testing the goodness-of-fit of the univariate models, and the Akaike information criterion (AIC) is applied to select the best distribution and copula functions. The results demonstrate that there is a very strong correlation between drought duration and drought severity at the three stations. The drought return period varies depending on the selected marginal distributions and copula functions and, with an increase of the return period, the differences become larger. In addition, the estimated return periods (both co-occurrence and joint) from the best-fitted copulas are the closest to those from the empirical distribution. Therefore, it is critical to select the appropriate marginal distribution and copula function to model hydrological drought frequency and severity. The results of this study can not only help drought investigations select a suitable probability distribution and copula function, but are also useful for regional water resource management. However, a few limitations remain in this study, such as the assumption of stationarity of the runoff series.
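A minimal sketch of the copula-based return-period calculation, assuming a Gumbel copula and hypothetical marginal probabilities; the study's fitted marginals and AIC-based selection are not reproduced here:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel (Archimedean) copula C(u, v; theta), theta >= 1.
    theta = 1 reduces to independence, C(u, v) = u * v."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period(u, v, theta, mu=1.0):
    """'And' (co-occurrence) return period: duration AND severity both
    exceed their thresholds. mu is the mean drought interarrival time (years)."""
    p_and = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_and

# Hypothetical marginal non-exceedance probabilities of a drought event:
# F_D(d) and F_S(s) from fitted duration and severity marginals.
u, v = 0.9, 0.95
t_and = joint_return_period(u, v, theta=2.5)   # years
```

Because the joint exceedance probability cannot exceed either marginal exceedance probability, the "and" return period is always at least as long as the larger marginal return period (here 20 years), which is why the choice of copula and its dependence parameter matters so much for the estimate.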

  3. Kramers-Kronig transform for the surface energy loss function

    International Nuclear Information System (INIS)

    Tan, G.L.; DeNoyer, L.K.; French, R.H.; Guittet, M.J.; Gautier-Soyer, M.

    2005-01-01

    A new pair of Kramers-Kronig (KK) dispersion relationships for the transformation of the surface energy loss function Im[-1/(ε + 1)] is proposed. The validity of the new surface KK transform is confirmed using both a Lorentz oscillator model and the surface energy loss functions determined from the experimental complex dielectric functions of SrTiO3 and tungsten metal. The interband transition strength spectra (Jcv) have been derived either directly from the original complex dielectric function or from the dielectric function obtained from the KK transform of the surface energy loss function. The original and post-transform Jcv traces overlap for the three modes, indicating that the new surface Kramers-Kronig dispersion relationship is valid for the surface energy loss function.

  4. Interplanetary ions during an energetic storm particle event - The distribution function from solar wind thermal energies to 1.6 MeV

    Science.gov (United States)

    Gosling, J. T.; Asbridge, J. R.; Bame, S. J.; Feldman, W. C.; Zwickl, R. D.; Paschmann, G.; Sckopke, N.; Hynds, R. J.

    1981-01-01

    An ion velocity distribution function of the postshock phase of an energetic storm particle (ESP) event is obtained from data from the ISEE 2 and ISEE 3 experiments. The distribution function is roughly isotropic in the solar wind frame from solar wind thermal energies to 1.6 MeV. The ESP event studied (8/27/78) is superposed upon a more energetic particle event which was predominantly field-aligned and which was probably of solar origin. The observations suggest that the ESP population is accelerated directly out of the solar wind thermal population or its quiescent suprathermal tail by a stochastic process associated with the shock wave disturbance. The acceleration mechanism is sufficiently efficient that approximately 1% of the solar wind population is accelerated to suprathermal energies. These suprathermal particles have an energy density of approximately 290 eV cm⁻³.

  5. On the discretization of probability density functions and the ...

    Indian Academy of Sciences (India)

    important for most applications or theoretical problems of interest. In statistics ... In probability theory, statistics, statistical mechanics, communication theory, and other .... (1) by taking advantage of SMVT as a general mathematical approach.

  6. Approach to kinetic energy density functionals: Nonlocal terms with the structure of the von Weizsaecker functional

    International Nuclear Information System (INIS)

    Garcia-Aldea, David; Alvarellos, J. E.

    2008-01-01

    We propose a kinetic energy density functional scheme with nonlocal terms based on the von Weizsaecker functional, instead of the more traditional approach where the nonlocal terms have the structure of the Thomas-Fermi functional. The proposed functionals recover the exact kinetic energy and reproduce the linear response function of homogeneous electron systems. In order to assess their quality, we have tested the total kinetic energies as well as the kinetic energy density for atoms. The results show that these nonlocal functionals give results as good as the most sophisticated functionals in the literature. The proposed scheme for constructing the functionals represents a step forward in the field of fully nonlocal kinetic energy functionals, because they are capable of giving better local behavior than the semilocal functionals while yielding accurate results for total kinetic energies. Moreover, the functionals enjoy the possibility of being evaluated as a single integral in momentum space if an adequate reference density is defined, and then quasilinear scaling for the computational cost can be achieved.
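As a point of reference for such schemes, the von Weizsaecker term T_W[ρ] = (1/8)∫|∇ρ|²/ρ dr is exact for any single-orbital density. A small numerical check (in Hartree atomic units, using the hydrogen 1s density, for which the exact kinetic energy is 0.5 Ha) illustrates this:

```python
import numpy as np

# von Weizsaecker kinetic energy T_W[rho] = (1/8) int |grad rho|^2 / rho dr,
# evaluated radially for the hydrogen 1s density rho(r) = exp(-2r)/pi (a.u.).
# For a single-orbital density T_W is exact, so the result should be 0.5 Ha.
r = np.linspace(1e-6, 30.0, 200_001)
dr = r[1] - r[0]
rho = np.exp(-2.0 * r) / np.pi
drho = np.gradient(rho, r)                      # numerical d(rho)/dr

integrand = drho**2 / rho * 4.0 * np.pi * r**2  # radial volume element
t_w = 0.125 * integrand.sum() * dr              # Riemann-sum quadrature
```

For many-electron densities T_W is only a lower bound on the true kinetic energy, which is why the nonlocal corrections discussed in the abstract are needed.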

  7. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD).

    Science.gov (United States)

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-07

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.

  8. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD)

    International Nuclear Information System (INIS)

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-01

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within ∼0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD50, and conversely m and TD50 are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d_ref, n, v_eff and the Niemierko equivalent uniform dose (EUD), where d_ref and v_eff are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
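The LKB pipeline underlying these formulas can be sketched numerically: reduce a dose-volume histogram to a generalized EUD with volume parameter n, then evaluate the Lyman probit as a function of EUD. The DVH bins and organ parameters below are purely illustrative, and the paper's single-parameter exponential replacement for the probit is not reproduced here:

```python
import numpy as np
from scipy.stats import norm

def eud(doses, volumes, n):
    """Niemierko generalized EUD from a differential DVH:
    EUD = (sum_i v_i * d_i^(1/n))^n, with fractional volumes v_i."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()
    return np.sum(v * np.asarray(doses, dtype=float) ** (1.0 / n)) ** n

def ntcp_lkb(eud_gy, td50, m):
    """Lyman probit NTCP as a function of EUD: Phi((EUD - TD50)/(m*TD50))."""
    t = (eud_gy - td50) / (m * td50)
    return norm.cdf(t)

# Illustrative (non-clinical) OAR parameters and a coarse hypothetical DVH.
doses = np.array([10.0, 30.0, 50.0, 70.0])   # Gy bins
vols  = np.array([0.4, 0.3, 0.2, 0.1])       # fractional volumes
d_eq = eud(doses, vols, n=0.1)               # small n: serial-like organ
p = ntcp_lkb(d_eq, td50=76.9, m=0.16)
```

With a small n the EUD is pulled toward the maximum dose, reflecting the serial-organ volume effect the Lyman n parameter encodes.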

  9. Alternative probability theories for cognitive psychology.

    Science.gov (United States)

    Narens, Louis

    2014-01-01

    Various proposals for generalizing event spaces for probability functions have been put forth in the mathematical, scientific, and philosophic literatures. In cognitive psychology such generalizations are used for explaining puzzling results in decision theory and for modeling the influence of context effects. This commentary discusses proposals for generalizing probability theory to event spaces that are not necessarily Boolean algebras. Two prominent examples are quantum probability theory, which is based on the set of closed subspaces of a Hilbert space, and topological probability theory, which is based on the set of open sets of a topology. Both have been applied to a variety of cognitive situations. This commentary focuses on how event space properties can influence probability concepts and impact cognitive modeling. Copyright © 2013 Cognitive Science Society, Inc.

  10. Probability density functions of photochemicals over a coastal area of Northern Italy

    International Nuclear Information System (INIS)

    Georgiadis, T.; Fortezza, F.; Alberti, L.; Strocchi, V.; Marani, A.; Dal Bo', G.

    1998-01-01

    The present paper surveys the findings of experimental studies and analyses of statistical probability density functions (PDFs) applied to air pollutant concentrations to provide an interpretation of the ground-level distributions of photochemical oxidants in the coastal area of Ravenna (Italy). The atmospheric-pollution data set was collected from the local environmental monitoring network for the period 1978-1989. Results suggest that the statistical distribution of surface ozone, once normalised over the solar radiation PDF for the whole measurement period, follows a log-normal law as found for other pollutants. Although the Weibull distribution also offers a good fit of the experimental data, the area's meteorological features seem to favour the former distribution once the statistical index estimates have been analysed. Local transport phenomena are discussed to explain the data tail trends

  11. Application of an excited state LDA exchange energy functional for the calculation of transition energy of atoms within time-independent density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Shamim, Md; Harbola, Manoj K, E-mail: sami@iitk.ac.i, E-mail: mkh@iitk.ac.i [Department of Physics, Indian Institute of Technology, Kanpur 208 016 (India)

    2010-11-14

    Transition energies of a new class of excited states (two-gap systems) of various atoms are calculated in time-independent density functional formalism by using a recently proposed local density approximation exchange energy functional for excited states. It is shown that the excitation energies calculated with this functional compare well with those calculated with exact exchange theories.

  12. Application of an excited state LDA exchange energy functional for the calculation of transition energy of atoms within time-independent density functional theory

    International Nuclear Information System (INIS)

    Shamim, Md; Harbola, Manoj K

    2010-01-01

    Transition energies of a new class of excited states (two-gap systems) of various atoms are calculated in time-independent density functional formalism by using a recently proposed local density approximation exchange energy functional for excited states. It is shown that the excitation energies calculated with this functional compare well with those calculated with exact exchange theories.

  13. Classical probabilities for Majorana and Weyl spinors

    International Nuclear Information System (INIS)

    Wetterich, C.

    2011-01-01

    Highlights: → Map of classical statistical Ising model to fermionic quantum field theory. → Lattice-regularized real Grassmann functional integral for single Weyl spinor. → Emerging complex structure characteristic for quantum physics. → A classical statistical ensemble describes a quantum theory. - Abstract: We construct a map between the quantum field theory of free Weyl or Majorana fermions and the probability distribution of a classical statistical ensemble for Ising spins or discrete bits. More precisely, a Grassmann functional integral based on a real Grassmann algebra specifies the time evolution of the real wave function q_τ(t) for the Ising states τ. The time-dependent probability distribution of a generalized Ising model is obtained as p_τ(t) = q_τ²(t). The functional integral employs a lattice regularization for single Weyl or Majorana spinors. We further introduce the complex structure characteristic for quantum mechanics. Probability distributions of the Ising model which correspond to one or many propagating fermions are discussed explicitly. Expectation values of observables can be computed equivalently in the classical statistical Ising model or in the quantum field theory for fermions.

  14. Free probability and random matrices

    CERN Document Server

    Mingo, James A

    2017-01-01

    This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.

  15. Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks.

    Science.gov (United States)

    Zhang, Jing; Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho

    2017-09-15

    In wireless powered communication networks (WPCNs), it is essential to study energy efficiency fairness in order to evaluate the balance between nodes receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCNs. The main idea is to use stochastic geometry to derive the mean proportional fairness utility function with respect to user association probability and receive threshold. Subsequently, we prove that the relaxed proportional fairness utility function is concave in the user association probability and the receive threshold, respectively. At the same time, a sub-optimal algorithm exploiting an alternating optimization approach is proposed. Through numerical simulations, we demonstrate that our sub-optimal algorithm obtains a result close to optimal energy efficiency proportional fairness with a significant reduction in computational complexity.

  16. Probability distribution functions of turbulence in seepage-affected alluvial channel

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Anurag; Kumar, Bimlesh, E-mail: anurag.sharma@iitg.ac.in, E-mail: bimk@iitg.ac.in [Department of Civil Engineering, Indian Institute of Technology Guwahati, 781039 (India)

    2017-02-15

    The present experimental study is carried out on the probability distribution functions (PDFs) of turbulent flow characteristics within near-bed and away-from-bed surfaces for both no-seepage and seepage flow. Laboratory experiments were conducted in a plane sand bed for no seepage (NS), 10% seepage (10%S) and 15% seepage (15%S) cases. The experimental calculation of the PDFs of turbulent parameters such as Reynolds shear stress, velocity fluctuations, and bursting events is compared with the theoretical expression obtained from a Gram–Charlier (GC)-based exponential distribution. Experimental observations follow the computed PDF distributions for both no-seepage and seepage cases. The Jensen-Shannon divergence (JSD) is used to measure the similarity between theoretical and experimental PDFs. The value of JSD for the PDFs of velocity fluctuations lies between 0.0005 and 0.003, while the JSD value for the PDFs of Reynolds shear stress varies between 0.001 and 0.006. Even with the application of seepage, the PDF distributions of bursting events, sweeps and ejections are well characterized by the exponential distribution of the GC series, except for a slight deflection of inward and outward interactions, which may be due to weaker events. The value of JSD for outward and inward interactions ranges from 0.0013 to 0.032, while the JSD value for sweep and ejection events varies between 0.0001 and 0.0025. A theoretical expression for the PDF of turbulence intensity is developed in the present study, which agrees well with the experimental observations, with JSD between 0.007 and 0.015. The work presented is potentially applicable to the probability distribution of mobile-bed sediments in seepage-affected alluvial channels typically characterized by the various turbulent parameters. The purpose of PDF estimation from experimental data is that it provides a complete numerical description in the areas of turbulent flow at either a single point or a finite number of points.
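The JSD comparison between an empirical histogram and a fitted PDF can be sketched as follows. The data are synthetic, and a plain Gaussian stands in for the study's Gram–Charlier-based distribution:

```python
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence between two discretized PDFs
    (probability masses on a common grid), in nats; 0 means identical."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0                      # 0 * log 0 := 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(2)
# Stand-in for normalized velocity fluctuations (synthetic sample).
samples = rng.normal(0.0, 1.0, 50_000)
edges = np.linspace(-4.0, 4.0, 81)
hist, _ = np.histogram(samples, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theory = np.exp(-0.5 * centers**2) / np.sqrt(2.0 * np.pi)

# Convert densities to per-bin probability masses before comparing.
widths = np.diff(edges)
d = jsd(hist * widths, theory * widths)
```

Small values of `d` (here well below the 0.0005-0.03 range quoted in the abstract) indicate close agreement between the empirical and theoretical PDFs.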

  17. Kinetic-energy density functional: Atoms and shell structure

    International Nuclear Information System (INIS)

    Garcia-Gonzalez, P.; Alvarellos, J.E.; Chacon, E.

    1996-01-01

    We present a nonlocal kinetic-energy functional which includes an anisotropic average of the density through a symmetrization procedure. This functional allows a better description of the nonlocal effects of the electron system. The main consequence of the symmetrization is the appearance of a clear shell structure in the atomic density profiles, obtained after the minimization of the total energy. Although previous results with some of the nonlocal kinetic functionals have given incipient structures for heavy atoms, only our functional shows a clear shell structure for most of the atoms. The atomic total energies have a good agreement with the exact calculations. Discussion of the chemical potential and the first ionization potential in atoms is included. The functional is also extended to spin-polarized systems. copyright 1996 The American Physical Society

  18. The ground state energy of a classical gas

    International Nuclear Information System (INIS)

    Conlon, J.G.

    1983-01-01

    The ground state energy of a classical gas is treated using a probability function for the positions of the particles and a potential function. The lower bound for the energy when the particle number is large is defined as the ground state energy. The Coulomb gas consisting of positive and negative particles is also treated (fixed and variable density cases); the stability of the relativistic system is investigated as well. (H.B.)

  19. Interactive design of probability density functions for shape grammars

    KAUST Repository

    Dang, Minh; Lienhard, Stefan; Ceylan, Duygu; Neubert, Boris; Wonka, Peter; Pauly, Mark

    2015-01-01

    A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density

  20. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  1. Survival probability for diffractive dijet production in p anti p collisions from next-to-leading order calculations

    International Nuclear Information System (INIS)

    Klasen, M.; Kramer, G.

    2009-08-01

    We perform next-to-leading order calculations of the single-diffractive and non-diffractive cross sections for dijet production in proton-antiproton collisions at the Tevatron. By comparing their ratio to the data published by the CDF collaboration for two different center-of-mass energies, we deduce the rapidity-gap survival probability as a function of the momentum fraction of the parton in the antiproton. Assuming Regge factorization, this probability can be interpreted as a suppression factor for the diffractive structure function measured in deep-inelastic scattering at HERA. In contrast to the observations for photoproduction, the suppression factor in proton-antiproton collisions depends on the momentum fraction of the parton in the Pomeron even at next-to-leading order. (orig.)

  2. Functional derivative of noninteracting kinetic energy density functional

    International Nuclear Information System (INIS)

    Liu Shubin; Ayers, Paul W.

    2004-01-01

    Proofs from different theoretical frameworks, namely, the Hohenberg-Kohn theorems, the Kohn-Sham scheme, and the first-order density matrix representation, have been presented in this paper to show that the functional derivative of the noninteracting kinetic energy density functional can be uniquely expressed as the negative of the Kohn-Sham effective potential, arbitrary only up to an additive orbital-independent constant. Key points leading to the current result, as well as confusion about the quantity in the literature, are briefly discussed.

  3. Generalized Probability-Probability Plots

    NARCIS (Netherlands)

    Mushkudiani, N.A.; Einmahl, J.H.J.

    2004-01-01

    We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem, for real-valued data. These plots, which are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P

  4. Probability Aggregates in Probability Answer Set Programming

    OpenAIRE

    Saad, Emad

    2013-01-01

    Probability answer set programming is a declarative programming that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...

  5. Development and evaluation of probability density functions for a set of human exposure factors

    Energy Technology Data Exchange (ETDEWEB)

    Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.

    1999-06-01

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999, will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  6. Development and evaluation of probability density functions for a set of human exposure factors

    International Nuclear Information System (INIS)

    Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.

    1999-01-01

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999, will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  7. The Probability of Neonatal Respiratory Distress Syndrome as a Function of Gestational Age and Lecithin/Sphingomyelin Ratio

    Science.gov (United States)

    St. Clair, Caryn; Norwitz, Errol R.; Woensdregt, Karlijn; Cackovic, Michael; Shaw, Julia A.; Malkus, Herbert; Ehrenkranz, Richard A.; Illuzzi, Jessica L.

    2011-01-01

    We sought to define the risk of neonatal respiratory distress syndrome (RDS) as a function of both lecithin/sphingomyelin (L/S) ratio and gestational age. Amniotic fluid L/S ratio data were collected from consecutive women undergoing amniocentesis for fetal lung maturity at Yale-New Haven Hospital from January 1998 to December 2004. Women were included in the study if they delivered a live-born, singleton, nonanomalous infant within 72 hours of amniocentesis. The probability of RDS was modeled using multivariate logistic regression with L/S ratio and gestational age as predictors. A total of 210 mother-neonate pairs (8 RDS, 202 non-RDS) met criteria for analysis. Both gestational age and L/S ratio were independent predictors of RDS. A probability of RDS of 3% or less was noted at an L/S ratio cutoff of ≥3.4 at 34 weeks, ≥2.6 at 36 weeks, ≥1.6 at 38 weeks, and ≥1.2 at term. Under 34 weeks of gestation, the prevalence of RDS was so high that a probability of 3% or less was not observed by this model. These data describe a means of stratifying the probability of neonatal RDS using both gestational age and the L/S ratio and may aid in clinical decision making concerning the timing of delivery. PMID:18773379
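
    The risk model in this abstract is a two-predictor logistic regression. A minimal sketch of that functional form follows; the coefficients are hypothetical placeholders chosen only to roughly reproduce the quoted ~3% cutoffs, not the study's fitted values.

```python
import math

def rds_probability(gestational_age_weeks, ls_ratio,
                    b0=38.0, b_ga=-1.0, b_ls=-2.2):
    """Probability of neonatal RDS from a two-predictor logistic model.

    The logit is linear in gestational age and L/S ratio, as in the
    abstract; b0, b_ga and b_ls are illustrative placeholder values.
    """
    logit = b0 + b_ga * gestational_age_weeks + b_ls * ls_ratio
    return 1.0 / (1.0 + math.exp(-logit))

# Risk falls with both advancing gestational age and rising L/S ratio:
print(rds_probability(34, 3.4))  # near the ~3% region with these placeholders
print(rds_probability(38, 1.6))
```

    The clinically useful feature of such a model is exactly what the abstract describes: for each gestational age one can read off the L/S cutoff at which the predicted probability drops below a chosen threshold.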

  8. Exact probability function for bulk density and current in the asymmetric exclusion process

    Science.gov (United States)

    Depken, Martin; Stinchcombe, Robin

    2005-03-01

    We examine the asymmetric simple exclusion process with open boundaries, a paradigm of driven diffusive systems, having a nonequilibrium steady-state transition. We provide a full derivation and expanded discussion and digression on results previously reported briefly in M. Depken and R. Stinchcombe, Phys. Rev. Lett. 93, 040602 (2004). In particular we derive an exact form for the joint probability function for the bulk density and current, both for finite systems, and also in the thermodynamic limit. The resulting distribution is non-Gaussian, and while the fluctuations in the current are continuous at the continuous phase transitions, the density fluctuations are discontinuous. The derivations are done by using the standard operator algebraic techniques and by introducing a modified version of the original operator algebra. As a by-product of these considerations we also arrive at a very simple way of calculating the normalization constant appearing in the standard treatment with the operator algebra. Like the partition function in equilibrium systems, this normalization constant is shown to completely characterize the fluctuations, albeit in a very different manner.

  9. The perception of probability.

    Science.gov (United States)

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  10. Surface energy and work function of elemental metals

    DEFF Research Database (Denmark)

    Skriver, Hans Lomholt; Rosengaard, N. M.

    1992-01-01

    We have performed an ab initio study of the surface energy and the work function for six close-packed surfaces of 40 elemental metals by means of a Green’s-function technique, based on the linear-muffin-tin-orbitals method within the tight-binding and atomic-sphere approximations. The results are in excellent agreement with a recent full-potential, all-electron, slab-supercell calculation of surface energies and work functions for the 4d metals. The present calculations explain the trend exhibited by the surface energies of the alkali, alkaline earth, divalent rare-earth, 3d, 4d, and 5d transition and noble metals, as derived from the surface tension of liquid metals. In addition, they give work functions which agree with the limited experimental data obtained from single crystals to within 15%, and explain the smooth behavior of the experimental work functions of polycrystalline samples.

  11. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    International Nuclear Information System (INIS)

    Bakosi, Jozsef; Ristorcelli, Raymond J.

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.
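
    The beta distribution used above as the material-mixing model is convenient because it lives on [0, 1] (a mixture fraction) and can be strongly asymmetric. A minimal sketch of its density, written directly with the gamma function rather than any particular statistics library; the parameter pairs are illustrative only:

```python
import math

def beta_pdf(x, a, b):
    """Beta density on (0, 1), normalized via the gamma function."""
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * x**(a - 1) * (1.0 - x)**(b - 1)

# Midpoint-rule check that the density integrates to ~1, for shapes that
# are left-skewed, right-skewed, and bimodal at the endpoints:
n = 20000
for a, b in [(2.0, 5.0), (5.0, 2.0), (0.5, 0.5)]:
    integral = sum(beta_pdf((i + 0.5) / n, a, b) for i in range(n)) / n
    print(a, b, round(integral, 3))
```

    The a != b cases illustrate the "mixing asymmetry" the abstract refers to: the density piles mass toward one end of the mixture-fraction interval.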

  12. On the Hitting Probability of Max-Stable Processes

    OpenAIRE

    Hofmann, Martin

    2012-01-01

    The probability that a max-stable process η in C[0, 1] with identical marginal distribution function F hits x ∈ R with 0 < F(x) < 1 is the hitting probability of x. We show that the hitting probability is always positive, unless the components of η are completely dependent. Moreover, we consider the event that the paths of a standard MSP hit some x ∈ R twice, and we give a sufficient condition for a positive probability of this event.

  13. Hamiltonian theories quantization based on a probability operator

    International Nuclear Information System (INIS)

    Entral'go, E.E.

    1986-01-01

    The quantization method with a linear mapping of classical coordinate-momentum-time functions Λ(q,p,t) onto quantum operators in a space of quantum states ψ is considered. The probability operator satisfies a system of equations representing the principles of dynamical and canonical correspondence between the classical and quantum theories. The quantization based on a probability operator leads to a quantum theory with a nonnegative joint coordinate-momentum distribution function for any state ψ. The main consequences of quantum mechanics with a probability operator are discussed in comparison with the generally accepted quantum and classical theories. It is shown that a probability operator leads to the appearance of some new notions called ''subquantum'' ones. Hence the quantum theory with a probability operator does not pretend to any complete description of physical reality in terms of classical variables and for this reason contains no problems like the Einstein-Podolsky-Rosen paradox. The results of some concrete problems are given: a free particle, a harmonic oscillator, an electron in the Coulomb field. These results give hope of the possibility of an experimental verification of the quantization based on a probability operator

  14. An introduction to probability and stochastic processes

    CERN Document Server

    Melsa, James L

    2013-01-01

    Geared toward college seniors and first-year graduate students, this text is designed for a one-semester course in probability and stochastic processes. Topics covered in detail include probability theory, random variables and their functions, stochastic processes, linear system response to stochastic processes, Gaussian and Markov processes, and stochastic differential equations. 1973 edition.

  15. Theory of inelastic ion-atom scattering at low and intermediate energies

    Science.gov (United States)

    Schmid, G. B.; Garcia, J. D.

    1977-01-01

    Ab initio calculations are presented of inelastic energy loss and ionization phenomena associated with Ar(+)-Ar collisions at small distances of closest approach and for laboratory collision energies ranging from several keV to several hundred keV. Outer-shell excitations are handled statistically; inner-shell excitations are calculated from the viewpoint of quasidiabatic molecular orbital promotion. Auger electron yield, average state of ionization, and average inelastic energy loss are calculated per collision as a function of distance of closest approach of the collision partners for several laboratory collision energies. Average charge-state probabilities per collision partner are calculated as a function of the average inelastic energy loss per atom. It is shown that the structure in the data is due to the underlying structure in the inner-shell independent-electron quasimolecular promotion probabilities.

  16. 134Cs emission probabilities determination by gamma spectrometry

    Science.gov (United States)

    de Almeida, M. C. M.; Poledna, R.; Delgado, J. U.; Silva, R. L.; Araujo, M. T. F.; da Silva, C. J.

    2018-03-01

    The National Laboratory for Ionizing Radiation Metrology (LNMRI/IRD/CNEN) of Rio de Janeiro performed primary and secondary standardizations of different radionuclides, reaching satisfactory uncertainties. A solution of the radionuclide 134Cs was purchased from a commercial supplier for the determination of the emission probabilities of some of its energies. 134Cs is a beta-gamma emitter with a half-life of 754 days. This radionuclide is used as a standard in environmental, water and food control. It is also important for germanium detector calibration. The gamma emission probabilities (Pγ) were determined for several energies of 134Cs by the efficiency curve method, and the absolute Pγ uncertainties obtained were below 1% (k=1).

  17. Zeta-function approach to Casimir energy with singular potentials

    International Nuclear Information System (INIS)

    Khusnutdinov, Nail R.

    2006-01-01

    In the framework of the zeta-function approach, the Casimir energy is analyzed for three simple model systems: a single delta potential, a step-function potential, and three delta potentials. It is shown that the energy contains contributions which are peculiar to the potentials. It is suggested to renormalize the energy using the condition that the energy of infinitely separated potentials is zero, which corresponds to subtracting all terms of the asymptotic expansion of the zeta-function. The energy obtained in this way obeys all physically reasonable conditions. It is finite in the Dirichlet limit, and it may be attractive or repulsive depending on the strength of the potential. The effective action is calculated, and it is shown that a surface contribution appears. The renormalization of the effective action is discussed.

  18. Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks

    Science.gov (United States)

    Frahm, Klaus M.; Shepelyansky, Dima L.

    2014-04-01

    We use the methods of quantum chaos and Random Matrix Theory for analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by a logarithm of PageRank probability at a given node. After the standard energy level unfolding procedure we establish that the nearest spacing distribution of PageRank probabilities is described by the Poisson law typical for integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French and German. We argue that due to absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that the nearby PageRank probabilities fluctuate as random independent variables.
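
    The unfolding-plus-spacing analysis described above can be sketched on synthetic data. The snippet below uses lognormally distributed stand-in probabilities, so the "energy levels" E = -log p are Gaussian and the exact unfolding map is the Gaussian CDF; the paper of course analyses the real Twitter and Wikipedia PageRank vectors with an empirical unfolding.

```python
import math
import numpy as np

rng = np.random.default_rng(7)
N = 50000

# Effective energy levels E = -log(p) for lognormal stand-in probabilities:
E = np.sort(rng.normal(loc=10.0, scale=2.0, size=N))

# Unfolding: map each level through the cumulative level density (here the
# exact Gaussian CDF), so the unfolded sequence has unit mean spacing.
phi = np.array([0.5 * (1 + math.erf((e - 10.0) / (2.0 * math.sqrt(2))))
                for e in E])
unfolded = N * phi
s = np.diff(unfolded)

# For uncorrelated levels the nearest-spacing law is Poissonian,
# p(s) = exp(-s); check the tail P(s > 1) against exp(-1):
print(round(s.mean(), 3))             # ~1 by construction
print(round(float(np.mean(s > 1.0)), 3), round(math.exp(-1), 3))
```

    The absence of level repulsion, p(0) = 1 rather than p(0) = 0, is what allows nearby PageRank values to interchange easily, as the abstract argues.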

  19. On the evolution of the density probability density function in strongly self-gravitating systems

    International Nuclear Information System (INIS)

    Girichidis, Philipp; Konstandin, Lukas; Klessen, Ralf S.; Whitworth, Anthony P.

    2014-01-01

    The time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere. We demonstrate that a pressure-free collapse results in a power-law tail on the high-density side of the PDF. The slope quickly asymptotes to the functional form P_V(ρ) ∝ ρ^(-1.54) for the (volume-weighted) PDF and P_M(ρ) ∝ ρ^(-0.54) for the corresponding mass-weighted distribution. From the simple approximation of the PDF we derive analytic descriptions for mass accretion, finding that dynamically quiet systems with narrow density PDFs lead to retarded star formation and low star formation rates (SFRs). Conversely, strong turbulent motions that broaden the PDF accelerate the collapse causing a bursting mode of star formation. Finally, we compare our theoretical work with observations. The measured SFRs are consistent with our model during the early phases of the collapse. Comparison of observed column density PDFs with those derived from our model suggests that observed star-forming cores are roughly in free-fall.
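
    The two quoted exponents differ by exactly one because the mass-weighted PDF is the volume-weighted PDF reweighted by ρ (up to a normalization). A quick numerical check of this relation on the tail; the density grid is an arbitrary choice:

```python
import numpy as np

# Volume-weighted high-density tail from the abstract: P_V ∝ ρ^(-1.54).
rho = np.logspace(1, 4, 200)
P_V = rho**(-1.54)

# Mass weighting multiplies the volume-weighted PDF by ρ, shifting the
# tail exponent by +1: P_M ∝ ρ P_V ∝ ρ^(-0.54).
P_M = rho * P_V

slope_V = np.polyfit(np.log(rho), np.log(P_V), 1)[0]
slope_M = np.polyfit(np.log(rho), np.log(P_M), 1)[0]
print(slope_V, slope_M)  # -1.54 and -0.54
```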

  20. A simulation for energy dissipation in nuclear reactions

    International Nuclear Information System (INIS)

    Mshelia, E.D.; Ngadda, Y.H.

    1989-01-01

    A model for energy dissipation is presented which demonstrates energy transfer from a collective degree of freedom, represented by free motion, into intrinsic modes, represented by four coupled oscillators. The quantum mechanical probability amplitude for internal excitation is expressed as a multiple integral of a product of translational and intrinsic wavefunctions and exactly solved analytically. Its numerical values as a function of quantities of physical interest have been calculated, represented graphically and discussed. The results show that the probability distributions are peaked. (author)

  1. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  2. Continuous time random walk model with asymptotical probability density of waiting times via inverse Mittag-Leffler function

    Science.gov (United States)

    Liang, Yingjie; Chen, Wen

    2018-04-01

    The mean squared displacement (MSD) of the traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model is employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of very long waiting time in the case of the inverse Mittag-Leffler function has the largest probability compared with the power law model and the logarithmic function model. The Monte Carlo simulations of one dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting the general ultraslow random motion.

  3. Probability densities and the random variable transformation theorem

    International Nuclear Information System (INIS)

    Ramshaw, J.D.

    1985-01-01

    D. T. Gillespie recently derived a random variable transformation theorem relating to the joint probability densities of functionally dependent sets of random variables. The present author points out that the theorem can be derived as an immediate corollary of a simpler and more fundamental relation. In this relation the probability density is represented as a delta function averaged over an unspecified distribution of unspecified internal random variables. The random variable transformation is derived from this relation

  4. The probability of the false vacuum decay

    International Nuclear Information System (INIS)

    Kiselev, V.; Selivanov, K.

    1983-01-01

    The closed expression for the probability of the false vacuum decay in (1+1) dimensions is given. The probability of false vacuum decay is expressed as the product of an exponential quasiclassical factor and a functional determinant of the given form. The method for calculation of this determinant is developed and a complete answer for (1+1) dimensions is given.

  5. Negative probability in the framework of combined probability

    OpenAIRE

    Burgin, Mark

    2013-01-01

    Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...

  6. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  7. Zirconium and Yttrium (p, d) Surrogate Nuclear Reactions: Measurement and determination of gamma-ray probabilities: Experimental Physics Report

    Energy Technology Data Exchange (ETDEWEB)

    Burke, J. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hughes, R. O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Escher, J. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Scielzo, N. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Casperson, R. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ressler, J. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Saastamoinen, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ota, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Park, H. I. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ross, T. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McCleskey, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McCleskey, E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Austin, R. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rapisarda, G. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-21

    This technical report documents the surrogate reaction method and the experimental results used to determine the desired neutron-induced 87Y(n,g) cross section, benchmarked against the known 90Zr(n,g) cross section. The experiment was performed at the STARLiTeR apparatus located at the Texas A&M Cyclotron Institute, using the K150 Cyclotron to produce a 28.56 MeV proton beam. The proton beam impinged on Y and Zr targets to produce the nuclear reactions 89Y(p,d)88Y and 92Zr(p,d)91Zr. Both particle singles data and particle-gamma ray coincidence data were measured during the experiment. These data were used to determine the γ-ray probability as a function of energy for these reactions. The results for the γ-ray probabilities as a function of energy for both nuclei are documented here. For completeness, extensive tabulated and graphical results are provided in the appendices.

  8. Technical report. The application of probability-generating functions to linear-quadratic radiation survival curves.

    Science.gov (United States)

    Kendal, W S

    2000-04-01

    To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but not with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
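As an illustrative aside (not part of the record), the Poisson lethal-lesion model described above is short enough to sketch directly: survival is the probability of zero lesions, which reproduces the linear-quadratic equation. The α and β values below are arbitrary placeholders, not the paper's fitted constants.

```python
import math

def lethal_lesion_pmf(n, dose, alpha, beta):
    """P(n lethal lesions): Poisson with mean alpha*D + beta*D^2."""
    mean = alpha * dose + beta * dose ** 2
    return math.exp(-mean) * mean ** n / math.factorial(n)

def surviving_fraction(dose, alpha, beta):
    """Clonogenic survival = probability of zero lethal lesions
    = exp(-(alpha*D + beta*D^2)), the linear-quadratic equation."""
    return lethal_lesion_pmf(0, dose, alpha, beta)

alpha, beta = 0.3, 0.03   # illustrative constants (Gy^-1, Gy^-2)
for d in (0.0, 2.0, 4.0):
    print(d, surviving_fraction(d, alpha, beta))
```

The zero-lesion term of the Poisson PMF collapses to the familiar exp(-αD - βD²) form without any extra fitting.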

  9. Energy transfer upon collision of selectively excited CO2 molecules: State-to-state cross sections and probabilities for modeling of atmospheres and gaseous flows.

    Science.gov (United States)

    Lombardi, A; Faginas-Lago, N; Pacifici, L; Grossi, G

    2015-07-21

    Carbon dioxide molecules can store and release tens of kcal/mol upon collisions, and such an energy transfer strongly influences the energy disposal and the chemical processes in gases under the extreme conditions typical of plasmas and hypersonic flows. Moreover, the energy transfer involving CO2 characterizes the global dynamics of the Earth-atmosphere system and the energy balance of other planetary atmospheres. Contemporary developments in kinetic modeling of gaseous mixtures are connected to progress in the description of the energy transfer, and, in particular, attempts to include non-equilibrium effects require considering state-specific energy exchanges. A systematic study of the state-to-state vibrational energy transfer in CO2 + CO2 collisions is the focus of the present work, aided by a theoretical and computational tool based on quasiclassical trajectory simulations and an accurate full-dimensional model of the intermolecular interactions. In this model, the accuracy of the description of the intermolecular forces (which determine the probability of energy transfer in molecular collisions) is enhanced by explicitly accounting for the specific effects of the distortion of the CO2 structure due to vibrations. Results show that these effects are important for the energy transfer probabilities. Moreover, the role of rotational and vibrational degrees of freedom is found to be dominant in the energy exchange, while the average contribution of translations, under the temperature and energy conditions considered, is negligible. Remarkably, the intramolecular energy transfer only involves stretching and bending, unless one of the colliding molecules has an initial symmetric stretching quantum number greater than a threshold value estimated to be equal to 7.

  10. Enhancement of biodiversity in energy farming: towards a functional approach

    International Nuclear Information System (INIS)

    Londo, M.; Dekker, J.

    1997-01-01

    If biomass is to become a substantial sustainable energy source, with dedicated energy crops grown on a large scale, agricultural land use and the environment will be affected. Among these effects, biodiversity deserves special attention. The enhancement of biodiversity in energy farming via standard setting is the overall purpose of this project. In this study, the potential functionality of biodiversity in energy farming is proposed as a way of operationalising the rather abstract and broad concept of biodiversity. Functions of biodiversity are reviewed, and examples of functions are worked out, based on the current literature on nature in energy farming systems. (author)

  11. Approximations to the Probability of Failure in Random Vibration by Integral Equation Methods

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    Close approximations to the first passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first passage probability density function and the distribution function for the time interval spent below a barrier before outcrossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval, and hence for the first passage probability density. The results of the theory agree well with simulation results for narrow banded processes dominated by a single frequency, as well as for bimodal processes with 2 dominating frequencies in the structural response.
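The record benchmarks its integral-equation approximations against simulation. As an illustrative, hypothetical stand-in (not the authors' method or process model), a first-passage probability can be estimated by Monte Carlo for a discretized stationary Gaussian AR(1) process; every parameter below is an assumption for demonstration.

```python
import math, random

def first_passage_probability(barrier, n_steps, phi, n_paths, seed=1):
    """Monte Carlo estimate of P(process exceeds `barrier` within n_steps)
    for a stationary AR(1) Gaussian process with unit variance:
    x_{k+1} = phi * x_k + sqrt(1 - phi^2) * w_k,  w_k ~ N(0, 1)."""
    rng = random.Random(seed)
    sigma_w = math.sqrt(1.0 - phi ** 2)
    crossed = 0
    for _ in range(n_paths):
        x = rng.gauss(0.0, 1.0)          # start from the stationary distribution
        for _ in range(n_steps):
            x = phi * x + sigma_w * rng.gauss(0.0, 1.0)
            if x > barrier:
                crossed += 1
                break
    return crossed / n_paths

p = first_passage_probability(barrier=2.0, n_steps=200, phi=0.9, n_paths=2000)
print(p)
```

Raising the barrier lowers the estimated first-passage probability, as expected.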

  12. Functionalization of graphene for efficient energy conversion and storage.

    Science.gov (United States)

    Dai, Liming

    2013-01-15

    As global energy consumption accelerates at an alarming rate, the development of clean and renewable energy conversion and storage systems has become more important than ever. Although the efficiency of energy conversion and storage devices depends on a variety of factors, their overall performance strongly relies on the structure and properties of the component materials. Nanotechnology has opened up new frontiers in materials science and engineering to meet this challenge by creating new materials, particularly carbon nanomaterials, for efficient energy conversion and storage. As a building block for carbon materials of all other dimensionalities (such as 0D buckyball, 1D nanotube, 3D graphite), the two-dimensional (2D) single atomic carbon sheet of graphene has emerged as an attractive candidate for energy applications due to its unique structure and properties. Like other materials, however, a graphene-based material that possesses desirable bulk properties rarely features the surface characteristics required for certain specific applications. Therefore, surface functionalization is essential, and researchers have devised various covalent and noncovalent chemistries for making graphene materials with the bulk and surface properties needed for efficient energy conversion and storage. In this Account, I summarize some of our new ideas and strategies for the controlled functionalization of graphene for the development of efficient energy conversion and storage devices, such as solar cells, fuel cells, supercapacitors, and batteries. The dangling bonds at the edge of graphene can be used for the covalent attachment of various chemical moieties while the graphene basal plane can be modified via either covalent or noncovalent functionalization. The asymmetric functionalization of the two opposite surfaces of individual graphene sheets with different moieties can lead to the self-assembly of graphene sheets into hierarchically structured materials. Judicious

  13. Functional materials for energy-efficient buildings

    Directory of Open Access Journals (Sweden)

    Ebert H.-P

    2015-01-01

    Full Text Available The substantial improvement of energy efficiency is essential to meet the ambitious energy goals of the EU. About 40% of European energy consumption is attributable to the building sector. Therefore, reducing the energy demand of the existing building stock is one of the key measures for delivering a substantial contribution to reducing the CO2-emissions of our society. Buildings of the future have to be efficient with respect to the energy consumed in construction and operation. Current research activities are focused on the development of functional materials with outstanding thermal and optical properties to provide, for example, slim thermally superinsulated facades, highly integrated heat storage systems or adaptive building components. In this context it is important to consider buildings as entities which fulfill energy and comfort claims as well as aesthetic aspects of a sustainable architecture.

  14. Functional materials for energy-efficient buildings

    Science.gov (United States)

    Ebert, H.-P.

    2015-08-01

    The substantial improvement of energy efficiency is essential to meet the ambitious energy goals of the EU. About 40% of European energy consumption is attributable to the building sector. Therefore, reducing the energy demand of the existing building stock is one of the key measures for delivering a substantial contribution to reducing the CO2-emissions of our society. Buildings of the future have to be efficient with respect to the energy consumed in construction and operation. Current research activities are focused on the development of functional materials with outstanding thermal and optical properties to provide, for example, slim thermally superinsulated facades, highly integrated heat storage systems or adaptive building components. In this context it is important to consider buildings as entities which fulfill energy and comfort claims as well as aesthetic aspects of a sustainable architecture.

  15. Decomposition of conditional probability for high-order symbolic Markov chains

    Science.gov (United States)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
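One of the two approaches the record mentions, plain likelihood estimation of the conditional probability function, can be sketched by counting context/successor occurrences in a symbolic sequence. This is only the counting baseline, not the authors' memory-function decomposition; the example sequence is invented.

```python
from collections import Counter

def conditional_probabilities(seq, order):
    """Estimate P(next symbol | previous `order` symbols) by counting
    occurrences in a symbolic sequence (plain likelihood estimation).
    Returns a dict mapping (context..., next_symbol) -> probability."""
    context_counts = Counter()
    joint_counts = Counter()
    for i in range(order, len(seq)):
        ctx = tuple(seq[i - order:i])
        context_counts[ctx] += 1
        joint_counts[ctx + (seq[i],)] += 1
    return {k: v / context_counts[k[:-1]] for k, v in joint_counts.items()}

# A persistent binary sequence: symbols tend to repeat the previous one,
# so P(1 | 1) should exceed P(1 | 0) in the order-1 estimate.
seq = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]
probs = conditional_probabilities(seq, order=1)
print(probs)
```

The estimate becomes data-hungry as the order grows (the number of contexts grows exponentially), which is precisely the gap the record's decomposition into low-order memory-function monomials is meant to bridge.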

  16. Probability Estimation in the Framework of Intuitionistic Fuzzy Evidence Theory

    Directory of Open Access Journals (Sweden)

    Yafei Song

    2015-01-01

    Full Text Available Intuitionistic fuzzy (IF) evidence theory, as an extension of Dempster-Shafer theory of evidence to the intuitionistic fuzzy environment, is exploited to process imprecise and vague information. Since its inception, IF evidence theory has attracted much interest, and many works on belief functions in IF information systems have appeared. Although belief functions on IF sets can handle uncertainty and vagueness well, they are not convenient for decision making. This paper addresses the issue of probability estimation in the framework of IF evidence theory with the aim of supporting rational decisions. Background knowledge about evidence theory, fuzzy sets, and IF sets is first reviewed, followed by an introduction to IF evidence theory. Axiomatic properties of the probability distribution are then proposed to assist our interpretation. Finally, probability estimations based on fuzzy and IF belief functions, together with their proofs, are presented. It is verified that the probability estimation method based on IF belief functions is also potentially applicable to classical evidence theory and fuzzy evidence theory. Moreover, IF belief functions can be combined in a convenient way once they are transformed to interval-valued possibilities.
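In the classical (crisp) setting the record generalizes, the standard route from belief masses to a probability distribution is Smets' pignistic transformation; a minimal sketch is below. This is the classical transform only, not the record's intuitionistic fuzzy method, and the masses are invented.

```python
def pignistic(masses):
    """Smets' pignistic transformation BetP: spread each focal element's
    mass uniformly over its singletons, yielding a probability distribution.
    `masses`: dict mapping frozenset (focal element) -> basic belief mass."""
    betp = {}
    for focal, m in masses.items():
        for element in focal:
            betp[element] = betp.get(element, 0.0) + m / len(focal)
    return betp

# Basic belief assignment on the frame {a, b, c}; masses sum to 1.
masses = {frozenset({'a'}): 0.5,
          frozenset({'a', 'b'}): 0.3,
          frozenset({'a', 'b', 'c'}): 0.2}
print(pignistic(masses))
```

Because every focal mass is fully redistributed over singletons, the output always sums to 1, which is the "rational decision" property the record seeks in the IF setting.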

  17. Evaluation of Presumed Probability-Density-Function Models in Non-Premixed Flames by using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Cao Hong-Jun; Zhang Hui-Qiang; Lin Wen-Yi

    2012-01-01

    Four kinds of presumed probability-density-function (PDF) models for non-premixed turbulent combustion are evaluated in flames with various stoichiometric mixture fractions by using large eddy simulation (LES). The LES code is validated against the experimental data of a classical turbulent jet flame (Sandia flame D). The mean and rms temperatures obtained by the presumed PDF models are compared with the LES results. The β-function model achieves a good prediction for different flames. The rms temperature predicted by the double-δ function model is very small and unphysical in the vicinity of the maximum mean temperature. The clip-Gaussian model and the multi-δ function model give worse predictions on the extremely fuel-rich or fuel-lean sides due to the clipping at the boundary of the mixture fraction space. The results also show that the overall prediction performance of presumed PDF models is better at intermediate stoichiometric mixture fractions than at very small or very large ones.
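The β-function model singled out above presumes a beta PDF for the mixture fraction, parameterized by its resolved mean and variance. A minimal sketch of that standard parameterization (the mean/variance values are illustrative, not from the paper):

```python
import math

def beta_pdf(z, mean, var):
    """Presumed beta-function PDF for mixture fraction z in (0, 1),
    parameterized by its mean and variance.
    Requires 0 < var < mean * (1 - mean)."""
    g = mean * (1.0 - mean) / var - 1.0
    a, b = mean * g, (1.0 - mean) * g
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * z ** (a - 1.0) * (1.0 - z) ** (b - 1.0)

# Check normalization by midpoint-rule quadrature over (0, 1).
n = 10000
integral = sum(beta_pdf((i + 0.5) / n, 0.35, 0.05) for i in range(n)) / n
print(integral)  # close to 1.0
```

In an LES solver, this PDF would be convolved with a flamelet table at each cell; here it is only checked for normalization and first moment.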

  18. On the distribution functions in the quantum mechanics and Wigner functions

    International Nuclear Information System (INIS)

    Kuz'menkov, L.S.; Maksimov, S.G.

    2002-01-01

    The problem of finding distribution functions that lead to the same local values of the particle number, momentum, and energy as in quantum mechanics is formulated and solved. The method is based on the quantum-mechanical determination of the probability density. The derived distribution function coincides with the Wigner function only for spatially homogeneous systems. The Bogolyubov equation chain, the Liouville equation for the quantum distribution functions of any number of particles in the system, and the general expression for the dielectric permittivity tensor of the plasma electron component are obtained.

  19. Kinetic-energy functionals studied by surface calculations

    DEFF Research Database (Denmark)

    Vitos, Levente; Skriver, Hans Lomholt; Kollár, J.

    1998-01-01

    The self-consistent jellium model of metal surfaces is used to study the accuracy of a number of semilocal kinetic-energy functionals for independent particles. It is shown that the poor accuracy exhibited by the gradient expansion approximation and most of the semiempirical functionals in the low density, high gradient limit may be substantially improved by locally including a von Weizsäcker term. Based on this, we propose a simple one-parameter Padé approximation, which reproduces the exact Kohn-Sham surface kinetic energy over the entire range of metallic densities.

  20. Probability distribution function values in mobile phones;Valores de funciones de distribución de probabilidad en el teléfono móvil

    Directory of Open Access Journals (Sweden)

    Luis Vicente Chamorro Marcillllo

    2013-06-01

    Full Text Available Engineering, within its academic and applied forms, as well as any formal research work, requires the use of statistics, and every inferential statistical analysis requires values of probability distribution functions that are generally available in tables. The management of those tables poses physical problems (wasteful transport and consultation) and operational ones (incomplete lists and limited accuracy). The study, “Probability distribution function values in mobile phones”, permitted determining, through a needs survey applied to students involved in statistics studies at Universidad de Nariño, that the best known and most used values correspond to the Chi-Square, Binomial, Student's t, and Standard Normal distributions. Similarly, it showed users' interest in having the values in question within an alternative medium to correct, at least in part, the problems presented by “the famous tables”. To try to contribute to a solution, we built software that allows the values of the most commonly used probability distribution functions to be obtained immediately and dynamically on mobile phones.
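Two of the four distributions the survey identified can be evaluated from standard special functions, which is presumably the kind of computation the record's software performs (the implementation below is a generic sketch, not the authors' code): the normal CDF via the error function, and the chi-square CDF via the series for the regularized lower incomplete gamma function.

```python
import math

def normal_cdf(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def chi_square_cdf(x, k):
    """Chi-square CDF with k degrees of freedom: the regularized lower
    incomplete gamma P(k/2, x/2), computed by its power series."""
    if x <= 0:
        return 0.0
    a = k / 2.0
    term = 1.0 / a
    total = term
    for n in range(1, 200):
        term *= (x / 2.0) / (a + n)
        total += term
    return total * math.exp(-x / 2.0 + a * math.log(x / 2.0) - math.lgamma(a))

print(normal_cdf(1.96))         # about 0.975
print(chi_square_cdf(3.84, 1))  # about 0.95
```

These two calls reproduce the familiar table entries (z = 1.96 for a two-sided 5% test; chi-square critical value 3.84 for 1 degree of freedom).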

  1. Energies and wave functions of an off-centre donor in hemispherical quantum dot: Two-dimensional finite difference approach and Ritz variational principle

    Energy Technology Data Exchange (ETDEWEB)

    Nakra Mohajer, Soukaina; El Harouny, El Hassan [Laboratoire de Physique de la Matière Condensée, Département de Physique, Faculté des Sciences, Université Abdelmalek Essaadi, B.P. 2121 M’Hannech II, 93030 Tétouan (Morocco); Ibral, Asmaa [Equipe d’Optique et Electronique du Solide, Département de Physique, Faculté des Sciences, Université Chouaïb Doukkali, B. P. 20 El Jadida Principale, El Jadida (Morocco); Laboratoire d’Instrumentation, Mesure et Contrôle, Département de Physique, Faculté des Sciences, Université Chouaïb Doukkali, B. P. 20 El Jadida Principale, El Jadida (Morocco); El Khamkhami, Jamal [Laboratoire de Physique de la Matière Condensée, Département de Physique, Faculté des Sciences, Université Abdelmalek Essaadi, B.P. 2121 M’Hannech II, 93030 Tétouan (Morocco); and others

    2016-09-15

    Eigenvalue equation solutions for a hydrogen-like donor impurity, confined in a hemispherical quantum dot deposited on a wetting layer and capped by an insulating matrix, are determined in the framework of the effective mass approximation. Conduction band alignments at the interfaces between the quantum dot and the surrounding materials are described by infinite height barriers. Ground and excited state energies and wave functions are determined analytically, and via a one-dimensional finite difference approach, in the case of an on-center donor. The donor impurity is then moved from the center to the pole of the hemispherical quantum dot, and the eigenvalue equation is solved via the Ritz variational principle, using a trial wave function in which the Coulomb attraction between the electron and the ionized donor is taken into account, and by a two-dimensional finite difference approach. The numerical codes developed give access to the variations of the donor total energy, binding energy, Coulomb correlation parameter, spatial extension and radial probability density with respect to the hemisphere radius and the impurity position inside the quantum dot.
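To illustrate the finite-difference eigenvalue approach mentioned in this record on a case with a known answer (not the record's hemispherical-dot problem), the sketch below finds the ground state of a particle in a 1D infinite well with hbar = m = 1, where the exact energy is π²/2, using inverse iteration with a tridiagonal (Thomas-algorithm) solve.

```python
import math

def ground_state_energy(n=400, iters=60):
    """Ground-state energy of a particle in an infinite 1D well of unit
    width (hbar = m = 1), from the finite-difference Hamiltonian
    H = -(1/2) d^2/dx^2 on n interior grid points. The smallest eigenvalue
    is found by inverse iteration, solving H x = psi each step."""
    h = 1.0 / (n + 1)
    diag = 1.0 / h ** 2          # main diagonal of H
    off = -0.5 / h ** 2          # off-diagonal of H
    psi = [math.sin(math.pi * (i + 1) * h) + 0.1 for i in range(n)]  # rough start
    for _ in range(iters):
        # Thomas algorithm: forward sweep for the tridiagonal system H x = psi.
        c, d = [0.0] * n, [0.0] * n
        c[0], d[0] = off / diag, psi[0] / diag
        for i in range(1, n):
            denom = diag - off * c[i - 1]
            c[i] = off / denom
            d[i] = (psi[i] - off * d[i - 1]) / denom
        # Back substitution.
        x = [0.0] * n
        x[-1] = d[-1]
        for i in range(n - 2, -1, -1):
            x[i] = d[i] - c[i] * x[i + 1]
        norm = math.sqrt(sum(v * v for v in x))
        psi = [v / norm for v in x]
    # Rayleigh quotient <psi|H|psi> gives the eigenvalue.
    e = sum(diag * psi[i] ** 2 for i in range(n))
    e += sum(2.0 * off * psi[i] * psi[i + 1] for i in range(n - 1))
    return e

print(ground_state_energy())   # approaches pi^2 / 2 = 4.9348...
```

The same grid-and-solve pattern extends to 2D and to a confining potential with a Coulomb term, which is the regime the record's codes address.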

  2. Energies and wave functions of an off-centre donor in hemispherical quantum dot: Two-dimensional finite difference approach and Ritz variational principle

    International Nuclear Information System (INIS)

    Nakra Mohajer, Soukaina; El Harouny, El Hassan; Ibral, Asmaa; El Khamkhami, Jamal

    2016-01-01

    Eigenvalue equation solutions for a hydrogen-like donor impurity, confined in a hemispherical quantum dot deposited on a wetting layer and capped by an insulating matrix, are determined in the framework of the effective mass approximation. Conduction band alignments at the interfaces between the quantum dot and the surrounding materials are described by infinite height barriers. Ground and excited state energies and wave functions are determined analytically, and via a one-dimensional finite difference approach, in the case of an on-center donor. The donor impurity is then moved from the center to the pole of the hemispherical quantum dot, and the eigenvalue equation is solved via the Ritz variational principle, using a trial wave function in which the Coulomb attraction between the electron and the ionized donor is taken into account, and by a two-dimensional finite difference approach. The numerical codes developed give access to the variations of the donor total energy, binding energy, Coulomb correlation parameter, spatial extension and radial probability density with respect to the hemisphere radius and the impurity position inside the quantum dot.

  3. Faster exact Markovian probability functions for motif occurrences: a DFA-only approach.

    Science.gov (United States)

    Ribeca, Paolo; Raineri, Emanuele

    2008-12-15

    The computation of the statistical properties of motif occurrences has an obviously relevant application: patterns that are significantly over- or under-represented in genomes or proteins are interesting candidates for biological roles. However, the problem is computationally hard; as a result, virtually all existing motif finders use fast but approximate scoring functions, despite the fact that these have been shown to produce systematically incorrect results. A few interesting exact approaches are known, but they are very slow and hence not practical for realistic sequences. We give an exact solution, based solely on deterministic finite-state automata (DFA), to the problem of finding the whole relevant part of the probability distribution function of a simple-word motif in a homogeneous (biological) sequence. From this, the z-value can always be computed, while the P-value can be obtained either when it is not too extreme with respect to the number of floating-point digits available in the implementation, or when the number of pattern occurrences is moderately low. In particular, for Markov models of moderate order the time complexity is such that we obtain an algorithm which is both easily interpretable and efficient. This approach can be used for exact statistical studies of very long genomes and protein sequences, as we illustrate with some examples on the scale of the human genome.
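The DFA idea can be illustrated in the simplest (order-0 Markov, i.e. i.i.d.) case: track the state of a Knuth-Morris-Pratt automaton together with the occurrence count, and propagate probabilities by dynamic programming. This is a sketch of the general principle, not the authors' implementation, and it does not scale to the extreme P-values the paper handles.

```python
def build_dfa(pattern, alphabet):
    """KMP-style DFA: state = length of the longest prefix of `pattern`
    matching a suffix of the text read so far (overlaps allowed)."""
    m = len(pattern)
    delta = [dict() for _ in range(m + 1)]
    for sym in alphabet:
        delta[0][sym] = 1 if m > 0 and sym == pattern[0] else 0
    fail = 0
    for state in range(1, m + 1):
        for sym in alphabet:
            if state < m and sym == pattern[state]:
                delta[state][sym] = state + 1
            else:
                delta[state][sym] = delta[fail][sym]
        if state < m:
            fail = delta[fail][pattern[state]]
    return delta

def occurrence_distribution(pattern, n, probs):
    """Exact PMF of the number of (possibly overlapping) occurrences of
    `pattern` in an i.i.d. random sequence of length n.
    `probs`: dict mapping symbol -> probability."""
    delta = build_dfa(pattern, list(probs))
    m = len(pattern)
    dist = {(0, 0): 1.0}          # (DFA state, occurrence count) -> probability
    for _ in range(n):
        nxt = {}
        for (state, count), pr in dist.items():
            for sym, ps in probs.items():
                t = delta[state][sym]
                key = (t, count + (t == m))   # entering the final state = one occurrence
                nxt[key] = nxt.get(key, 0.0) + pr * ps
        dist = nxt
    pmf = {}
    for (_, count), pr in dist.items():
        pmf[count] = pmf.get(count, 0.0) + pr
    return pmf

pmf = occurrence_distribution("AA", 4, {"A": 0.5, "C": 0.5})
print(pmf)   # pmf[0] = 0.5, pmf[1] = 5/16, pmf[2] = 2/16, pmf[3] = 1/16
```

Extending the DP state with the previous symbols (instead of assuming i.i.d.) gives the higher-order Markov version the record describes.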

  4. Free energy distribution function of a random Ising ferromagnet

    International Nuclear Information System (INIS)

    Dotsenko, Victor; Klumov, Boris

    2012-01-01

    We study the free energy distribution function of a weakly disordered Ising ferromagnet in terms of the D-dimensional random temperature Ginzburg–Landau Hamiltonian. It is shown that besides the usual Gaussian 'body' this distribution function exhibits non-Gaussian tails both in the paramagnetic and in the ferromagnetic phases. Explicit asymptotic expressions for these tails are derived. It is demonstrated that the tails are strongly asymmetric: the left tail (for large negative values of the free energy) is much slower than the right one (for large positive values of the free energy). It is argued that at the critical point the free energy of the random Ising ferromagnet in dimensions D < 4 is described by a non-trivial universal distribution function which is non-self-averaging.

  5. Energy expressions in density-functional theory using line integrals.

    NARCIS (Netherlands)

    van Leeuwen, R.; Baerends, E.J.

    1995-01-01

    In this paper we will address the question of how to obtain energies from functionals when only the functional derivative is given. It is shown that one can obtain explicit expressions for the exchange-correlation energy from approximate exchange-correlation potentials using line integrals along

  6. The influence of the electron wave function on the Pt Lsub(I) and Lsub(III) ionization probabilities by 3.6 MeV He impact

    International Nuclear Information System (INIS)

    Ullrich, J.; Dangendorf, V.; Dexheimer, K.; Do, K.; Kelbch, C.; Kelbch, S.; Schadt, W.; Schmidt-Boecking, H.; Stiebing, K.E.; Roesel, F.; Trautmann, D.

    1986-01-01

    For 3.6 MeV He impact, the Lsub(I) and Lsub(III) subshell ionization probabilities of Pt have been measured. Due to relativistic effects in the electron wave functions, the Lsub(I) subshell ionization probability Isub(LI)(b) is strongly enhanced at small impact parameters, exceeding even Isub(LIII)(b), in good agreement with the SCA theory. (orig.)

  7. Nash equilibrium with lower probabilities

    DEFF Research Database (Denmark)

    Groes, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

    1998-01-01

    We generalize the concept of Nash equilibrium in mixed strategies for strategic form games to allow for ambiguity in the players' expectations. In contrast to other contributions, we model ambiguity by means of so-called lower probability measures or belief functions, which makes it possible...

  8. Pade approximant calculations for neutron escape probability

    International Nuclear Information System (INIS)

    El Wakil, S.A.; Saad, E.A.; Hendi, A.A.

    1984-07-01

    The neutron escape probability from a non-multiplying slab containing an internal source is defined in terms of a functional relation for the scattering function of the diffuse reflection problem. The Padé approximant technique is used to obtain numerical results, which are compared with exact results. (author)
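The Padé technique named in this record builds a rational approximant from the Taylor coefficients of a function. As a generic illustration (exp(x), not the record's scattering function), the construction with exact rational arithmetic looks as follows:

```python
import math
from fractions import Fraction

def gauss_solve(A, rhs):
    """Exact Gauss-Jordan elimination over Fractions (small systems)."""
    n = len(rhs)
    aug = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col] / aug[col][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [aug[r][n] / aug[r][r] for r in range(n)]

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns numerator and denominator coefficient lists, with b_0 = 1."""
    # Denominator: sum_{j=0..M} b_j * c_{k-j} = 0 for k = L+1 .. L+M.
    A = [[(c[k - j] if k - j >= 0 else Fraction(0)) for j in range(1, M + 1)]
         for k in range(L + 1, L + M + 1)]
    rhs = [-c[k] for k in range(L + 1, L + M + 1)]
    b = [Fraction(1)] + gauss_solve(A, rhs)
    # Numerator: a_k = sum_{j=0..min(k,M)} b_j * c_{k-j}.
    a = [sum(b[j] * c[k - j] for j in range(min(k, M) + 1)) for k in range(L + 1)]
    return a, b

c = [Fraction(1, math.factorial(k)) for k in range(5)]   # Taylor series of exp(x)
num, den = pade(c, 1, 1)
print(num, den)   # (1 + x/2) / (1 - x/2)
```

The [1/1] approximant of exp reproduces the classical (1 + x/2)/(1 - x/2), and [2/2] gives the next entry of the Padé table.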

  9. The H-Function and Probability Density Functions of Certain Algebraic Combinations of Independent Random Variables with H-Function Probability Distribution

    Science.gov (United States)

    1981-05-01

    Education, 10 (2), A45-A49 (1976). 48. Raina, R. K., and C. L. Kaul (Koul), "Some inequalities involving the Fox's H-function," Proceedings of the Indian... (1973). 51. Srivastava, A., and K. C. Gupta, "On certain recurrence relations," Mathematische Nachrichten, 46, 13-23 (1970), 49, 187-197 (1971). 52. ... Vijnana Parishad Anusandhan Patrika, 10, 205-217 (1967). 69. Gupta, K. C., and A. Srivastava, "On finite expansions for the H-function," Indian Journal...

  10. Using Fuzzy Probability Weights in Cumulative Prospect Theory

    Directory of Open Access Journals (Sweden)

    Užga-Rebrovs Oļegs

    2016-12-01

    Full Text Available In recent years, descriptive approaches to decision choice have grown rapidly. As opposed to normative expected utility theory, these approaches are based on individuals' subjective perception of probabilities, as it occurs in real situations of risky choice. Perceptions of this kind are modelled by means of probability weighting functions. In cumulative prospect theory, which is the focus of this paper, decision weights for prospect outcomes are calculated from the obtained probability weights. If value functions are constructed on the sets of positive and negative outcomes, then, based on the outcome value evaluations and the outcome decision weights, generalised evaluations of prospect value are calculated, which form the basis for choosing an optimal prospect.
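One common parameterization of the probability weighting function described above is Tversky and Kahneman's w(p) = p^γ / (p^γ + (1-p)^γ)^(1/γ); the γ = 0.61 below is their estimated gains parameter, used here only as an illustrative assumption. The cumulative (rank-dependent) decision weights are then differences of w evaluated at cumulated probabilities.

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function
    w(p) = p^g / (p^g + (1 - p)^g)^(1/g)."""
    p = min(max(p, 0.0), 1.0)      # guard against float round-off
    if p in (0.0, 1.0):
        return p
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def cumulative_weights(probs, gamma=0.61):
    """Rank-dependent decision weights for gains ordered best-first:
    pi_i = w(p_1 + ... + p_i) - w(p_1 + ... + p_{i-1})."""
    weights, cum_prev = [], 0.0
    for p in probs:
        weights.append(weight(cum_prev + p, gamma) - weight(cum_prev, gamma))
        cum_prev += p
    return weights

print(weight(0.1))                          # > 0.1: small probabilities overweighted
print(cumulative_weights([0.1, 0.3, 0.6]))  # decision weights, summing to 1
```

The telescoping construction guarantees the decision weights sum to w(1) = 1, while small probabilities are overweighted and large ones underweighted, the inverse-S shape the record relies on.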

  11. Random phenomena fundamentals of probability and statistics for engineers

    CERN Document Server

    Ogunnaike, Babatunde A

    2009-01-01

    Prelude; Approach Philosophy; Four Basic Principles; I. Foundations: Two Motivating Examples; Yield Improvement in a Chemical Process; Quality Assurance in a Glass Sheet Manufacturing Process; Outline of a Systematic Approach; Random Phenomena, Variability, and Uncertainty; Two Extreme Idealizations of Natural Phenomena; Random Mass Phenomena; Introducing Probability; The Probabilistic Framework; II. Probability: Fundamentals of Probability Theory; Building Blocks; Operations; Probability; Conditional Probability; Independence; Random Variables and Distributions; Distributions; Mathematical Expectation; Characterizing Distributions; Special Derived Probability Functions; Multidimensional Random Variables; Distributions of Several Random Variables; Distributional Characteristics of Jointly Distributed Random Variables; Random Variable Transformations; Single Variable Transformations; Bivariate Transformations; General Multivariate Transformations; Application Case Studies I: Probability; Mendel and Heredity; World War II Warship Tactical Response Under Attack; III. Distributions; Ide...

  12. Fitness Probability Distribution of Bit-Flip Mutation.

    Science.gov (United States)

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
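For the Onemax case mentioned above, the exact fitness distribution after uniform bit-flip mutation can be computed by direct convolution: of the k one-bits, X ~ Bin(k, p) flip to zero, and of the n-k zero-bits, Y ~ Bin(n-k, p) flip to one, so the new fitness is k - X + Y. This is an elementary sketch, not the paper's Krawtchouk-polynomial machinery.

```python
from math import comb

def binom_pmf(n, p):
    """PMF of Bin(n, p) as a list indexed by the number of successes."""
    return [comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(n + 1)]

def onemax_fitness_pmf(n, k, p):
    """Exact distribution of Onemax fitness after uniform bit-flip mutation
    (each bit flips independently with probability p) of a string of
    length n containing k ones."""
    down = binom_pmf(k, p)        # X: ones flipped to zeros
    up = binom_pmf(n - k, p)      # Y: zeros flipped to ones
    pmf = [0.0] * (n + 1)
    for x, px in enumerate(down):
        for y, py in enumerate(up):
            pmf[k - x + y] += px * py
    return pmf

pmf = onemax_fitness_pmf(n=10, k=7, p=0.1)
print(sum(pmf))                               # 1.0 (up to round-off)
print(sum(f * q for f, q in enumerate(pmf)))  # E[fitness] = k(1-p) + (n-k)p
```

Expanding the resulting PMF in p confirms the paper's claim that each fitness probability is a polynomial in the per-bit flip probability.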

  13. Probability

    CERN Document Server

    Shiryaev, A N

    1996-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs of a number of important results which were merely stated in the first edition have been added. The author has included new material on the probability of large deviations and on the central limit theorem for sums of dependent random variables.

  14. Accurate potential energy curves, spectroscopic parameters, transition dipole moments, and transition probabilities of 21 low-lying states of the CO+ cation

    Science.gov (United States)

    Xing, Wei; Shi, Deheng; Zhang, Jicai; Sun, Jinfeng; Zhu, Zunlue

    2018-05-01

    This paper calculates the potential energy curves of 21 Λ-S and 42 Ω states, which arise from the first two dissociation asymptotes of the CO+ cation. The calculations are conducted using the complete active space self-consistent field method, which is followed by the valence internally contracted multireference configuration interaction approach with the Davidson correction. To improve the reliability and accuracy of the potential energy curves, core-valence correlation and scalar relativistic corrections, as well as the extrapolation of potential energies to the complete basis set limit are taken into account. The spectroscopic parameters and vibrational levels are determined. The spin-orbit coupling effect on the spectroscopic parameters and vibrational levels is evaluated. To better study the transition probabilities, the transition dipole moments are computed. The Franck-Condon factors and Einstein coefficients of some emissions are calculated. The radiative lifetimes are determined for a number of vibrational levels of several states. The transitions between different Λ-S states are evaluated. Spectroscopic routines for observing these states are proposed. The spectroscopic parameters, vibrational levels, transition dipole moments, and transition probabilities reported in this paper can be considered to be very reliable and can be used as guidelines for detecting these states in an appropriate spectroscopy experiment, especially for the states that were very difficult to observe or were not detected in previous experiments.

  15. The probability of an encounter of two Brownian particles before escape

    International Nuclear Information System (INIS)

    Holcman, D; Kupka, I

    2009-01-01

    We study the probability of a meeting of two Brownian particles before one of them exits a finite interval. We obtain an explicit expression for the probability as a function of the initial distance between the two particles using the Weierstrass elliptic function. We also find the law of the meeting location. Brownian simulations show the accuracy of our analysis. Finally, we discuss some applications to the probability that a double-strand DNA break is repaired in a confined environment.
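    The Brownian-simulation check mentioned above can be sketched with a simple Monte Carlo (all numerical parameters here are illustrative, not taken from the paper): two discretized Brownian paths evolve on (0, L) until they either cross or one of them escapes through an endpoint.

    ```python
    import numpy as np

    def meet_before_exit_prob(x1, x2, L=1.0, dt=2.5e-4, n_sim=4000, seed=0):
        """Monte Carlo estimate of the probability that two Brownian particles,
        started at x1 < x2 inside (0, L), meet before either one escapes
        through an endpoint of the interval."""
        rng = np.random.default_rng(seed)
        a = np.full(n_sim, float(x1))
        b = np.full(n_sim, float(x2))
        met = np.zeros(n_sim, dtype=bool)
        alive = np.ones(n_sim, dtype=bool)
        step = np.sqrt(dt)
        while alive.any():
            n = int(alive.sum())
            a[alive] += rng.normal(0.0, step, n)
            b[alive] += rng.normal(0.0, step, n)
            crossed = alive & (a >= b)  # paths crossed: an encounter
            escaped = alive & ((a <= 0.0) | (a >= L) | (b <= 0.0) | (b >= L))
            met |= crossed
            alive &= ~(crossed | escaped)
        return met.mean()

    # Closer starting points make an encounter before escape more likely.
    p_near = meet_before_exit_prob(0.45, 0.55)
    p_far = meet_before_exit_prob(0.10, 0.90)
    ```

    Such a simulation reproduces the qualitative dependence on the initial distance that the analytical expression captures exactly.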

  16. Kinetic Analysis of Isothermal Decomposition Process of Sodium Bicarbonate Using the Weibull Probability Function—Estimation of Density Distribution Functions of the Apparent Activation Energies

    Science.gov (United States)

    Janković, Bojan

    2009-10-01

    The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion range 0.20 ≤ α ≤ 0.80, the apparent activation energy (E_a) was approximately constant (E_a,int = 95.2 kJ mol^-1 and E_a,diff = 96.6 kJ mol^-1, respectively). The values of E_a calculated by both isoconversional methods are in good agreement with the value of E_a evaluated from the Arrhenius equation (94.3 kJ mol^-1), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model of the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18 (1-α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddfE_a's) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition process of NaHCO3.
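    The connection between the Weibull scale parameter η(T) and the Arrhenius activation energy can be illustrated with a toy calculation using the isothermal conversion curve α(t) = 1 − exp(−(t/η)^β). The assumed E_a and pre-exponential factor below are hypothetical round numbers, not the paper's fitted values; the point is only that E_a is recoverable from the slope of ln(1/η) versus 1/T at the four operating temperatures.

    ```python
    import numpy as np

    R = 8.314           # gas constant, J mol^-1 K^-1
    E_a_true = 95.0e3   # assumed apparent activation energy, J/mol (illustrative)
    A = 1.0e9           # assumed pre-exponential factor, s^-1 (illustrative)

    T = np.array([380.0, 400.0, 420.0, 440.0])  # operating temperatures, K
    # Weibull scale parameter eta(T): the rate constant k = 1/eta is
    # given an Arrhenius form here.
    eta = 1.0 / (A * np.exp(-E_a_true / (R * T)))

    beta = 1.07  # shape parameter (temperature-independent in the study)

    def conversion(t, eta_T):
        """Isothermal Weibull conversion curve alpha(t) = 1 - exp(-(t/eta)^beta)."""
        return 1.0 - np.exp(-(t / eta_T) ** beta)

    # Recover E_a from the slope of ln(1/eta) versus 1/T
    slope, intercept = np.polyfit(1.0 / T, np.log(1.0 / eta), 1)
    E_a_est = -slope * R  # J/mol
    ```

    At t = η the conversion equals 1 − e^(−1) regardless of β, which is the usual interpretation of the scale parameter as a characteristic decomposition time.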

  17. Nuclear-medium renormalization of the probabilities of the absorption of slow negative pions by nuclei

    International Nuclear Information System (INIS)

    Ivankov, Yu. V.; Kadmensky, S. G.

    1997-01-01

    Regions of admissible values of four constants, determined by the ratios of the reduced probabilities of the absorption of slow negative pions by nucleon pairs in a nucleus to the analogous probabilities of absorption by free nucleon pairs, are found from a comparison of theoretical results with experimental data on the nuclear widths of the levels of π⁻-mesic atoms and on the yields and energy distributions of nucleons and correlated nn and np pairs emitted in the absorption of slow negative pions by nuclei. It is concluded that some of these constants deviate considerably from unity. This suggests that the nuclear medium strongly affects the hadron propagators or vertex functions determining negative-pion absorption by a nucleon pair at small distances.

  18. Energy transfer upon collision of selectively excited CO{sub 2} molecules: State-to-state cross sections and probabilities for modeling of atmospheres and gaseous flows

    Energy Technology Data Exchange (ETDEWEB)

    Lombardi, A., E-mail: ebiu2005@gmail.com; Faginas-Lago, N.; Pacifici, L.; Grossi, G. [Dipartimento di Chimica, Università di Perugia, via Elce di Sotto 8, 06123 Perugia (Italy)

    2015-07-21

    Carbon dioxide molecules can store and release tens of kcal/mol upon collisions, and such energy transfer strongly influences the energy disposal and the chemical processes in gases under the extreme conditions typical of plasmas and hypersonic flows. Moreover, the energy transfer involving CO{sub 2} characterizes the global dynamics of the Earth-atmosphere system and the energy balance of other planetary atmospheres. Contemporary developments in the kinetic modeling of gaseous mixtures are connected to progress in the description of the energy transfer, and, in particular, attempts to include non-equilibrium effects require the consideration of state-specific energy exchanges. A systematic study of the state-to-state vibrational energy transfer in CO{sub 2} + CO{sub 2} collisions is the focus of the present work, aided by a theoretical and computational tool based on quasiclassical trajectory simulations and an accurate full-dimensional model of the intermolecular interactions. In this model, the accuracy of the description of the intermolecular forces (which determine the probability of energy transfer in molecular collisions) is enhanced by explicitly accounting for the specific effects of the distortion of the CO{sub 2} structure due to vibrations. Results show that these effects are important for the energy transfer probabilities. Moreover, the role of the rotational and vibrational degrees of freedom is found to be dominant in the energy exchange, while the average contribution of translations, under the temperature and energy conditions considered, is negligible. Remarkably, the intramolecular energy transfer only involves stretching and bending, unless one of the colliding molecules has an initial symmetric-stretching quantum number greater than a threshold value estimated to be equal to 7.

  19. Fermi-Dirac function and energy gap

    OpenAIRE

    Bondarev, Boris

    2013-01-01

    The mean-field method is applied to study the behavior of valence electrons in metals. When electrons with different wave vectors attract one another at low temperatures, the distribution function becomes discontinuous. As a result, a specific energy gap occurs.

  20. Fractal supersymmetric QM, Geometric Probability and the Riemann Hypothesis

    CERN Document Server

    Castro, C

    2004-01-01

    The Riemann hypothesis (RH) states that the nontrivial zeros of the Riemann zeta-function are of the form $ s_n =1/2+i\lambda_n $. Earlier work on the RH based on supersymmetric QM, whose potential was related to the Gauss-Jacobi theta series, provides the proper framework to construct a well-defined algorithm to compute the probability of finding a zero (an infinity of zeros) on the critical line. Geometric probability theory furnishes the answer to the very difficult question of whether the probability that the RH is true is indeed equal to unity or not. To test the validity of this geometric probabilistic framework for computing the probability that the RH is true, we apply it directly to the hyperbolic sine function $ \sinh (s) $, which obeys a trivial analog of the RH (the HSRH). Its zeros are equally spaced on the imaginary axis $ s_n = 0 + i n \pi $. The geometric probability of finding a zero (and an infinity of zeros) on the imaginary axis is exactly unity. We proceed with a fractal supersymme...

  1. Probability distribution of wave packet delay time for strong overlapping of resonance levels

    International Nuclear Information System (INIS)

    Lyuboshits, V.L.

    1983-01-01

    The time behaviour of nuclear reactions in the case of high level densities is investigated on the basis of the theory of overlapping resonances. In the framework of a model of n equivalent channels, an analytical expression is obtained for the probability distribution function of the wave-packet delay time in compound-nucleus production. It is shown that, for strong overlapping of the resonance levels, the relative fluctuation of the delay time is small at the stage of compound-nucleus production. A possible increase in the duration of nuclear reactions with rising excitation energy is discussed.

  2. Identifying functional reorganization of spelling networks: an individual peak probability comparison approach

    Science.gov (United States)

    Purcell, Jeremy J.; Rapp, Brenda

    2013-01-01

    Previous research has shown that damage to the neural substrates of orthographic processing can lead to functional reorganization during reading (Tsapkini et al., 2011); in this research we ask if the same is true for spelling. To examine the functional reorganization of spelling networks we present a novel three-stage Individual Peak Probability Comparison (IPPC) analysis approach for comparing the activation patterns obtained during fMRI of spelling in a single brain-damaged individual with dysgraphia to those obtained in a set of non-impaired control participants. The first analysis stage characterizes the convergence in activations across non-impaired control participants by applying a technique typically used for characterizing activations across studies: Activation Likelihood Estimate (ALE) (Turkeltaub et al., 2002). This method was used to identify locations that have a high likelihood of yielding activation peaks in the non-impaired participants. The second stage provides a characterization of the degree to which the brain-damaged individual's activations correspond to the group pattern identified in Stage 1. This involves performing a Mahalanobis distance statistics analysis (Tsapkini et al., 2011) that compares each of a control group's peak activation locations to the nearest peak generated by the brain-damaged individual. The third stage evaluates the extent to which the brain-damaged individual's peaks are atypical relative to the range of individual variation among the control participants. This IPPC analysis allows for a quantifiable, statistically sound method for comparing an individual's activation pattern to the patterns observed in a control group and, thus, provides a valuable tool for identifying functional reorganization in a brain-damaged individual with impaired spelling. Furthermore, this approach can be applied more generally to compare any individual's activation pattern with that of a set of other individuals. PMID:24399981
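    The Mahalanobis-distance comparison of Stage 2 can be sketched as follows; the 3-D peak coordinates are invented for illustration and are not data from the study.

    ```python
    import numpy as np

    def mahalanobis(x, mean, cov):
        """Mahalanobis distance of point x from a distribution with the
        given mean and covariance."""
        d = x - mean
        return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

    # Hypothetical 3-D activation-peak coordinates (mm) from control participants
    control_peaks = np.array([
        [-44.0, -58.0, -12.0],
        [-46.0, -55.0, -10.0],
        [-41.0, -61.0, -15.0],
        [-45.0, -56.0,  -9.0],
        [-43.0, -60.0, -14.0],
    ])
    mean = control_peaks.mean(axis=0)
    cov = np.cov(control_peaks, rowvar=False)

    # Distance of an individual's nearest peak from the control distribution:
    # a large value flags the peak as atypical relative to control variation.
    individual_peak = np.array([-40.0, -50.0, -8.0])
    d_individual = mahalanobis(individual_peak, mean, cov)
    d_typical = mahalanobis(control_peaks[0], mean, cov)
    ```

    A peak whose distance greatly exceeds those of the control peaks themselves would be the kind of candidate for functional reorganization that Stage 3 then evaluates statistically.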

  3. Joint Probability Distributions for a Class of Non-Markovian Processes

    OpenAIRE

    Baule, A.; Friedrich, R.

    2004-01-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H.C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fr...

  4. Estimating reliability of degraded system based on the probability density evolution with multi-parameter

    Directory of Open Access Journals (Sweden)

    Jiang Ge

    2017-01-01

    Full Text Available System degradation is usually caused by multiple-parameter degradation. The reliability of such a system assessed by the universal generating function is of low accuracy compared with Monte Carlo simulation, and the probability density function of the system output performance cannot be obtained. A reliability assessment method based on probability density evolution with multiple parameters is therefore presented for complexly degraded systems. First, the system output function is founded according to the transitive relation between component parameters and the system output performance. Then, the probability density evolution equation is established from the probability conservation principle and the system output function. Furthermore, the probability distribution of the system output performance is obtained by solving the differential equation. Finally, the reliability of the degraded system is estimated. This method does not need to discretize the performance parameters and can establish a continuous probability density function of the system output performance with high calculation efficiency and low cost. A numerical example shows that the method is applicable to evaluating the reliability of multi-parameter degraded systems.
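    In the simplest one-parameter case, the probability-conservation step reduces to the classical change-of-variables formula p_Y(y) = p_X(g⁻¹(y)) |dg⁻¹(y)/dy| for a monotone output function Y = g(X). A minimal sketch, with a hypothetical performance function and parameters:

    ```python
    import numpy as np

    def output_pdf(g_inv, dg_inv_dy, p_x, y):
        """Density of Y = g(X) by probability conservation:
        p_Y(y) = p_X(g^{-1}(y)) * |d g^{-1}(y) / dy|."""
        return p_x(g_inv(y)) * np.abs(dg_inv_dy(y))

    # Hypothetical monotone performance function Y = exp(-X), X ~ N(mu, sigma)
    mu, sigma = 0.5, 0.1
    p_x = lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    g_inv = lambda y: -np.log(y)       # X = g^{-1}(Y)
    dg_inv_dy = lambda y: -1.0 / y     # derivative of the inverse map

    y = np.linspace(0.3, 0.9, 601)     # grid covering the support of Y
    p_y = output_pdf(g_inv, dg_inv_dy, p_x, y)
    ```

    The resulting density integrates to one, as probability conservation requires; the multi-parameter evolution equation of the paper generalizes this bookkeeping to several degrading parameters at once.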

  5. Exchanging and Storing Energy. Reducing Energy Demand through Heat Exchange between Functions and Temporary Storage

    Energy Technology Data Exchange (ETDEWEB)

    Sillem, E.

    2011-06-15

    Typical office buildings from the nineties have large heating and cooling installations to provide heat or cold wherever and whenever needed, whereas more recent office buildings have almost no demand for heating due to the high internal heat loads caused by people, lighting, and office appliances, and because of the great thermal quality of the contemporary building envelope. However, these buildings still have vast cooling units to cool down servers and other energy-consuming installations. At the same time, other functions such as dwellings, swimming pools, sporting facilities, archives, and museums still need to be heated most of the year. In the current building market there is an increasing demand for mixed-use buildings, or so-called hybrid buildings. The Science Business Centre is no different and houses a conference centre, offices, a museum, archives, an exhibition space, and a restaurant. From the initial program brief it appeared that the building will simultaneously house functions that need cooling most of the year and functions that need to be heated the majority of the year. Can this building be equipped with a 'micro heating and cooling network' and, where necessary, temporarily store energy? With this idea a research proposal was formulated: how can the demand for heating and cooling of the Science Business Centre be reduced by using energy exchange between different kinds of functions and by temporarily storing energy? In conclusion, the research led to: four optimized installation concepts; short-term energy storage in the pavilion concept and museum; energy exchange between the restaurant and archives; energy exchange between the server space and the offices; the majority of heat and cold will be extracted from the soil (long-term energy storage); the excess heat will be generated by the energy roof; PV cells from the energy roof power all climate installations; a total energy plan for the Science Business Centre; a systematic approach for exchanging

  6. Scaling Qualitative Probability

    OpenAIRE

    Burgin, Mark

    2017-01-01

    There are different approaches to qualitative probability, which includes subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...

  7. Towards improved local hybrid functionals by calibration of exchange-energy densities

    International Nuclear Information System (INIS)

    Arbuznikov, Alexei V.; Kaupp, Martin

    2014-01-01

    A new approach for the calibration of (semi-)local and exact exchange-energy densities in the context of local hybrid functionals is reported. The calibration functions are derived from only the electron density and its spatial derivatives, avoiding spatial derivatives of the exact-exchange energy density or other computationally unfavorable contributions. The calibration functions fulfill the seven most important of the nine known exact constraints. It is shown that calibration improves substantially the definition of a non-dynamical correlation energy term for generalized gradient approximation (GGA)-based local hybrids. Moreover, gauge artifacts in the potential-energy curves of noble-gas dimers may be corrected by calibration. The developed calibration functions are then evaluated for a large range of energy-related properties (atomization energies, reaction barriers, ionization potentials, electron affinities, and total atomic energies) of three sets of local hybrids, using a simple one-parameter local-mixing function. The functionals are based on (a) local spin-density approximation (LSDA) or (b) Perdew-Burke-Ernzerhof (PBE) exchange and correlation, and on (c) Becke-88 (B88) exchange and Lee-Yang-Parr (LYP) correlation. While the uncalibrated GGA-based functionals usually provide very poor thermochemical data, calibration allows a dramatic improvement, accompanied by only a small deterioration of reaction barriers. In particular, an optimized BLYP-based local-hybrid functional has been found that is a substantial improvement over the underlying global hybrids, as well as over previously reported LSDA-based local hybrids. It is expected that the present calibration approach will pave the way towards new generations of more accurate hyper-GGA functionals based on a local mixing of exchange-energy densities

  8. Power probability density function control and performance assessment of a nuclear research reactor

    International Nuclear Information System (INIS)

    Abharian, Amir Esmaeili; Fadaei, Amir Hosein

    2014-01-01

    Highlights: • This paper discusses the performance assessment of a static PDF control system. • The reactor PDF model is set up based on B-spline functions. • The coupled neutronics and thermal-hydraulics equations are solved concurrently by a reformed Hansen's method. • A principle of performance assessment is put forward for PDF control of the nuclear reactor. - Abstract: One of the main issues in controlling a system is to keep track of the condition of the system function. The performance condition of the system should be inspected continuously to keep the system in reliable working condition. In this study, the nuclear reactor is considered as a complicated system, and a principle of performance assessment is used for analyzing the performance of the power probability density function (PDF) control of the nuclear research reactor. First, the model of the power PDF is set up; then the controller is designed to make the power PDF trace the given shape, which makes the reactor a closed-loop system. The operating data of the closed-loop reactor are used to assess the control performance against the performance assessment criteria. The modeling, controller design, and performance assessment of the power PDF are all applied to the control of the Tehran Research Reactor (TRR) power in a nuclear process. In this paper, the performance assessment of the static PDF control system is discussed, the efficacy and efficiency of the proposed method are investigated, and finally its reliability is proven

  9. Does charge transfer correlate with ignition probability?

    International Nuclear Information System (INIS)

    Holdstock, Paul

    2008-01-01

    Flammable or explosive atmospheres exist in many industrial environments. The risk of ignition caused by electrostatic discharges is very real and there has been extensive study of the incendiary nature of sparks and brush discharges. It is clear that in order to ignite a gas, an amount of energy needs to be delivered to a certain volume of gas within a comparatively short time. It is difficult to measure the energy released in an electrostatic discharge directly, but it is possible to approximate the energy in a spark generated from a well defined electrical circuit. The spark energy required to ignite a gas, vapour or dust cloud can be determined by passing such sparks through them. There is a relationship between energy and charge in a capacitive circuit and so it is possible to predict whether or not a spark discharge will cause an ignition by measuring the charge transferred in the spark. Brush discharges are in many ways less well defined than sparks. Nevertheless, some work has been done that has established a relationship between charge transferred in brush discharges and the probability of igniting a flammable atmosphere. The question posed by this paper concerns whether such a relationship holds true in all circumstances and if there is a universal correlation between charge transfer and ignition probability. Data is presented on discharges from textile materials that go some way to answering this question.
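    The energy-charge relation behind this kind of prediction is the capacitive-discharge formula W = Q²/(2C): a measured charge transfer can be converted into an approximate spark energy and compared against a minimum ignition energy (MIE). A minimal sketch with illustrative values (the capacitance and MIE below are typical textbook magnitudes, not data from this paper):

    ```python
    def spark_energy_from_charge(q_coulomb, c_farad):
        """Energy (J) stored on a capacitance C carrying charge Q: W = Q^2 / (2C)."""
        return q_coulomb ** 2 / (2.0 * c_farad)

    def may_ignite(q_coulomb, c_farad, mie_joule):
        """Crude screen: compare the spark energy with the minimum
        ignition energy (MIE) of the atmosphere."""
        return spark_energy_from_charge(q_coulomb, c_farad) >= mie_joule

    # Illustrative: a 100 pF body-like capacitance and a 0.24 mJ MIE
    # (propane-like atmosphere; both values are assumptions, not measurements)
    C = 100e-12     # F
    MIE = 0.24e-3   # J
    q_small = 50e-9   # 50 nC transferred
    q_large = 300e-9  # 300 nC transferred
    ```

    Because W grows with the square of Q at fixed capacitance, a charge-transfer threshold for ignition is well defined in a capacitive circuit; the open question raised here is whether such a threshold carries over to less well-defined discharges such as brush discharges.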

  10. {sup 134}Cs emission probabilities determination by gamma spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, M.C.M. de, E-mail: candida@cnen.gov.br [Comissão Nacional de Energia Nuclear (DINOR/CNEN), Rio de Janeiro, RJ (Brazil); Poledna, R.; Delgado, J.U.; Silva, R.L.; Araujo, M.T.; Silva, C.J. da [Instituto de Radioproteção e Dosimetria (LNMRI/IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    The National Laboratory for Ionizing Radiation Metrology (LNMRI/IRD/CNEN) of Rio de Janeiro has performed primary and secondary standardizations of different radionuclides, reaching satisfactory uncertainties. A solution of the {sup 134}Cs radionuclide was purchased from a commercial supplier for the determination of the emission probabilities of some of its energies. {sup 134}Cs is a beta-gamma emitter with a half-life of 754 days. This radionuclide is used as a standard in environmental, water, and food control, and is also important for germanium detector calibration. The gamma emission probabilities (Pγ) were determined for some energies of {sup 134}Cs by the efficiency curve method, and the absolute Pγ uncertainties obtained were below 1% (k=1). (author)
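    The efficiency curve method amounts to Pγ = N/(ε·A·t) for a full-energy peak, with the independent relative uncertainties combined in quadrature. A sketch with invented values (not LNMRI data):

    ```python
    import math

    def gamma_emission_probability(net_counts, efficiency, activity_bq, live_time_s):
        """P_gamma = N / (eps * A * t) for a full-energy peak."""
        return net_counts / (efficiency * activity_bq * live_time_s)

    def relative_uncertainty(*rel_uncertainties):
        """Combine independent relative uncertainties in quadrature."""
        return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

    # Illustrative values (assumptions, not measured data): 604 keV peak of 134Cs
    N = 1.95e6     # net peak area (counts)
    eps = 8.0e-3   # full-energy peak efficiency at 604 keV
    A = 5.0e4      # source activity (Bq)
    t = 5000.0     # live time (s)

    p_gamma = gamma_emission_probability(N, eps, A, t)
    u_rel = relative_uncertainty(0.002, 0.004, 0.003)  # counting, efficiency, activity
    ```

    With component uncertainties of a few tenths of a percent each, the combined relative uncertainty stays below 1%, consistent with the level reported above.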

  11. Finite-size scaling of survival probability in branching processes

    OpenAIRE

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Alvaro

    2014-01-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We reveal the finite-size scaling law of the survival probability for a given branching process ruled by a probability distribution of the number of offspring per element whose standard deviation is finite, obtaining the exact scaling function as well as the critical exponents. Our findings prove the universal behavi...

  12. Estimation of probability density functions of damage parameter for valve leakage detection in reciprocating pump used in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo [Div. of Mechanical Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.
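    The final comparison of probability density functions of the damage parameter can be sketched with a Gaussian kernel density estimate over per-cycle values. The damage-parameter samples below are synthetic, standing in for the 300 measured cycles:

    ```python
    import numpy as np

    def gaussian_kde_pdf(samples, grid, bandwidth=None):
        """Gaussian kernel density estimate of a 1-D pdf on the given grid."""
        samples = np.asarray(samples, dtype=float)
        n = samples.size
        if bandwidth is None:
            # Silverman's rule-of-thumb bandwidth
            bandwidth = 1.06 * samples.std(ddof=1) * n ** (-0.2)
        z = (grid[:, None] - samples[None, :]) / bandwidth
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (n * bandwidth * np.sqrt(2.0 * np.pi))

    # Synthetic damage-parameter values over 300 pump cycles:
    # a healthy valve clusters at a small leak parameter, a leaking
    # valve at a larger mean (both distributions are assumptions).
    rng = np.random.default_rng(1)
    healthy = rng.normal(0.02, 0.005, 300)
    leaking = rng.normal(0.10, 0.015, 300)

    grid = np.linspace(-0.05, 0.20, 501)
    pdf_h = gaussian_kde_pdf(healthy, grid)
    pdf_l = gaussian_kde_pdf(leaking, grid)
    ```

    The separation between the two estimated densities is what makes the damage parameter usable for diagnosis, and the spread of each density feeds the probabilistic prognosis.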

  13. Estimation of probability density functions of damage parameter for valve leakage detection in reciprocating pump used in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo

    2016-01-01

    This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage

  14. Estimation of Probability Density Functions of Damage Parameter for Valve Leakage Detection in Reciprocating Pump Used in Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Jong Kyeom Lee

    2016-10-01

    Full Text Available This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.

  15. Gluon saturation: Survival probability for leading neutrons in DIS

    International Nuclear Information System (INIS)

    Levin, Eugene; Tapia, Sebastian

    2012-01-01

    In this paper we discuss an example of a one-rapidity-gap process: the inclusive cross section of leading neutrons in deep inelastic scattering (DIS) with protons. The equations for this process are proposed and solved, providing an example of a theoretical calculation of the survival probability for one-rapidity-gap processes. It turns out that the value of the survival probability is small and decreases with energy.

  16. A tool for simulating collision probabilities of animals with marine renewable energy devices.

    Directory of Open Access Journals (Sweden)

    Pál Schmitt

    Full Text Available The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details as well as potential issues and limitations in the application of the resulting probability distributions are highlighted.

  17. A tool for simulating collision probabilities of animals with marine renewable energy devices.

    Science.gov (United States)

    Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise

    2017-01-01

    The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details as well as potential issues and limitations in the application of the resulting probability distributions are highlighted.
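    The transient 3-D evaluation can be caricatured by a Monte Carlo over straight-line animal transits through the volume swept by a moving device. The geometry below (a sphere orbiting a circle, a crude stand-in for a tethered kite) and every parameter are hypothetical simplifications of what the full simulations resolve:

    ```python
    import numpy as np

    def collision_probability(n_animals=10000, device_radius=2.0, orbit_radius=10.0,
                              omega=0.5, speed=1.0, box=40.0, seed=0):
        """Monte Carlo sketch of an animal/device collision probability.
        Animals cross a cube of side `box` on straight horizontal paths at
        random heights and lateral offsets; the 'device' is a sphere of
        radius `device_radius` whose centre orbits a horizontal circle of
        radius `orbit_radius` at angular speed `omega`. Returns the fraction
        of transits that pass within `device_radius` of the device centre."""
        rng = np.random.default_rng(seed)
        times = np.arange(0.0, box / speed, 0.1)
        # device-centre trajectory over the transit, computed once
        cx = orbit_radius * np.cos(omega * times)
        cy = orbit_radius * np.sin(omega * times)
        x = -box / 2.0 + speed * times  # animal position along x over time
        hits = 0
        for _ in range(n_animals):
            y0 = rng.uniform(-box / 2.0, box / 2.0)
            z0 = rng.uniform(-box / 2.0, box / 2.0)
            d2 = (x - cx) ** 2 + (y0 - cy) ** 2 + z0 ** 2
            if (d2 < device_radius ** 2).any():
                hits += 1
        return hits / n_animals
    ```

    Enlarging the device (or slowing the animals) raises the estimated probability, which is the kind of sensitivity the full transient simulations quantify for realistic shapes and motions.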

  18. Cellular Analysis of Boltzmann Most Probable Ideal Gas Statistics

    Science.gov (United States)

    Cahill, Michael E.

    2018-04-01

    Exact treatment of Boltzmann's most probable statistics for an ideal gas of identical-mass particles having translational kinetic energy gives a distribution law for velocity phase-space cell j which relates the particle energy and the particle population according to B e(j) = A − Ψ(n(j) + 1), where A and B are the Lagrange multipliers and Ψ is the digamma function defined by Ψ(x + 1) = d/dx ln(x!). A useful, sufficiently accurate approximation for Ψ is given by Ψ(x + 1) ≈ ln(e^(−γ) + x), where γ is the Euler constant (≈ 0.5772156649), so the above distribution equation is approximately B e(j) = A − ln(e^(−γ) + n(j)), which can be inverted to solve for n(j), giving n(j) = (e^(B(e_H − e(j))) − 1) e^(−γ), where B e_H = A + γ and B e_H is a unitless particle energy which replaces the parameter A. The two approximate distribution equations imply that e_H is the highest particle energy and that the highest particle population is n_H = (e^(B e_H) − 1) e^(−γ), owing to the facts that the population becomes negative if e(j) > e_H and the kinetic energy becomes negative if n(j) > n_H. An explicit construction of cells in velocity space which are equal in volume and homogeneous for almost all cells is shown to be useful in the analysis. Plots of sample distribution properties using e(j) as the independent variable are presented.
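    A quick numerical check of the digamma approximation Ψ(x + 1) ≈ ln(e^(−γ) + x) and of the inverted distribution law (the cell energies and Lagrange multipliers used below are arbitrary):

    ```python
    import math

    GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

    def digamma(x):
        """Psi(x), evaluated numerically as the derivative of log-gamma."""
        h = 1e-6
        return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

    def digamma_approx(x):
        """The approximation used above: Psi(x + 1) ~ ln(e^-gamma + x).
        It is exact at x = 0, where both sides equal -gamma."""
        return math.log(math.exp(-GAMMA) + x)

    def population(e_j, e_h, B):
        """Inverted distribution law: n(j) = (e^{B(e_H - e(j))} - 1) e^{-gamma}.
        The population vanishes at the highest particle energy e(j) = e_H."""
        return (math.exp(B * (e_h - e_j)) - 1.0) * math.exp(-GAMMA)
    ```

    The approximation is exact at x = 0 and stays within about 1% for moderate x, and the population formula is monotonically decreasing in the cell energy, reaching zero exactly at e(j) = e_H, as stated above.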

  19. Fishnet model for failure probability tail of nacre-like imbricated lamellar materials

    Science.gov (United States)

    Luo, Wen; Bažant, Zdeněk P.

    2017-12-01

    Nacre, the iridescent material of the shells of pearl oysters and abalone, consists mostly of aragonite (a form of CaCO3), a brittle constituent of relatively low strength (≈10 MPa). Yet it has astonishing mean tensile strength (≈150 MPa) and fracture energy (≈350 to 1,240 J/m2). The reasons have recently become well understood: (i) the nanoscale thickness (≈300 nm) of nacre's building blocks, the aragonite lamellae (or platelets), and (ii) the imbricated, or staggered, arrangement of these lamellae, bound by biopolymer layers only ≈25 nm thick that occupy only a small fraction of the volume. For engineering applications, however, a failure probability of ≤10^-6 is generally required. To guarantee it, the type of probability density function (pdf) of strength, including its tail, must be determined. This objective, not pursued previously, is hardly achievable by experiments alone, since >10^8 tests of specimens would be needed. Here we outline a statistical model of strength that resembles a fishnet pulled diagonally, captures the tail of the pdf of strength and, importantly, allows analytical safety assessments of nacreous materials. The analysis shows that, in terms of safety, the imbricated lamellar structure provides a major additional advantage: a ≈10% strength increase at a tail failure probability of 10^-6 and a 1 to 2 orders of magnitude decrease of the tail probability at fixed stress. Another advantage is that a high scatter of microstructure properties diminishes the strength difference between the mean and the probability tail, compared with the weakest-link model. These advantages of nacre-like materials are here justified analytically and supported by millions of Monte Carlo simulations.

  20. Functional Modeling of Perspectives on the Example of Electric Energy Systems

    DEFF Research Database (Denmark)

    Heussen, Kai

    2009-01-01

    The integration of energy systems is a proven approach to gain higher overall energy efficiency. Invariably, this integration will come with increasing technical complexity through the diversification of energy resources and their functionality. With the integration of more fluctuating renewable energies, higher system flexibility will also be necessary. One of the challenges ahead is the design of control architecture to enable the flexibility and to handle the diversity. This paper presents an approach to model heterogeneous energy systems and their control on the basis of purpose and functions, which enables a reflection on system integration requirements independent of particular technologies. The results are illustrated on examples related to electric energy systems.

  1. Sensitivity analysis of limit state functions for probability-based plastic design

    Science.gov (United States)

    Frangopol, D. M.

    1984-01-01

    The evaluation of the total probability of plastic collapse failure P sub f for a highly redundant structure of random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds to this probability requires the use of second-moment algebra, which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between upper and lower bounds of P sub f is now in its final stage of development. The sensitivity of the resulting bounds of P sub f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.

  2. Nonlocal exchange and kinetic-energy density functionals for electronic systems

    International Nuclear Information System (INIS)

    Glossman, M.D.; Rubio, A.; Balbas, L.C.; Alonso, J.A.

    1992-01-01

    The nonlocal weighted density approximation (WDA) to the exchange and kinetic-energy functionals of many electron systems proposed several years ago by Alonso and Girifalco is used to compute, within the framework of density functional theory, the ground-state electronic density and total energy of noble gas atoms and of neutral jellium-like sodium clusters containing up to 500 atoms. These results are compared with analogous calculations using the well known Thomas-Fermi-Weizsacker-Dirac (TFWD) approximations for the kinetic (TFW) and exchange (D) energy density functionals. An outstanding improvement of the total and exchange energies, of the density at the nucleus and of the expectation values is obtained for atoms within the WDA scheme. For sodium clusters the authors notice a sizeable contribution of the nonlocal effects to the total energy and to the density profiles. In the limit of very large clusters these effects should affect the surface energy of the bulk metal

  3. Diffraction at collider energies

    International Nuclear Information System (INIS)

    Frankfurt, L.L.

    1992-01-01

    Lessons from ''soft'' hadron physics are used to explain (a) the feasibility of observing and investigating color transparency and color opacity effects at colliders; (b) the significant probability and specific features of hard diffractive processes; and (c) the feasibility of investigating components of parton wave functions of hadrons with a minimal number of constituents. This new physics will become more important with increasing collision energy

  4. Expanded explorations into the optimization of an energy function for protein design

    Science.gov (United States)

    Huang, Yao-ming; Bystroff, Christopher

    2014-01-01

    Nature possesses a secret formula for the energy as a function of the structure of a protein. In protein design, approximations are made to both the structural representation of the molecule and to the form of the energy equation, such that the existence of a general energy function for proteins is by no means guaranteed. Here we present new insights towards the application of machine learning to the problem of finding a general energy function for protein design. Machine learning requires the definition of an objective function, which carries with it the implied definition of success in protein design. We explored four functions, consisting of two functional forms, each with two criteria for success. Optimization was carried out by a Monte Carlo search through the space of all variable parameters. Cross-validation of the optimized energy function against a test set gave significantly different results depending on the choice of objective function, pointing to the relative correctness of the built-in assumptions. Novel energy cross-terms correct for the observed non-additivity of energy terms and an imbalance in the distribution of predicted amino acids. This paper expands on the work presented at ACM-BCB, Orlando, FL, October 2012. PMID:24384706
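
    The Monte Carlo parameter search described above can be illustrated on a toy problem. Everything here is hypothetical (a linear energy over three made-up features, and an objective that counts misranked native/decoy pairs), not the authors' actual energy terms or objective functions:

```python
import random

random.seed(0)
FEATURES = 3

def energy(weights, feats):
    """Hypothetical linear energy: E = sum_k w_k * f_k."""
    return sum(w * f for w, f in zip(weights, feats))

def objective(weights, natives, decoys):
    """Fraction of native/decoy pairs the energy ranks incorrectly (lower is better)."""
    e_nat = [energy(weights, f) for f in natives]
    e_dec = [energy(weights, f) for f in decoys]
    bad = sum(1 for en in e_nat for ed in e_dec if en >= ed)
    return bad / (len(e_nat) * len(e_dec))

# Toy data: "native" feature vectors centred at -1, "decoys" at +1.
natives = [[random.gauss(-1.0, 1.0) for _ in range(FEATURES)] for _ in range(20)]
decoys = [[random.gauss(+1.0, 1.0) for _ in range(FEATURES)] for _ in range(20)]

weights = [0.0] * FEATURES
best = objective(weights, natives, decoys)
for _ in range(2000):
    trial = [w + random.gauss(0.0, 0.1) for w in weights]
    score = objective(trial, natives, decoys)
    if score <= best:          # greedy Monte Carlo acceptance
        weights, best = trial, score
```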

  5. [A probability wave theory on the ion movement across cell membrane].

    Science.gov (United States)

    Zhang, Hui; Xu, Jiadong; Niu, Zhongqi

    2007-04-01

    The quantity of ions crossing the channel of the cell membrane determines the life state of the cell. Existing theoretical analyses of the bio-effects of electromagnetic fields (EMF) do not unveil the relationship between the EMF exerted on the cell and the ionic quantity across the cell membrane. Based on the cell construction, the existing theoretical analyses and experimental results, an ionic probability wave theory is proposed in this paper to explain the biological window-effects of electromagnetic waves. The theory regards the membrane channel as a periodic potential barrier and gives a physical view of ion movement across the cell membrane. The theory revises the relationship between the ion's energy in the cell channel and the frequency of the exerted EMF. After application of the concept of the wave function, the ionic probability across the cell membrane is given by the methods of quantum mechanics. The numerical results analyze the physical factors that influence the ion's movement across the cell membrane. These results show that the theory can explain the phenomenon of biological window-effects.

  6. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    Science.gov (United States)

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By evoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
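
    The tuning idea reduces, in the simplest case, to fitting rates so that a model summary statistic matches the data. A toy sketch for a two-state channel C <-> O (the rates, grid and "measurement" are hypothetical; the real method fits full probability density functions, not a single open probability):

```python
def p_open(k_co, k_oc):
    """Stationary open probability of a two-state channel C <-> O
    with opening rate k_co and closing rate k_oc."""
    return k_co / (k_co + k_oc)

measured = 0.25                        # pseudo-experimental open probability
k_oc = 3.0                             # closing rate, assumed known

def cost(k_co):
    """Squared mismatch between model prediction and (pseudo) experimental data."""
    return (p_open(k_co, k_oc) - measured) ** 2

candidates = [i / 10.0 for i in range(1, 101)]
k_co_best = min(candidates, key=cost)  # -> 1.0, since 0.25 = 1 / (1 + 3)
```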

  7. An accurate energy-range relationship for high-energy electron beams in arbitrary materials

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Brahme, A.

    1994-01-01

    A general analytical energy-range relationship has been derived to relate the practical range, R_p, to the most probable energy, E_p, of incident electron beams in the range 1 to 50 MeV and above, for absorbers of any atomic number. In the present study only Monte Carlo data determined with the new ITS.3 code have been employed. The standard deviations of the mean deviation from the Monte Carlo data at any energy are about 0.10, 0.12, 0.04, 0.11, 0.04, 0.03 and 0.02 mm for Be, C, H2O, Al, Cu, Ag and U, respectively, and the relative standard deviation of the mean is about 0.5% for all materials. The fitting program gives some priority to water-equivalent materials, which explains the low standard deviation for water. A small error in the fall-off slope can give a different value for R_p. We describe a new method which reduces the uncertainty in the R_p determination by fitting an odd function to the descending portion of the depth-dose curve in order to accurately determine the tangent at the inflection point, and thereby the practical range. An approximate inverse relation is given expressing the most probable energy of an electron beam as a function of the practical range. The resultant relative standard error of the energy is less than 0.7%, and the maximum energy error ΔE_p is less than 0.3 MeV. (author)
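
    The tangent-at-inflection construction for the practical range can be sketched numerically. The logistic fall-off below is a synthetic stand-in for Monte Carlo depth-dose data, not the odd-function fit the authors use:

```python
import math

def dose(z, r50=5.0, slope=2.0):
    """Synthetic sigmoid fall-off standing in for a measured depth-dose curve."""
    return 1.0 / (1.0 + math.exp(slope * (z - r50)))

# Tabulate on a fine grid and locate the inflection point (steepest descent).
h = 0.001
zs = [i * h for i in range(10000)]
ds = [dose(z) for z in zs]
grads = [(ds[i + 1] - ds[i - 1]) / (2.0 * h) for i in range(1, len(zs) - 1)]
i_min = min(range(len(grads)), key=lambda i: grads[i])
z_inf, d_inf, g_inf = zs[i_min + 1], ds[i_min + 1], grads[i_min]

# Extrapolate the tangent at the inflection point to zero dose: the practical range.
r_p = z_inf - d_inf / g_inf
```

    For this logistic curve the inflection sits at z = 5 with dose 0.5 and slope -0.5, so the tangent reaches zero at r_p = 6.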

  8. More efficient integrated safeguards by applying a reasonable detection probability for maintaining low presence probability of undetected nuclear proliferating activities

    International Nuclear Information System (INIS)

    Otsuka, Naoto

    2013-01-01

    Highlights: • A theoretical foundation is presented for more efficient Integrated Safeguards (IS). • The probability of undetected nuclear proliferation activities should be maintained low. • For nations under IS, the probability of starting proliferation activities is very low. • This fact can decrease the detection probability of IS by dozens of percentage points. • The cost of IS per nation can be cut down by reducing inspection frequencies, etc. - Abstract: A theoretical foundation is presented for implementing more efficiently the present International Atomic Energy Agency (IAEA) integrated safeguards (IS) on the basis of a fuzzy evaluation of the probability that the evaluated nation will continue peaceful activities. It is shown that by determining the presence probability of undetected nuclear proliferating activities, nations under IS can be maintained at acceptably low proliferation risk levels even if the detection probability of current IS is decreased by dozens of percentage points from the present value. This makes it possible to reduce inspection frequency and the number of collected samples, allowing the IAEA to cut costs per nation. This will contribute to further promotion and application of IS to more nations by the IAEA, and more efficient utilization of IAEA resources from the viewpoint of the whole IS framework

  9. Probability shapes perceptual precision: A study in orientation estimation.

    Science.gov (United States)

    Jabar, Syaheed B; Anderson, Britt

    2015-12-01

    Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention." (c) 2015 APA, all rights reserved.
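
    The kurtosis index used here is straightforward to compute. A sketch with synthetic error samples: a Gaussian baseline versus a "peakier" mixture, as might arise for high-probability stimuli (the distributions are illustrative, not the study's data):

```python
import math
import random

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (0 for a Gaussian)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

random.seed(1)
gauss_errors = [random.gauss(0.0, 5.0) for _ in range(20000)]
# A mixture that is sharply peaked with heavy tails: mostly precise responses,
# occasionally large errors.
peaky_errors = [random.gauss(0.0, 1.0 if random.random() < 0.8 else 10.0)
                for _ in range(20000)]

assert abs(excess_kurtosis(gauss_errors)) < 0.2   # approximately Gaussian shape
assert excess_kurtosis(peaky_errors) > 3.0        # markedly non-Gaussian
```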

  10. Liver Function Status in some Nigerian Children with Protein Energy ...

    African Journals Online (AJOL)

    Objective: To ascertain the functional status of the liver in Nigerian children with protein energy malnutrition. Materials and Methods: Liver function tests were performed on a total of 88 children with protein energy malnutrition (PEM). These were compared with 22 apparently well-nourished children who served as controls.

  11. Time dependent and asymptotic neutron number probability distribution calculation using discrete Fourier transform

    International Nuclear Information System (INIS)

    Humbert, Ph.

    2005-01-01

    In this paper we consider the probability distribution of neutrons in a multiplying assembly. The problem is studied using a space independent one group neutron point reactor model without delayed neutrons. We recall the generating function methodology and analytical results obtained by G.I. Bell when the c 2 approximation is used and we present numerical solutions in the general case, without this approximation. The neutron source induced distribution is calculated using the single initial neutron distribution which satisfies a master (Kolmogorov backward) equation. This equation is solved using the generating function method. The generating function satisfies a differential equation and the probability distribution is derived by inversion of the generating function. Numerical results are obtained using the same methodology where the generating function is the Fourier transform of the probability distribution. Discrete Fourier transforms are used to calculate the discrete time dependent distributions and continuous Fourier transforms are used to calculate the asymptotic continuous probability distributions. Numerical applications are presented to illustrate the method. (author)
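
    The inversion step, recovering a probability distribution from its generating function via a discrete Fourier transform, can be sketched in a few lines. Here a Poisson generating function stands in for the neutron-number generating function, since its probabilities are known in closed form:

```python
import cmath
import math

def pgf_to_probs(G, N):
    """Sample the generating function G(z) = sum_n p_n z^n on N points of the
    unit circle, then apply an inverse DFT to recover p_0 .. p_{N-1}."""
    samples = [G(cmath.exp(2j * math.pi * k / N)) for k in range(N)]
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * n / N)
                    for k, s in enumerate(samples)) / N)
            for n in range(N)]

lam = 2.0

def poisson_gf(z):
    return cmath.exp(lam * (z - 1.0))   # G(z) = e^{lambda (z - 1)}

probs = pgf_to_probs(poisson_gf, 64)

# The recovered values match the closed-form Poisson pmf; aliasing is
# negligible because the tail beyond n = 63 is tiny.
for n in range(5):
    exact = math.exp(-lam) * lam ** n / math.factorial(n)
    assert abs(probs[n] - exact) < 1e-9
```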

  12. A note on the transition probability over Csup(*)-algebras

    International Nuclear Information System (INIS)

    Alberti, P.M.; Karl-Marx-Universitaet, Leipzig

    1983-01-01

    The algebraic structure of Uhlmann's transition probability between mixed states on unital Csup(*)-algebras is analyzed. Several improvements of methods to calculate the transition probability are fixed, examples are given (e.g., the case of quasi-local Csup(*)-algebras is dealt with) and two more functional characterizations are proved in general. (orig.)

  13. Comment on "Measurements without probabilities in the final state proposal"

    Science.gov (United States)

    Cohen, Eliahu; Nowakowski, Marcin

    2018-04-01

    The final state proposal [G. T. Horowitz and J. M. Maldacena, J. High Energy Phys. 04 (2004) 008, 10.1088/1126-6708/2004/04/008] is an attempt to relax the apparent tension between string theory and semiclassical arguments regarding the unitarity of black hole evaporation. Authors Bousso and Stanford [Phys. Rev. D 89, 044038 (2014), 10.1103/PhysRevD.89.044038] analyze thought experiments where an infalling observer first verifies the entanglement between early and late Hawking modes and then verifies the interior purification of the same Hawking particle. They claim that "probabilities for outcomes of these measurements are not defined" and therefore suggest that "the final state proposal does not offer a consistent alternative to the firewall hypothesis." We show, in contrast, that one may define all the relevant probabilities based on the so-called ABL rule [Y. Aharonov, P. G. Bergmann, and J. L. Lebowitz, Phys. Rev. 134, B1410 (1964), 10.1103/PhysRev.134.B1410], which is better suited for this task than the decoherence functional. We thus assert that the analysis of Bousso and Stanford cannot yet rule out the final state proposal.
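
    The ABL rule itself is simple to apply. A sketch for a single qubit, pre-selected in |+> and post-selected in |0>, with an intermediate measurement in the computational basis (the states are illustrative, not the black-hole modes of the paper):

```python
import math

def abl_probs(psi, phi, basis):
    """ABL rule for pre-selected |psi> and post-selected |phi>:
    P(a) = |<phi|a><a|psi>|^2 / sum_b |<phi|b><b|psi>|^2."""
    amps = [sum(p.conjugate() * a_i for p, a_i in zip(phi, a))
            * sum(a_i.conjugate() * s for a_i, s in zip(a, psi))
            for a in basis]
    weights = [abs(z) ** 2 for z in amps]
    total = sum(weights)
    return [w / total for w in weights]

inv = 1.0 / math.sqrt(2.0)
psi = [inv, inv]                  # pre-selected state |+>
phi = [1.0, 0.0]                  # post-selected state |0>
basis = [[1.0, 0.0], [0.0, 1.0]]  # intermediate measurement basis {|0>, |1>}
p = abl_probs(psi, phi, basis)    # -> [1.0, 0.0]
```

    With post-selection on |0>, the outcome |1> is assigned probability zero: the amplitude <0|1><1|+> vanishes.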

  14. Inferring Parametric Energy Consumption Functions at Different Software Levels

    DEFF Research Database (Denmark)

    Liqat, Umer; Georgiou, Kyriakos; Kerrison, Steve

    2016-01-01

    The static estimation of the energy consumed by program executions is an important challenge, which has applications in program optimization and verification, and is instrumental in energy-aware software development. Our objective is to estimate such energy consumption in the form of functions on the input data sizes of programs. We have developed a tool for experimentation with static analysis which infers such energy functions at two levels, the instruction set architecture (ISA) and the intermediate code (LLVM IR) levels, and reflects it upwards to the higher source code level. This required the development of a translation from LLVM IR to an intermediate representation and its integration with existing components, a translation from ISA to the same representation, a resource analyzer, an ISA-level energy model, and a mapping from this model to LLVM IR. The approach has been applied to programs...

  15. Toward a generalized probability theory: conditional probabilities

    International Nuclear Information System (INIS)

    Cassinelli, G.

    1979-01-01

    The main mathematical object of interest in the quantum logic approach to the foundations of quantum mechanics is the orthomodular lattice and a set of probability measures, or states, defined by the lattice. This mathematical structure is studied per se, independently from the intuitive or physical motivation of its definition, as a generalized probability theory. It is thought that the building-up of such a probability theory could eventually throw light on the mathematical structure of Hilbert-space quantum mechanics as a particular concrete model of the generalized theory. (Auth.)

  16. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    Science.gov (United States)

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

    A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbyte of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132 column printer. A graphics adapter and color display are optional.
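
    The per-component statistics and the independent-aggregation step follow from standard triangular-distribution formulas. A sketch (the component estimates are made up; note the TRIAGG convention that F95 is the value exceeded with probability 0.95, i.e. the inverse CDF at 0.05):

```python
import math

def tri_stats(a, m, b):
    """Mean and standard deviation of a triangular(min a, mode m, max b) variable."""
    mean = (a + m + b) / 3.0
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18.0
    return mean, math.sqrt(var)

def tri_fractile(a, m, b, p):
    """Inverse CDF: the value x with P(X <= x) = p."""
    fc = (m - a) / (b - a)
    if p <= fc:
        return a + math.sqrt(p * (b - a) * (m - a))
    return b - math.sqrt((1.0 - p) * (b - a) * (b - m))

# One hypothetical component: F95 = value exceeded with probability 0.95.
f95 = tri_fractile(10.0, 30.0, 100.0, 0.05)

# Aggregation of two independent components: means and variances add.
m1, s1 = tri_stats(10.0, 30.0, 100.0)
m2, s2 = tri_stats(5.0, 20.0, 60.0)
agg_mean, agg_sd = m1 + m2, math.sqrt(s1 ** 2 + s2 ** 2)
```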

  17. Joint probability distributions for a class of non-Markovian processes.

    Science.gov (United States)

    Baule, A; Friedrich, R

    2005-02-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H. C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  18. Economic modelling of energy services: Rectifying misspecified energy demand functions

    International Nuclear Information System (INIS)

    Hunt, Lester C.; Ryan, David L.

    2015-01-01

    estimation of an aggregate energy demand function for the UK with data over the period 1960–2011. - Highlights: • Introduces explicit modelling of demands for energy services • Derives estimable energy demand equations from energy service demands • Demonstrates the implicit misspecification with typical energy demand equations • Empirical implementation using aggregate and individual energy source data • Illustrative empirical example using UK data and energy efficiency modelling

  19. A new expression of the probability distribution in Incomplete Statistics and fundamental thermodynamic relations

    International Nuclear Information System (INIS)

    Huang Zhifu; Lin Bihong; Chen Jincan

    2009-01-01

    In order to overcome the limitations of the original expression of the probability distribution appearing in literature of Incomplete Statistics, a new expression of the probability distribution is derived, where the Lagrange multiplier β introduced here is proved to be identical with that introduced in the second and third choices for the internal energy constraint in Tsallis' statistics and to be just equal to the physical inverse temperature. It is expounded that the probability distribution described by the new expression is invariant through uniform translation of the energy spectrum. Moreover, several fundamental thermodynamic relations are given and the relationship between the new and the original expressions of the probability distribution is discussed.

  20. Estimating the Probability of Wind Ramping Events: A Data-driven Approach

    OpenAIRE

    Wang, Cheng; Wei, Wei; Wang, Jianhui; Qiu, Feng

    2016-01-01

    This letter proposes a data-driven method for estimating the probability of wind ramping events without exploiting the exact probability distribution function (PDF) of wind power. Actual wind data validates the proposed method.
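
    A minimal version of such a distribution-free estimate is an empirical exceedance frequency over observed power changes. The series and threshold below are synthetic, chosen only for illustration:

```python
def ramp_probability(power, threshold):
    """Empirical probability that the step-to-step change in power meets or
    exceeds a threshold; no PDF of wind power is assumed."""
    diffs = [abs(b - a) for a, b in zip(power, power[1:])]
    return sum(d >= threshold for d in diffs) / len(diffs)

series = [10, 12, 30, 28, 5, 6, 25, 24]   # synthetic hourly wind power (MW)
p_ramp = ramp_probability(series, 15)      # 3 of the 7 hourly changes are >= 15 MW
```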

  1. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    Science.gov (United States)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  2. Electron energy-distribution functions in gases

    International Nuclear Information System (INIS)

    Pitchford, L.C.

    1981-01-01

    Numerical calculation of the electron energy distribution functions in the regime of drift tube experiments is discussed. The discussion is limited to constant applied fields and values of E/N (ratio of electric field strength to neutral density) low enough that electron growth due to ionization can be neglected

  3. The probability of a tornado missile hitting a target

    International Nuclear Information System (INIS)

    Goodman, J.; Koch, J.E.

    1983-01-01

    It is shown that tornado missile transportation is a diffusion Markovian process. Therefore, the Green's function method is applied for the estimation of the probability of hitting a unit target area. This probability is expressed through a joint density of tornado intensity and path area, a probability of tornado missile injection and a tornado missile height distribution. (orig.)

  4. Geometric modeling in probability and statistics

    CERN Document Server

    Calin, Ovidiu

    2014-01-01

    This book covers topics of Informational Geometry, a field which deals with the differential geometric study of the manifold probability density functions. This is a field that is increasingly attracting the interest of researchers from many different areas of science, including mathematics, statistics, geometry, computer science, signal processing, physics and neuroscience. It is the authors’ hope that the present book will be a valuable reference for researchers and graduate students in one of the aforementioned fields. This textbook is a unified presentation of differential geometry and probability theory, and constitutes a text for a course directed at graduate or advanced undergraduate students interested in applications of differential geometry in probability and statistics. The book contains over 100 proposed exercises meant to help students deepen their understanding, and it is accompanied by software that is able to provide numerical computations of several information geometric objects. The reader...

  5. Probability of detection - Comparative study of computed and film radiography for high-energy applications

    International Nuclear Information System (INIS)

    Venkatachalam, R.; Venugopal, M.; Prasad, T.

    2007-01-01

    Full text of publication follows: Suitability of computed radiography (CR) with Ir-192, Co-60 and up to 9 MeV x-rays for weld inspections is of importance to many heavy engineering and aerospace industries. CR is preferred because of lesser exposure and processing time as compared to film-based radiography, and digital images offer other advantages such as image enhancement, quantitative measurements and easier archival. This paper describes systematic experimental approaches and image quality metrics to compare the imaging performance of CR with film-based radiography. Experiments were designed using six-sigma methodology to validate the performance of CR for steel thickness up to 160 mm with Ir-192, Co-60 and x-ray energies varying from 100 kV up to 9 MeV. Weld specimens with defects such as lack of fusion, penetration, cracks, concavity, and porosities were studied for evaluating radiographic sensitivity and the imaging performance of the system. Attempts were also made to quantify the probability of detection using specimens with artificial and natural defects for various experimental conditions, and the results were compared with film-based systems. (authors)

  6. Probability of burn-through of defective 13 kA splices at increased energy levels

    CERN Document Server

    Verweij, A

    2011-01-01

    In many 13 kA splices in the machine there is a lack of bonding between the superconducting cable and the stabilising copper, along with a bad contact between the bus stabiliser and the splice stabiliser. In case of a quench of such a defective splice, the current cannot bypass the cable through the copper, leading to excessive local heating of the cable. This may result in a thermal runaway and burn-through of the cable in a time smaller than the time constant of the circuit. Since it is not possible to protect against this fast thermal runaway, one has to limit the current to a level that is small enough that a burn-through cannot occur. Prompt quenching of the joint, and quenching due to heat propagation through the bus and through the helium, are considered. Probabilities for joint burn-through are given for the RB circuit for beam energies of 3.5, 4 and 4.5 TeV, and a decay time constant of the RB circuit of 50 and 68 s.

  7. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
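
    The ingredients named above, a Gaussian-process surrogate and an acquisition function, can be sketched compactly. This is a generic GP-UCB loop on a toy 1-D "log-posterior", not the authors' implementation; the kernel, length scale and UCB constant are all assumptions:

```python
import numpy as np

def target(x):
    """Toy stand-in for an expensive-to-evaluate log-posterior."""
    return -(x - 0.3) ** 2 + 0.05 * np.sin(20.0 * x)

def rbf(a, b, ls=0.1):
    """Squared-exponential kernel with unit amplitude (assumed hyperparameters)."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 201)
X = list(rng.choice(grid, size=3))       # initial design points
Y = [target(x) for x in X]

for _ in range(15):
    Xa, Ya = np.array(X), np.array(Y)
    K = rbf(Xa, Xa) + 1e-8 * np.eye(len(Xa))        # jitter for stability
    Ks = rbf(grid, Xa)
    mu = Ks @ np.linalg.solve(K, Ya)                # GP posterior mean on the grid
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))   # acquisition: UCB
    x_next = grid[int(np.argmax(ucb))]              # next training point
    if x_next in X:                                 # avoid exact duplicates
        x_next = rng.choice(grid)
    X.append(x_next)
    Y.append(target(x_next))

best = X[int(np.argmax(Y))]                         # current maximizer estimate
```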

  8. Characteristic length of the knotting probability revisited

    International Nuclear Information System (INIS)

    Uehara, Erica; Deguchi, Tetsuo

    2015-01-01

    We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(−N/N_K), where the estimates of the parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA. (paper)
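
    Extracting a characteristic length from exponentially decaying knotting probabilities amounts to a log-linear fit. A sketch on synthetic data (the prefactor in front of the exponential is taken constant here, which the real analysis does not assume):

```python
import math

# Synthetic knotting probabilities decaying as P(N) = C * exp(-N / N_K).
N_K_TRUE = 300.0
Ns = list(range(100, 1001, 100))
Ps = [0.5 * math.exp(-n / N_K_TRUE) for n in Ns]

# Least-squares fit of log P(N) = log C - N / N_K.
ys = [math.log(p) for p in Ps]
n = len(Ns)
xbar, ybar = sum(Ns) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(Ns, ys))
         / sum((x - xbar) ** 2 for x in Ns))
n_k_est = -1.0 / slope     # recovers the characteristic length
```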

  9. Probability Weighting as Evolutionary Second-best

    OpenAIRE

    Herold, Florian; Netzer, Nick

    2011-01-01

    The economic concept of the second-best involves the idea that multiple simultaneous deviations from a hypothetical first-best optimum may be optimal once the first-best itself can no longer be achieved, since one distortion may partially compensate for another. Within an evolutionary framework, we translate this concept to behavior under uncertainty. We argue that the two main components of prospect theory, the value function and the probability weighting function, are complements in the sec...

  10. Examining the association between male circumcision and sexual function: evidence from a British probability survey.

    Science.gov (United States)

    Homfray, Virginia; Tanton, Clare; Mitchell, Kirstin R; Miller, Robert F; Field, Nigel; Macdowall, Wendy; Wellings, Kaye; Sonnenberg, Pam; Johnson, Anne M; Mercer, Catherine H

    2015-07-17

    Despite biological advantages of male circumcision in reducing HIV/sexually transmitted infection acquisition, concern is often expressed that it may reduce sexual enjoyment and function. We examine the association between circumcision and sexual function among sexually active men in Britain using data from Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3). Natsal-3 asked about circumcision and included a validated measure of sexual function, the Natsal-SF, which takes into account not only sexual difficulties but also the relationship context and overall level of satisfaction. A stratified probability survey of 6293 men and 8869 women aged 16-74 years, resident in Britain, was undertaken in 2010-2012, using computer-assisted face-to-face interviewing with computer-assisted self-interview for the more sensitive questions. Logistic regression was used to calculate odds ratios (ORs) to examine the association between reporting male circumcision and aspects of sexual function among sexually active men (n = 4816). The prevalence of male circumcision in Britain was 20.7% [95% confidence interval (CI): 19.3-21.8]. There was no association between male circumcision and being in the lowest quintile of scores for the Natsal-SF, an indicator of poorer sexual function (adjusted OR: 0.95, 95% CI: 0.76-1.18). Circumcised men were as likely as uncircumcised men to report the specific sexual difficulties asked about in Natsal-3, except that a larger proportion of circumcised men reported erectile difficulties. This association was of borderline statistical significance after adjusting for age and relationship status (adjusted OR: 1.27, 95% CI: 0.99-1.63). Data from a large, nationally representative British survey suggest that circumcision is not associated with men's overall sexual function at a population level.

  11. Time-averaged probability density functions of soot nanoparticles along the centerline of a piloted turbulent diffusion flame using a scanning mobility particle sizer

    KAUST Repository

    Chowdhury, Snehaunshu; Boyette, Wesley; Roberts, William L.

    2017-01-01

    In this study, we demonstrate the use of a scanning mobility particle sizer (SMPS) as an effective tool to measure the probability density functions (PDFs) of soot nanoparticles in turbulent flames. Time-averaged soot PDFs necessary for validating

  12. Damage energy functions for compounds and alloys

    International Nuclear Information System (INIS)

    Parkin, D.M.; Coulter, C.A.

    1977-01-01

    The concept of the damage energy of an energetic primary knock-on atom in a material is a central component in the procedure used to calculate dpa for metals exposed to neutron and charged particle radiation. Coefficients for analytic fits to the calculated damage energy functions are given for Al2O3, Si3N4, Y2O3, and NbTi. Damage efficiencies are given for Al2O3

  13. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combining the input distributions is defined by the user by introducing the appropriate FORTRAN statements into a designated subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined.
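    The STADIC procedure — sample each input distribution, push the samples through the user-supplied combination function, then summarize the result — can be sketched in a few lines. The combination function and the three input distributions below are illustrative stand-ins, not STADIC's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000  # number of Monte Carlo trials

# Illustrative input distributions for a combined quantity g = a * b + c.
a = rng.lognormal(mean=0.0, sigma=0.5, size=n)
b = rng.uniform(1.0, 3.0, size=n)
c = rng.normal(0.0, 0.1, size=n)

combined = a * b + c  # the user-supplied combination function

# Output summary: mean, standard deviation, and a 90% confidence interval.
mean = combined.mean()
std = combined.std(ddof=1)
lo, hi = np.percentile(combined, [5, 95])
print(mean, std, lo, hi)
```

    For this choice of inputs the mean is close to E[a]·E[b] = exp(0.125)·2 ≈ 2.27, which the sample estimate reproduces.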

  14. Measurement of low energy neutrino absorption probability in thallium 205

    International Nuclear Information System (INIS)

    Freedman, M.S.

    1986-01-01

    A major aspect of the P-P neutrino flux determination using thallium 205 is the very difficult problem of experimentally demonstrating the neutrino reaction cross section with about 10% accuracy. One will soon be able to completely strip the electrons from atomic thallium 205 and to maintain the bare nucleus in this state in the heavy storage ring to be built at GSI Darmstadt. This nucleus can decay by emitting a beta-minus particle into the bound K-level of the daughter lead 205 ion as the only energetically open decay channel, (plus, of course, an antineutrino). This single channel beta decay explores the same nuclear wave functions of initial and final states as does the neutrino capture in atomic thallium 205, and thus its probability or rate is governed by the same nuclear matrix elements that affect both weak interactions. Measuring the rate of accumulation of lead 205 ions in the circulating beam of thallium 205 ions gives directly the cross section of the neutrino capture reaction. The calculations of the expected rates under realistic experimental conditions will be shown to be very favorable for the measurement. A special calibration experiment to verify this method and check the theoretical calculations will be suggested. Finally, the neutrino cross section calculation based on the observed rate of the single channel beta-minus decay reaction will be shown. Demonstrating bound state beta decay may be the first verification of the theory of this very important process that influences beta decay rates of several isotopes in stellar interiors, e.g., Re-187, that play important roles in geologic and cosmologic dating and nucleosynthesis. 21 refs., 2 figs

  15. On Probability Leakage

    OpenAIRE

    Briggs, William M.

    2012-01-01

    The probability leakage of model M with respect to evidence E is defined. Probability leakage is a kind of model error. It occurs when M implies that events y, which are impossible given E, have positive probability. Leakage does not imply model falsification. Models with probability leakage cannot be calibrated empirically. Regression models, which are ubiquitous in statistical practice, often evince probability leakage.
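    A concrete instance of leakage, on assumed data: a plain normal model fitted to strictly positive observations assigns positive probability to the impossible event y < 0. SciPy is assumed here for the normal CDF; the data-generating distribution is invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Evidence E: the observations are strictly positive (e.g., durations).
y = rng.gamma(shape=2.0, scale=0.4, size=500)

# Model M: a normal distribution fitted by moments.
mu, sigma = y.mean(), y.std(ddof=1)

# Probability leakage: P_M(y < 0), an event impossible under E.
leakage = stats.norm.cdf(0.0, loc=mu, scale=sigma)
print(leakage)
```

    For this skewed positive sample the fitted normal leaks several percent of its mass below zero, which is exactly the calibration failure the abstract describes.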

  16. Quantum theory with an energy operator defined as a quartic form of the momentum

    Energy Technology Data Exchange (ETDEWEB)

    Bezák, Viktor, E-mail: bezak@fmph.uniba.sk

    2016-09-15

    Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive, viewpoint. Using the Yurke–Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke–Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.

  17. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)

  18. Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.

    Science.gov (United States)

    Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I

    2007-02-01

    We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R(max), and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy epsilon(R) = R. The effective chemical potential mu governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter beta decreases. Interestingly, beta*mu is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R(g), where R(g) is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed in Fermi-like subdistributions for different atomic types tau, F(tau)(r), with Sigma(tau)F(tau)(r) = F(r), which depend on two additional parameters mu(tau) and h(tau). The chemical potential mu(tau) affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h(tau), which appears in a type-dependent atomic effective energy, epsilon(tau)(r) = h(tau)r, and is strongly correlated to available hydrophobicity scales. Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, or epsilon(tau)*(r) = h(tau)*r^alpha, in which case a correlation with hydrophobicity

  19. Non-equilibrium random matrix theory. Transition probabilities

    International Nuclear Information System (INIS)

    Pedro, Francisco Gil; Westphal, Alexander

    2016-06-01

    In this letter we present an analytic method for calculating the transition probability between two random Gaussian matrices with given eigenvalue spectra in the context of Dyson Brownian motion. We show that in the Coulomb gas language, in large N limit, memory of the initial state is preserved in the form of a universal linear potential acting on the eigenvalues. We compute the likelihood of any given transition as a function of time, showing that as memory of the initial state is lost, transition probabilities converge to those of the static ensemble.

  20. Non-equilibrium random matrix theory. Transition probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Pedro, Francisco Gil [Univ. Autonoma de Madrid (Spain). Dept. de Fisica Teorica; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie

    2016-06-15

    In this letter we present an analytic method for calculating the transition probability between two random Gaussian matrices with given eigenvalue spectra in the context of Dyson Brownian motion. We show that in the Coulomb gas language, in large N limit, memory of the initial state is preserved in the form of a universal linear potential acting on the eigenvalues. We compute the likelihood of any given transition as a function of time, showing that as memory of the initial state is lost, transition probabilities converge to those of the static ensemble.
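    The Dyson Brownian motion underlying this calculation can also be simulated directly: start from a deterministic spectrum and add symmetric Gaussian increments, watching the eigenvalues relax toward the static ensemble. The sketch below is a plain numerical illustration; the matrix size, time step, and normalization are arbitrary choices, not the paper's conventions.

```python
import numpy as np

rng = np.random.default_rng(11)
Nmat = 50           # matrix size
dt, steps = 1e-3, 2000

# Initial condition: a deterministic matrix with eigenvalues spread on [-1, 1].
A = np.diag(np.linspace(-1.0, 1.0, Nmat))

# Symmetric (GOE-like) Brownian increments; after time t = steps * dt the
# added noise alone would give a semicircle of radius about 2 * sqrt(t).
for _ in range(steps):
    G = rng.normal(size=(Nmat, Nmat)) * np.sqrt(dt / Nmat)
    A += (G + G.T) / np.sqrt(2.0)

eigs = np.linalg.eigvalsh(A)  # memory of the initial spectrum broadens over time
print(eigs.min(), eigs.max())
```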

  1. Boundary conditions of the exact impulse wave function

    International Nuclear Information System (INIS)

    Gravielle, M.; Miraglia, J.E.

    1997-01-01

    The behavior of the exact impulse wave function is investigated at intermediate and high impact energies. Numerical details of the wave function and its perturbative potential are reported. We conclude that the impulse wave function does not tend to the proper Coulomb asymptotic limit. For electron capture, however, it is shown that the impulse wave function produces reliable probabilities even for intermediate velocities and symmetric collision systems. copyright 1997 The American Physical Society

  2. On Hybrid Energy Utilization in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mohammad Tala’t

    2017-11-01

    Full Text Available In a wireless sensor network (WSN), many applications have limited energy resources for data transmission. In order to accomplish better green communication for WSN, a hybrid energy scheme can supply a more reliable energy source. In this article, hybrid energy utilization—which consists of a constant energy source and solar harvested energy—is considered for WSN. To minimize constant energy usage from the hybrid source, a Markov decision process (MDP) is designed to find the optimal transmission policy. With a finite packet buffer and a finite battery size, an MDP model is presented to define the states, actions, state transition probabilities, and the cost function, including the cost values for all actions. A weighted sum of the constant energy source consumption and the packet dropping probability (PDP) is adopted as the cost value, enabling us to find the optimal solution for balancing the minimization of the constant energy source utilization against the PDP using a value iteration algorithm. As shown in the simulation results, the performance of the optimal solution using MDP achieves a significant improvement compared to the solution without it.
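    A stripped-down version of such an MDP can be solved by value iteration in a few lines. This sketch is an assumption-laden simplification: only the battery level is tracked as state (the paper's model also has a finite packet buffer), one packet arrives per slot, a transmission costs one energy unit, and the cost weights and harvest probability are invented for illustration.

```python
import numpy as np

B = 5            # battery capacity (units of harvested energy)
p_harvest = 0.6  # probability one unit of solar energy arrives per slot
w_const = 1.0    # cost of drawing from the constant source
w_drop = 5.0     # penalty for dropping a packet (PDP term of the weighted sum)
gamma = 0.95     # discount factor

actions = ("battery", "constant", "drop")

def step_cost(s, a):
    if a == "battery":
        return 0.0 if s > 0 else np.inf  # infeasible on an empty battery
    return w_const if a == "constant" else w_drop

def next_states(s, a):
    """(probability, next_battery) pairs after the action and random harvest."""
    s2 = s - 1 if a == "battery" else s
    return [(p_harvest, min(s2 + 1, B)), (1 - p_harvest, s2)]

V = np.zeros(B + 1)
for _ in range(500):  # value iteration until the fixed point is reached
    V_new = np.empty_like(V)
    for s in range(B + 1):
        V_new[s] = min(
            step_cost(s, a) + gamma * sum(p * V[s2] for p, s2 in next_states(s, a))
            for a in actions
        )
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

policy = [
    min(actions, key=lambda a, s=s: step_cost(s, a)
        + gamma * sum(p * V[s2] for p, s2 in next_states(s, a)))
    for s in range(B + 1)
]
print(policy)
```

    The resulting policy spends harvested energy whenever the battery is non-empty and falls back on the constant source only when it is drained, which is the balance the weighted cost is designed to find.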

  3. Harris functional and related methods for calculating total energies in density-functional theory

    International Nuclear Information System (INIS)

    Averill, F.W.; Painter, G.S.

    1990-01-01

    The simplified energy functional of Harris has given results of useful accuracy for systems well outside the limits of weakly interacting fragments for which the method was originally proposed. In the present study, we discuss the source of the frequent good agreement of the Harris energy with full Kohn-Sham self-consistent results. A procedure is described for extending the applicability of the scheme to more strongly interacting systems by going beyond the frozen-atom fragment approximation. A gradient-force expression is derived, based on the Harris functional, which accounts for errors in the fragment charge representation. Results are presented for some diatomic molecules, illustrating the points of this study

  4. Range-separated density-functional theory for molecular excitation energies

    International Nuclear Information System (INIS)

    Rebolini, E.

    2014-01-01

    Linear-response time-dependent density-functional theory (TDDFT) is nowadays a method of choice to compute molecular excitation energies. However, within the usual adiabatic semi-local approximations, it is not able to describe properly Rydberg, charge-transfer or multiple excitations. Range separation of the electronic interaction allows one to mix rigorously density-functional methods at short range and wave function or Green's function methods at long range. When applied to the exchange functional, it already corrects most of these deficiencies but multiple excitations remain absent as they need a frequency-dependent kernel. In this thesis, the effects of range separation are first assessed on the excitation energies of a partially-interacting system in an analytic and numerical study in order to provide guidelines for future developments of range-separated methods for excitation energy calculations. It is then applied on the exchange and correlation TDDFT kernels in a single-determinant approximation in which the long-range part of the correlation kernel vanishes. A long-range frequency-dependent second-order correlation kernel is then derived from the Bethe-Salpeter equation and added perturbatively to the range-separated TDDFT kernel in order to take into account the effects of double excitations. (author)

  5. Probability and statistics in particle physics

    International Nuclear Information System (INIS)

    Frodesen, A.G.; Skjeggestad, O.

    1979-01-01

    Probability theory is introduced at an elementary level and given a simple and detailed exposition. The material on statistics has been organised with an eye to the experimental physicist's practical needs, which are likely to be statistical methods for estimation or decision-making. The book is intended for graduate students and research workers in experimental high energy and elementary particle physics, and numerous examples from these fields are presented. (JIW)

  6. Conserving relativistic many-body approach: Equation of state, spectral function, and occupation probabilities of nuclear matter

    International Nuclear Information System (INIS)

    de Jong, F.; Malfliet, R.

    1991-01-01

    Starting from a relativistic Lagrangian we derive a "conserving" approximation for the description of nuclear matter. We show this to be a nontrivial extension of the relativistic Dirac-Brueckner scheme. The calculated saturation point of the equation of state agrees very well with the empirical saturation point. The conserving character of the approach is tested by means of the Hugenholtz-van Hove theorem. We find the theorem fulfilled very well around saturation. A new value for the compression modulus is derived, K = 310 MeV. We also calculate the occupation probabilities at normal nuclear matter densities by means of the spectral function. The average depletion κ of the Fermi sea is found to be κ ∼ 0.11

  7. The effect of work function changes on secondary ion energy spectra

    International Nuclear Information System (INIS)

    Wittmaack, K.

    1983-01-01

    The effect of work function changes on experimental secondary ion energy spectra is discussed. In agreement with theory the measured ion intensities frequently exhibit an exponential work function dependence. However, the predicted velocity dependence is only observed at fairly high secondary ion energies. In the absence of a velocity dependence of the degree of ionization measured shifts of energy spectra reflect work function changes directly. Various instrumental problems are shown to aggravate a detailed comparison between experiment and theory. Significant artefacts must be expected if the extraction field is of the order of or less than the lateral field induced by a work function difference between the bombarded spot and the surrounding sample surface. (Auth.)

  8. Improved method for estimating particle scattering probabilities to finite detectors for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Mickael, M.; Gardner, R.P.; Verghese, K.

    1988-01-01

    An improved method for calculating the total probability of particle scattering within the solid angle subtended by finite detectors is developed, presented, and tested. The limiting polar and azimuthal angles subtended by the detector are measured from the direction that most simplifies their calculation rather than from the incident particle direction. A transformation of the particle scattering probability distribution function (pdf) is made to match the transformation of the direction from which the limiting angles are measured. The particle scattering probability to the detector is estimated by evaluating the integral of the transformed pdf over the range of the limiting angles measured from the preferred direction. A general formula for transforming the particle scattering pdf is derived from basic principles and applied to four important scattering pdf's; namely, isotropic scattering in the Lab system, isotropic neutron scattering in the center-of-mass system, thermal neutron scattering by the free gas model, and gamma-ray Klein-Nishina scattering. Some approximations have been made to these pdf's to enable analytical evaluations of the final integrals. These approximations are shown to be valid over a wide range of energies and for most elements. The particle scattering probability to spherical, planar circular, and right circular cylindrical detectors has been calculated using the new and previously reported direct approach. Results indicate that the new approach is valid and is computationally faster by orders of magnitude
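    For the simplest case — isotropic scattering in the Lab system into a detector subtending a cone of half-angle theta0 about the preferred (source-to-detector) direction — the integral of the transformed pdf over the limiting angles has a closed form, which a brute-force direction-sampling check confirms. The detector geometry and angle below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cone of half-angle theta0 about the axis pointing at the detector.
theta0 = np.deg2rad(15.0)

# Analytic estimate: integrate the isotropic pdf over the limiting angles
# measured from the detector axis: p = ∫_0^theta0 (sin θ / 2) dθ = (1 - cos θ0)/2.
p_analytic = 0.5 * (1.0 - np.cos(theta0))

# Brute-force check: sample isotropic directions and count hits inside the cone.
# For an isotropic distribution, cos(theta) is uniform on [-1, 1].
n = 1_000_000
cos_theta = rng.uniform(-1.0, 1.0, n)
p_mc = np.mean(cos_theta > np.cos(theta0))

print(p_analytic, p_mc)
```

    The point of the paper's method is that the analytic one-line integral replaces the million-sample brute-force estimate, which is why it is faster by orders of magnitude.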

  9. Alternative definitions of the frozen energy in energy decomposition analysis of density functional theory calculations.

    Science.gov (United States)

    Horn, Paul R; Head-Gordon, Martin

    2016-02-28

    In energy decomposition analysis (EDA) of intermolecular interactions calculated via density functional theory, the initial supersystem wavefunction defines the so-called "frozen energy" including contributions such as permanent electrostatics, steric repulsions, and dispersion. This work explores the consequences of the choices that must be made to define the frozen energy. The critical choice is whether the energy should be minimized subject to the constraint of fixed density. Numerical results for Ne2, (H2O)2, BH3-NH3, and ethane dissociation show that there can be a large energy lowering associated with constant density orbital relaxation. By far the most important contribution is constant density inter-fragment relaxation, corresponding to charge transfer (CT). This is unwanted in an EDA that attempts to separate CT effects, but it may be useful in other contexts such as force field development. An algorithm is presented for minimizing single determinant energies at constant density both with and without CT by employing a penalty function that approximately enforces the density constraint.

  10. Dependence of the giant dipole strength function on excitation energy

    International Nuclear Information System (INIS)

    Draper, J.E.; Newton, J.O.; Sobotka, L.G.; Lindenberger, H.; Wozniak, G.J.; Moretto, L.G.; Stephens, F.S.; Diamond, R.M.; McDonald, R.J.

    1982-01-01

    Spectra of γ rays associated with deep-inelastic products from the 1150-MeV 136Xe + 181Ta reaction have been measured. The yield of 10-20-MeV γ rays initially increases rapidly with the excitation energy of the products and then more slowly for excitation energies in excess of 120 MeV. Statistical-model calculations with ground-state values of the giant dipole strength function fail to reproduce the shape of the measured γ-ray spectra. This suggests a dependence of the giant dipole strength function on excitation energy

  11. Energy Vulnerability and EU-Russia Energy Relations

    Directory of Open Access Journals (Sweden)

    Edward Hunter Christie

    2009-08-01

    Full Text Available The concept of energy vulnerability is reviewed and discussed with a focus on Russia’s foreign energy relations, in particular those with European countries. A definition and a conceptual framework for quantifying energy vulnerability are proposed in the context of a review of recent research on energy vulnerability indices. In particular it is suggested that source country diversification should be reflected using the expected shortfall measure used in financial economics, rather than the Herfindahl-Hirschman or Shannon-Wiener indices, and that the former should then enter a calibrated function in order to yield expected economic loss. The issues of asymmetric failure probabilities and accidental versus intentional supply disruptions are then discussed with examples of recent Russian actions. Energy vulnerability measurement and modelling should ultimately inform policy. In particular, member states should legislate that no energy infrastructure project by one or more member states may increase the energy vulnerability of another member state. Additionally, European environmental policies, notably the EU ETS, should be amended so as to account for induced changes in energy vulnerability. Finally, member states should increase the level of transparency and disclosure with respect to gas import statistics and gas supply contracts.
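    The proposed expected-shortfall measure can be contrasted with the Herfindahl-Hirschman index on a toy import portfolio. The shares and per-source failure probabilities below are invented for illustration; the point is that expected shortfall responds to asymmetric failure probabilities, while the HHI sees only shares.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical gas import shares and per-source annual disruption probabilities.
shares = np.array([0.55, 0.25, 0.15, 0.05])   # concentrated on one supplier
p_fail = np.array([0.08, 0.03, 0.03, 0.02])   # asymmetric failure probabilities

hhi = np.sum(shares ** 2)  # Herfindahl-Hirschman index: shares only

# Simulate the fraction of supply lost when sources fail independently.
n = 100_000
failures = rng.random((n, len(shares))) < p_fail
lost = failures @ shares

# Expected shortfall at 95%: the mean loss over the worst 5% of scenarios.
alpha = 0.95
es = np.sort(lost)[int(alpha * n):].mean()
print(hhi, es)
```

    Here the HHI is a fixed 0.39 regardless of how unreliable the dominant supplier is, while the expected shortfall reflects that nearly all tail scenarios involve losing the 55% share.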

  12. Evaluation of the Effects of Different Energy Drinks and Coffee on Endothelial Function.

    Science.gov (United States)

    Molnar, Janos; Somberg, John C

    2015-11-01

    Endothelial function plays an important role in circulatory physiology. There have been differing reports on the effect of energy drinks on endothelial function. We set out to evaluate the effect of 3 energy drinks and coffee on endothelial function. Endothelial function was evaluated in healthy volunteers using a device that uses digital peripheral arterial tonometry, measuring endothelial function as the reactive hyperemia index (RHI). Six volunteers (25 ± 7 years) received the energy drinks in a random order at least 2 days apart. Drinks studied were 250 ml "Red Bull" containing 80 mg caffeine, 57 ml "5-hour Energy" containing 230 mg caffeine, and a can of 355 ml "NOS" energy drink containing 120 mg caffeine. Sixteen volunteers (25 ± 5 years) received a cup of 473 ml coffee containing 240 mg caffeine. Studies were performed before the drink (baseline) and at 1.5 and 4 hours after the drink. Two of the energy drinks (Red Bull and 5-hour Energy) significantly improved endothelial function at 4 hours after the drink, whereas 1 energy drink (NOS) and coffee did not change endothelial function significantly. RHI increased by 82 ± 129% (p = 0.028) and 63 ± 37% (p = 0.027) after 5-hour Energy and Red Bull, respectively. The RHI changed after NOS by 2 ± 30% (p = 1.000) and by 7 ± 30% (p = 1.000) after coffee. In conclusion, some energy drinks appear to significantly improve endothelial function. Caffeine does not appear to be the component responsible for these differences. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Quantum probabilities as Dempster-Shafer probabilities in the lattice of subspaces

    International Nuclear Information System (INIS)

    Vourdas, A.

    2014-01-01

    The orthocomplemented modular lattice of subspaces L[H(d)], of a quantum system with d-dimensional Hilbert space H(d), is considered. A generalized additivity relation which holds for Kolmogorov probabilities is violated by quantum probabilities in the full lattice L[H(d)] (it is only valid within the Boolean subalgebras of L[H(d)]). This suggests the use of more general (than Kolmogorov) probability theories, and here the Dempster-Shafer probability theory is adopted. An operator D(H1, H2), which quantifies deviations from Kolmogorov probability theory, is introduced, and it is shown to be intimately related to the commutator of the projectors P(H1), P(H2) onto the subspaces H1, H2. As an application, it is shown that the proof of the inequalities of Clauser, Horne, Shimony, and Holt for a system of two spin-1/2 particles is valid for Kolmogorov probabilities, but it is not valid for Dempster-Shafer probabilities. The violation of these inequalities in experiments supports the interpretation of quantum probabilities as Dempster-Shafer probabilities

  14. Transition Dipole Moments and Transition Probabilities of the CN Radical

    Science.gov (United States)

    Yin, Yuan; Shi, Deheng; Sun, Jinfeng; Zhu, Zunlue

    2018-04-01

    This paper studies the transition probabilities of electric dipole transitions between 10 low-lying states of the CN radical. These states are X2Σ+, A2Π, B2Σ+, a4Σ+, b4Π, 14Σ-, 24Π, 14Δ, 16Σ+, and 16Π. The potential energy curves are calculated using the CASSCF method, which is followed by the icMRCI approach with the Davidson correction. The transition dipole moments between different states are calculated. To improve the accuracy of the potential energy curves, core–valence correlation and scalar relativistic corrections, as well as the extrapolation of potential energies to the complete basis set limit, are included. The Franck–Condon factors and Einstein coefficients of emissions are calculated. The radiative lifetimes are determined for the vibrational levels of the A2Π, B2Σ+, b4Π, 14Σ-, 24Π, 14Δ, and 16Π states. According to the transition probabilities and radiative lifetimes, some guidelines for detecting these states spectroscopically are proposed. The spin–orbit coupling effect on the spectroscopic and vibrational properties is evaluated. The splitting energy in the A2Π state is determined to be 50.99 cm-1, which compares well with the experimental values. The potential energy curves, transition dipole moments, spectroscopic parameters, and transition probabilities reported in this paper can be considered to be very reliable. The results obtained here can be used as guidelines for detecting these transitions, in particular those that have not been measured in previous experiments or have not been observed in the Sun, comets, stellar atmospheres, dark interstellar clouds, and diffuse interstellar clouds.

  15. Energy harvesting with functional materials and microsystems

    CERN Document Server

    Bhaskaran, Madhu; Iniewski, Krzysztof

    2013-01-01

    For decades, people have searched for ways to harvest energy from natural sources. Lately, a desire to address the issue of global warming and climate change has popularized solar or photovoltaic technology, while piezoelectric technology is being developed to power handheld devices without batteries, and thermoelectric technology is being explored to convert wasted heat, such as in automobile engine combustion, into electricity. Featuring contributions from international researchers in both academics and industry, Energy Harvesting with Functional Materials and Microsystems explains the growi

  16. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

    Science.gov (United States)

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  17. A Cellular Perspective on Brain Energy Metabolism and Functional Imaging

    KAUST Repository

    Magistretti, Pierre J.

    2015-05-01

    The energy demands of the brain are high: they account for at least 20% of the body's energy consumption. Evolutionary studies indicate that the emergence of higher cognitive functions in humans is associated with an increased glucose utilization and expression of energy metabolism genes. Functional brain imaging techniques such as fMRI and PET, which are widely used in human neuroscience studies, detect signals that monitor energy delivery and use in register with neuronal activity. Recent technological advances in metabolic studies with cellular resolution have afforded decisive insights into the understanding of the cellular and molecular bases of the coupling between neuronal activity and energy metabolism and point at a key role of neuron-astrocyte metabolic interactions. This article reviews some of the most salient features emerging from recent studies and aims at providing an integration of brain energy metabolism across resolution scales. © 2015 Elsevier Inc.

  18. Voltage dependency of transmission probability of aperiodic DNA molecule

    Science.gov (United States)

    Wiliyanti, V.; Yudiarsah, E.

    2017-07-01

    Characteristics of electron transport in aperiodic DNA molecules have been studied. A double-stranded DNA model with the base sequence GCTAGTACGTGACGTAGCTAGGATATGCCTGA in one chain and its complement on the other chain has been used. A tight-binding Hamiltonian is used to model the DNA molecule. In the model, the on-site energy of each base depends linearly on the applied electric field. The Slater-Koster scheme is used to model the electron hopping constant between bases. The transmission probability of an electron from one electrode to the other is calculated using the transfer matrix technique and the scattering matrix method together. The results show that, generally, a higher voltage gives a slightly larger transmission probability. The applied voltage appears to shift extended states to lower energy. Meanwhile, the transmission increases with increasing twisting-motion frequency.
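
    The abstract's transfer-matrix calculation can be illustrated with a minimal 1D tight-binding sketch. The on-site energies, hoppings and lead parameters below are illustrative placeholders (not the paper's values), and the field dependence and twisting motion of the full model are omitted.

```python
import numpy as np

# Hypothetical base -> on-site energy map (eV); illustrative values only.
EPS = {"G": 7.75, "C": 8.87, "A": 8.24, "T": 9.14}

def transmission(onsite, energy, t_chain=0.5, t_lead=0.5, eps_lead=0.0):
    """T(E) for a chain between ideal 1D leads with dispersion E = eps_lead + 2 t_lead cos k."""
    x = (energy - eps_lead) / (2.0 * t_lead)
    if abs(x) >= 1.0:
        return 0.0  # outside the lead band: no propagating states
    k = np.arccos(x)
    # Total transfer matrix: (psi_{N+1}, psi_N)^T = P_N ... P_1 (psi_1, psi_0)^T
    m = np.eye(2, dtype=complex)
    for eps in onsite:
        p = np.array([[(energy - eps) / t_chain, -1.0], [1.0, 0.0]], dtype=complex)
        m = p @ m
    n = len(onsite)
    u_in = np.array([np.exp(1j * k), 1.0])                            # incident wave
    u_r = np.array([np.exp(-1j * k), 1.0])                            # reflected wave
    u_out = np.array([np.exp(1j * k * (n + 1)), np.exp(1j * k * n)])  # transmitted wave
    # Solve M (u_in + r u_r) = tau u_out for (r, tau); T = |tau|^2.
    a = np.column_stack([m @ u_r, -u_out])
    r, tau = np.linalg.solve(a, -(m @ u_in))
    return abs(tau) ** 2

seq = "GCTAGTACGTGACGTAGCTAGGATATGCCTGA"
onsite = [EPS[b] for b in seq]
t_dna = transmission(onsite, energy=8.4, t_chain=0.5, t_lead=0.5, eps_lead=8.5)
print(f"T(8.4 eV) = {t_dna:.3e}")
```

    With the on-site energies set equal to the lead energy and matched hoppings, the chain is transparent (T = 1), which is a useful sanity check on the matrix bookkeeping.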

  19. Evaluating the suitability of wind speed probability distribution models: A case of study of east and southeast parts of Iran

    International Nuclear Information System (INIS)

    Alavi, Omid; Mohammadi, Kasra; Mostafaeipour, Ali

    2016-01-01

    Highlights: • Suitability of different wind speed probability functions is assessed. • 5 stations distributed in the east and southeast of Iran are considered as case studies. • The Nakagami distribution is tested for the first time and compared with 7 other functions. • Owing to differences in wind features, the best function is not the same for all stations. - Abstract: Precise information on the wind speed probability distribution is truly significant for many wind energy applications. The objective of this study is to evaluate the suitability of different probability functions for estimating wind speed distributions at five stations distributed in the east and southeast of Iran. The Nakagami distribution function is utilized for the first time to estimate the distribution of wind speed. The performance of the Nakagami function is compared with seven typically used distribution functions. The results reveal that the most effective function is not the same for all stations. Wind speed characteristics and the quantity and quality of the recorded wind speed data can be considered influential parameters for the performance of the distribution functions. Also, the skewness of the recorded wind speed data may influence the accuracy of the Nakagami distribution. For the Chabahar and Khaf stations the Nakagami distribution shows the highest performance, while for the Lutak, Rafsanjan and Zabol stations the Gamma, Generalized Extreme Value and Inverse-Gaussian distributions offer the best fits, respectively. Based on the analysis, the Nakagami distribution can generally be considered an effective distribution since it provides the best fit in 2 stations and ranks 3rd to 5th in the remaining stations; however, due to the close performance of the Nakagami and Weibull distributions and the widely proven flexibility of the Weibull function, more assessments of the performance of the Nakagami distribution are required.
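
    The kind of comparison the abstract describes can be sketched with `scipy.stats`: fit each candidate distribution to a wind-speed sample and rank the fits by the Kolmogorov-Smirnov statistic. The synthetic sample below is a stand-in for the recorded station data, and the candidate set is assumed, not the paper's full list of eight.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic "wind speeds" (m/s): Weibull-like, as real wind data often are.
wind = stats.weibull_min.rvs(2.0, scale=6.0, size=3000, random_state=rng)

candidates = {
    "Nakagami": stats.nakagami,
    "Weibull": stats.weibull_min,
    "Gamma": stats.gamma,
    "GEV": stats.genextreme,
    "Inverse-Gaussian": stats.invgauss,
}

scores = {}
for name, dist in candidates.items():
    # Pin the location at zero for the non-negative families; GEV keeps it free.
    params = dist.fit(wind) if name == "GEV" else dist.fit(wind, floc=0.0)
    scores[name] = stats.kstest(wind, dist.cdf, args=params).statistic

for name, ks in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name:17s} KS = {ks:.4f}")  # smaller = better fit
```

    On real data one would also compare R², RMSE or log-likelihood, as goodness-of-fit rankings can differ between criteria.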

  20. Neuroenergetics: How energy constraints shape brain function

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The nervous system consumes a disproportionate fraction of the resting body’s energy production. In humans, the brain represents 2% of the body’s mass, yet it accounts for ~20% of the total oxygen consumption. Expansion in the size of the brain relative to the body and an increase in the number of connections between neurons during evolution underpin our cognitive powers and are responsible for our brains’ high metabolic rate. The molecules at the center of cellular energy metabolism also act as intercellular signals and constitute an important communication pathway, coordinating for instance the immune surveillance of the brain. Despite the significance of energy consumption in the nervous system, how energy constrains and shapes brain function is often underappreciated. I will illustrate the importance of brain energetics and metabolism with two examples from my recent work. First, I will show how the brain trades information for energy savings in the visual pathway. Indeed, a significant fraction ...

  1. Functional data analysis of sleeping energy expenditure

    Science.gov (United States)

    Adequate sleep is crucial during childhood for metabolic health, and physical and cognitive development. Inadequate sleep can disrupt metabolic homeostasis and alter sleeping energy expenditure (SEE). Functional data analysis methods were applied to SEE data to elucidate the population structure of ...

  2. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    Science.gov (United States)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.

  3. Minimal nuclear energy density functional

    Science.gov (United States)

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas

    2018-04-01

    We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV, two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV, respectively, and the charge radii of 345 even-even nuclei with a mean error ɛr=0.022 fm and a standard deviation σr=0.025 fm. SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body (NNN) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of the homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  4. Single-particle energies and density of states in density functional theory

    Science.gov (United States)

    van Aggelen, H.; Chan, G. K.-L.

    2015-07-01

    Time-dependent density functional theory (TD-DFT) is commonly used as the foundation to obtain neutral excited states and transition weights in DFT, but does not allow direct access to density of states and single-particle energies, i.e. ionisation energies and electron affinities. Here we show that by extending TD-DFT to a superfluid formulation, which involves operators that break particle-number symmetry, we can obtain the density of states and single-particle energies from the poles of an appropriate superfluid response function. The standard Kohn-Sham eigenvalues emerge as the adiabatic limit of the superfluid response under the assumption that the exchange-correlation functional has no dependence on the superfluid density. The Kohn-Sham eigenvalues can thus be interpreted as approximations to the ionisation energies and electron affinities. Beyond this approximation, the formalism provides an incentive for creating a new class of density functionals specifically targeted at accurate single-particle eigenvalues and bandgaps.

  5. Interpretation of the results of statistical measurements. [search for basic probability model

    Science.gov (United States)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional that defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  6. On the efficiency of high-energy particle identification statistical methods

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1982-01-01

    An attempt is made to analyze statistical methods for making decisions on high-energy particle identification. The Bayesian approach is shown to provide the most complete account of the primary discriminative information between particles of various types. It does not impose rigid requirements on the form of the probability density function and ensures the account of the a priori information, as compared with the Neyman-Pearson approach, the minimax technique and heuristic rules for constructing decision limits in the variation region of a specially chosen parameter. Methods based on the concept of the nearest neighbours are shown to be the most effective among local methods of probability density function estimation. Probability distances between the training sample classes are suggested as a criterion for selecting the optimal parameters of a high-energy particle detector. The proposed method and the software constructed are tested on the problem of cosmic radiation hadron identification by means of transition radiation detectors (the ''PION'' experiment)
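
    The nearest-neighbour density estimate the abstract refers to can be sketched in one dimension: p̂(x) = k / (N · V_k(x)), where V_k(x) is the volume (here, an interval of length 2r_k) spanned by the k nearest sample points. The sample and the evaluation point below are illustrative.

```python
import numpy as np

def knn_density(x, sample, k):
    """k-nearest-neighbour density estimate at x from a 1D sample."""
    r_k = np.sort(np.abs(sample - x))[k - 1]  # distance to the k-th nearest neighbour
    return k / (len(sample) * 2.0 * r_k)      # k / (N * volume of the kNN ball)

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=20000)

est = knn_density(0.0, sample, k=200)
true = 1.0 / np.sqrt(2.0 * np.pi)  # standard normal density at 0
print(f"kNN estimate at 0: {est:.3f} (true {true:.3f})")
```

    Unlike a fixed-bandwidth kernel estimate, the kNN estimate adapts its window to the local sample density, which is one reason such local methods work well for classification.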

  7. Production of 147Eu for gamma-ray emission probability measurement

    International Nuclear Information System (INIS)

    Katoh, Keiji; Marnada, Nada; Miyahara, Hiroshi

    2002-01-01

    The gamma-ray emission probability is one of the most important decay parameters of a radionuclide, and many researchers are working to improve its accuracy. The γ-ray emission probabilities for neutron-rich nuclides are gradually being improved, but those for proton-rich nuclides remain insufficiently known. Europium-147, which decays by electron capture or β + -particle emission, is a proton-rich nuclide, and the γ-ray emission probabilities evaluated by Mateosian and Peker have large uncertainties; they referred to only one report concerning γ-ray emission probabilities. Our final purpose is to determine precise γ-ray emission probabilities of 147 Eu from disintegration rates and γ-ray intensities by using a 4πβ-γ coincidence apparatus. Impurity nuclides strongly affect the determination of the disintegration rate; therefore, a highly pure 147 Eu source is required. This short note describes the optimal energy for 147 Eu production through the 147 Sm(p, n) reaction. (author)

  8. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    Science.gov (United States)

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2007-11-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.

  9. Unifying distribution functions: some lesser known distributions.

    Science.gov (United States)

    Moya-Cessa, J R; Moya-Cessa, H; Berriel-Valdos, L R; Aguilar-Loreto, O; Barberis-Blostein, P

    2008-08-01

    We show that there is a way to unify distribution functions that describe a classical signal simultaneously in space and (spatial) frequency, or a quantum system in position and momentum. Probably the best known of them is the Wigner distribution function. We show how to unify functions of the Cohen class, Rihaczek's complex energy function, and the Husimi and Glauber-Sudarshan distribution functions. We do this by showing how they may be obtained from ordered forms of creation and annihilation operators and by obtaining them in terms of expectation values in different eigenbases.
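
    One of the distributions mentioned, the Husimi Q function, is easy to evaluate numerically in a truncated Fock basis: Q(β) = |⟨β|ψ⟩|²/π. For a coherent state |α⟩ this reduces to Q(β) = exp(-|β-α|²)/π, which gives a closed-form check. The truncation size below is an assumption sufficient for small |α|.

```python
import numpy as np
from math import factorial, pi

N_FOCK = 40  # Fock-basis truncation; adequate for the small |alpha| used here

def coherent(alpha):
    """Fock-basis coefficients c_n = e^{-|a|^2/2} a^n / sqrt(n!) of |alpha>."""
    coeff = np.array(
        [alpha**m / np.sqrt(float(factorial(m))) for m in range(N_FOCK)],
        dtype=complex,
    )
    return np.exp(-abs(alpha) ** 2 / 2) * coeff

def husimi(psi, beta):
    """Q(beta) = |<beta|psi>|^2 / pi; np.vdot conjugates its first argument."""
    return abs(np.vdot(coherent(beta), psi)) ** 2 / pi

alpha = 1.0 + 0.5j
psi = coherent(alpha)
q_peak = husimi(psi, alpha)           # analytically 1/pi at beta = alpha
print(f"Q at alpha: {q_peak:.4f}  (1/pi = {1 / pi:.4f})")
```

    The same `husimi` routine works for any truncated state vector, e.g. Fock states or superpositions, which is the point of expressing the distributions through operator orderings.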

  10. Influence of nucleon density distribution in nucleon emission probability

    International Nuclear Information System (INIS)

    Paul, Sabyasachi; Nandy, Maitreyee; Mohanty, A.K.; Sarkar, P.K.; Gambhir, Y.K.

    2014-01-01

    Different decay modes are observed in heavy ion reactions at low to intermediate energies. It is interesting to study total neutron emission in these reactions which may be contributed by all/many of these decay modes. In an attempt to understand the importance of mean field and the entrance channel angular momentum, we study their influence on the emission probability of nucleons in heavy ion reactions in this work. This study owes its significance to the fact that once population of different states are determined, emission probability governs the double differential neutron yield

  11. Analysis of the Bogoliubov free energy functional

    DEFF Research Database (Denmark)

    Reuvers, Robin

    In this thesis, we analyse a variational reformulation of the Bogoliubov approximation that is used to describe weakly-interacting translationally-invariant Bose gases. For the resulting model, the `Bogoliubov free energy functional', we demonstrate existence of minimizers as well as the presence...

  12. Multiple parton scattering in nuclei: heavy quark energy loss and modified fragmentation functions

    International Nuclear Information System (INIS)

    Zhang Benwei; Wang, Enke; Wang Xinnian

    2005-01-01

    Multiple scattering, induced radiative energy loss and modified fragmentation functions of a heavy quark in nuclear matter are studied within the framework of generalized factorization in perturbative QCD. Modified heavy quark fragmentation functions and energy loss are derived in detail with illustration of the mass dependencies of the Landau-Pomeranchuk-Migdal interference effects and heavy quark energy loss. Due to the quark mass dependence of the gluon formation time, the nuclear size dependencies of the nuclear modification of the heavy quark fragmentation function and heavy quark energy loss are found to change from a linear to a quadratic form when the initial energy and momentum scale are increased relative to the quark mass. The radiative energy loss of the heavy quark is also significantly suppressed due to the limited cone of gluon radiation imposed by the mass. Medium modification of the heavy quark fragmentation functions is found to be limited to the large z region due to the form of heavy quark fragmentation functions in vacuum

  13. Structure and potential energy function for Pu22+ ion

    International Nuclear Information System (INIS)

    Li Quan; Huang Hui; Li Daohua

    2003-01-01

    A theoretical study of Pu 2 2+ using the density functional method shows that the molecular ion is metastable. The ground electronic state of Pu 2 2+ is 13 Σ g ; the analytic potential energy function is in good agreement with the Z-W function, and the force constants and spectroscopic data have been worked out for the first time

  14. The Bogoliubov free energy functional II

    DEFF Research Database (Denmark)

    Napiórkowski, Marcin; Reuvers, Robin; Solovej, Jan Philip

    2018-01-01

    We analyse the canonical Bogoliubov free energy functional at low temperatures in the dilute limit. We prove existence of a first order phase transition and, in the limit $a_0\to a$, we determine the critical temperature to be $T_{\rm c}=T_{\rm fc}(1+1.49(\rho^{1/3}a))$ to leading order. Here, $T_{\rm fc}$ is the critical temperature of the free Bose gas, $\rho$ is the density of the gas, $a$ is the scattering length of the pair-interaction potential $V$, and $a_0=(8\pi)^{-1}\widehat{V}(0)$ its first order approximation. We also prove asymptotic expansions for the free energy. In particular, we recover the Lee

  15. Energy vs. density on paths toward more exact density functionals.

    Science.gov (United States)

    Kepp, Kasper P

    2018-03-14

    Recently, the progression toward more exact density functional theory has been questioned, implying a need for more formal ways to systematically measure progress, i.e. a "path". Here I use the Hohenberg-Kohn theorems and the definition of normality by Burke et al. to define a path toward exactness and "straying" from the "path" by separating errors in ρ and E[ρ]. A consistent path toward exactness involves minimizing both errors. Second, a suitably diverse test set of trial densities ρ' can be used to estimate the significance of errors in ρ without knowing the exact densities which are often inaccessible. To illustrate this, the systems previously studied by Medvedev et al., the first ionization energies of atoms with Z = 1 to 10, the ionization energy of water, and the bond dissociation energies of five diatomic molecules were investigated using CCSD(T)/aug-cc-pV5Z as benchmark at chemical accuracy. Four functionals of distinct design were used: B3LYP, PBE, M06, and S-VWN. For atomic cations regardless of charge and compactness up to Z = 10, the effects of the different ρ are energetically insignificant. An interesting oscillating behavior in the density sensitivity is observed vs. Z, explained by orbital occupation effects. Finally, it is shown that even large "normal" problems such as the Co-C bond energy of cobalamins can use simpler (e.g. PBE) trial densities to drastically speed up computation by loss of a few kJ mol⁻¹ in accuracy. The proposed method of using a test set of trial densities to estimate the sensitivity and significance of density errors of functionals may be useful for testing and designing new balanced functionals with more systematic improvement of densities and energies.

  16. Probability 1/e

    Science.gov (United States)

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
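
    A classic instance of the phenomenon the article discusses is the matching (derangement) problem: the probability that a uniformly random permutation of n items leaves no item in its original place tends to 1/e as n grows. A quick Monte Carlo sketch:

```python
import random
import math

def derangement_fraction(n, trials, seed=42):
    """Estimate P(random permutation of n items has no fixed point)."""
    rng = random.Random(seed)
    items = list(range(n))
    hits = 0
    for _ in range(trials):
        perm = items[:]
        rng.shuffle(perm)
        if all(perm[i] != i for i in range(n)):
            hits += 1
    return hits / trials

frac = derangement_fraction(n=20, trials=100_000)
print(f"estimated {frac:.4f}  vs  1/e = {1 / math.e:.4f}")
```

    Already at n = 20 the exact probability agrees with 1/e to far more digits than the Monte Carlo noise, since the inclusion-exclusion series for derangements is the truncated expansion of e⁻¹.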

  17. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
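
    The maximum entropy method of moments reviewed here can be sketched on a grid: among densities matching given moments, the maxent solution has the form p(x) ∝ exp(-λ₁x - λ₂x² - …), and the multipliers minimize the convex dual log Z(λ) + λ·μ. With mean 0 and second moment 1 the result should approach a standard Gaussian, which gives a check. The grid and optimizer settings below are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
T = np.vstack([x, x**2])      # moment features T_j(x)
mu = np.array([0.0, 1.0])     # target moments E[x], E[x^2]

def dual(lam):
    """Convex dual log Z(lam) + lam.mu, evaluated stably via log-sum-exp."""
    a = -(lam @ T)
    m = a.max()
    return m + np.log(np.exp(a - m).sum() * dx) + lam @ mu

res = minimize(dual, x0=np.zeros(2), method="BFGS")
p = np.exp(-(res.x @ T))
p /= p.sum() * dx             # normalize on the grid

mean = (x * p).sum() * dx
second = (x**2 * p).sum() * dx
print(f"lambda = {res.x}, matched moments: {mean:.4f}, {second:.4f}")
```

    At the dual minimum the gradient condition E[T] = μ enforces the moment constraints automatically; the recovered λ₂ ≈ 0.5, λ₁ ≈ 0 reproduce the Gaussian, the two-moment maxent density.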

  18. The paradoxical effect of low reward probabilities in suboptimal choice.

    Science.gov (United States)

    Fortes, Inês; Pinto, Carlos; Machado, Armando; Vasconcelos, Marco

    2018-04-01

    When offered a choice between 2 alternatives, animals sometimes prefer the option yielding less food. For instance, pigeons and starlings prefer an option that on 20% of the trials presents a stimulus always followed by food, and on the remaining 80% of the trials presents a stimulus never followed by food (the Informative Option), over an option that provides food on 50% of the trials regardless of the stimulus presented (the Noninformative Option). To explain this suboptimal behavior, it has been hypothesized that animals ignore (or do not engage with) the stimulus that is never followed by food in the Informative Option. To assess when pigeons attend to the stimulus usually not followed by food, we increased the probability of reinforcement, p, in the presence of that stimulus. Across 2 experiments, we found that the value of the Informative Option decreased with p. To account for the results, we added to the Reinforcement Rate Model (and also to the Hyperbolic Discounting Model) an engagement function, f(p), that specified the likelihood the animal attends to a stimulus followed by reward with probability p, and then derived the model predictions for 2 forms of f(p), a linear function, and an all-or-none threshold function. Both models predicted the observed findings with a linear engagement function: The higher the probability of reinforcement after a stimulus, the higher the probability of engaging the stimulus, and, surprisingly, the less the value of the option comprising the stimulus. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Computation of Probabilities in Causal Models of History of Science

    Directory of Open Access Journals (Sweden)

    Osvaldo Pessoa Jr.

    2006-12-01

    Full Text Available: The aim of this paper is to investigate the ascription of probabilities in a causal model of an episode in the history of science. The aim of such a quantitative approach is to allow the implementation of the causal model in a computer, to run simulations. As an example, we look at the beginning of the science of magnetism, “explaining” — in a probabilistic way, in terms of a single causal model — why the field advanced in China but not in Europe (the difference is due to different prior probabilities of certain cultural manifestations). Given the number of years between the occurrences of two causally connected advances X and Y, one proposes a criterion for stipulating the value p(Y|X) of the conditional probability of an advance Y occurring, given X. Next, one must assume a specific form for the cumulative probability function p(Y|X)(t), which we take to be the time integral of an exponential distribution function, as is done in the physics of radioactive decay. Rules for calculating the cumulative functions for more than two events are mentioned, involving composition, disjunction and conjunction of causes. We also consider the problems involved in supposing that the appearance of events in time follows an exponential distribution, which are a consequence of the fact that a composition of causes does not follow an exponential distribution, but a “hypoexponential” one. We suggest that a gamma distribution function might more adequately represent the appearance of advances.
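
    The exponential and hypoexponential forms the paper contrasts are easy to check numerically: a single causal step has cumulative probability 1 - exp(-λt), while a two-step composition X → W → Y with distinct rates has the hypoexponential CDF below. The rates and horizon are illustrative.

```python
import math
import random

def hypoexp_cdf(t, lam1, lam2):
    """CDF of the sum of two independent exponentials with distinct rates."""
    return 1.0 - (lam2 * math.exp(-lam1 * t) - lam1 * math.exp(-lam2 * t)) / (lam2 - lam1)

lam1, lam2, t = 0.2, 0.5, 5.0  # rates per year and a 5-year horizon (illustrative)

# Monte Carlo check: simulate the two exponential stages and sum them.
rng = random.Random(0)
trials = 200_000
hits = sum(rng.expovariate(lam1) + rng.expovariate(lam2) <= t for _ in range(trials))
print(f"analytic {hypoexp_cdf(t, lam1, lam2):.4f}  simulated {hits / trials:.4f}")
```

    The composition visibly fails to be exponential: its density vanishes at t = 0, which is exactly the point the paper makes about chained causes.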

  20. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    Science.gov (United States)

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  1. Energy and energy gradient matrix elements with N-particle explicitly correlated complex Gaussian basis functions with L =1

    Science.gov (United States)

    Bubin, Sergiy; Adamowicz, Ludwik

    2008-03-01

    In this work we consider explicitly correlated complex Gaussian basis functions for expanding the wave function of an N-particle system with the L =1 total orbital angular momentum. We derive analytical expressions for various matrix elements with these basis functions including the overlap, kinetic energy, and potential energy (Coulomb interaction) matrix elements, as well as matrix elements of other quantities. The derivatives of the overlap, kinetic, and potential energy integrals with respect to the Gaussian exponential parameters are also derived and used to calculate the energy gradient. All the derivations are performed using the formalism of the matrix differential calculus that facilitates a way of expressing the integrals in an elegant matrix form, which is convenient for the theoretical analysis and the computer implementation. The new method is tested in calculations of two systems: the lowest P state of the beryllium atom and the bound P state of the positronium molecule (with the negative parity). Both calculations yielded new, lowest-to-date, variational upper bounds, while the number of basis functions used was significantly smaller than in previous studies. It was possible to accomplish this due to the use of the analytic energy gradient in the minimization of the variational energy.

  2. Energy and energy gradient matrix elements with N-particle explicitly correlated complex Gaussian basis functions with L=1.

    Science.gov (United States)

    Bubin, Sergiy; Adamowicz, Ludwik

    2008-03-21

    In this work we consider explicitly correlated complex Gaussian basis functions for expanding the wave function of an N-particle system with the L=1 total orbital angular momentum. We derive analytical expressions for various matrix elements with these basis functions including the overlap, kinetic energy, and potential energy (Coulomb interaction) matrix elements, as well as matrix elements of other quantities. The derivatives of the overlap, kinetic, and potential energy integrals with respect to the Gaussian exponential parameters are also derived and used to calculate the energy gradient. All the derivations are performed using the formalism of the matrix differential calculus that facilitates a way of expressing the integrals in an elegant matrix form, which is convenient for the theoretical analysis and the computer implementation. The new method is tested in calculations of two systems: the lowest P state of the beryllium atom and the bound P state of the positronium molecule (with the negative parity). Both calculations yielded new, lowest-to-date, variational upper bounds, while the number of basis functions used was significantly smaller than in previous studies. It was possible to accomplish this due to the use of the analytic energy gradient in the minimization of the variational energy.

  3. Uncertainties propagation and global sensitivity analysis of the frequency response function of piezoelectric energy harvesters

    Science.gov (United States)

    Ruiz, Rafael O.; Meruane, Viviana

    2017-06-01

    The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the most relevant parameters for the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool in the robust design and prediction of PEH performance.
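
    The prior propagation step can be sketched with a single-degree-of-freedom surrogate FRF, |H(ω)| = 1/√((k - mω²)² + (cω)²), sampled over assumed parameter distributions. These distributions are illustrative placeholders, not the paper's tabulated variabilities, and the full electromechanical model is reduced to this mechanical surrogate.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples = 5000
m = rng.normal(1.0, 0.05, n_samples)     # mass (kg), ~5% variability (assumed)
k = rng.normal(100.0, 5.0, n_samples)    # stiffness (N/m), assumed
c = rng.normal(0.5, 0.05, n_samples)     # damping (N s/m), assumed

w = np.linspace(5.0, 15.0, 401)          # frequency grid (rad/s)
h = 1.0 / np.sqrt((k[:, None] - m[:, None] * w**2) ** 2 + (c[:, None] * w) ** 2)

mean_frf = h.mean(axis=0)
lo, hi = np.percentile(h, [5, 95], axis=0)   # 90% prior credibility band
peak_freq = w[h.argmax(axis=1)]              # per-sample resonance estimate
print(f"resonance: {peak_freq.mean():.2f} +/- {peak_freq.std():.2f} rad/s")
```

    A variance-based (Sobol) analysis like the paper's would then apportion Var(peak response) among m, k and c; the Monte Carlo band above is the raw ingredient for that step.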

  4. A Derivation of Probabilities of Correct and Wrongful Conviction in a Criminal Trial

    DEFF Research Database (Denmark)

    Lando, Henrik

    2006-01-01

    probabilities are the probability of observing (any given) evidence against individual i given that individual j committed the crime (for any j, including j equal to i). The variables are derived from the conditional probabilities as a function of the standard of proof, using simple Bayesian updating.
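
    The Bayesian updating described can be sketched in a few lines: given priors that each individual j committed the crime and likelihoods P(observed evidence | j guilty), the posterior guilt probabilities follow from Bayes' rule. The numbers are illustrative.

```python
def posterior_guilt(prior, likelihood):
    """prior[j]: P(j guilty); likelihood[j]: P(observed evidence | j guilty)."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    z = sum(joint)                      # marginal probability of the evidence
    return [j / z for j in joint]

# Three suspects, uniform prior; the evidence is 8x more probable if suspect 0
# is the culprit than if suspect 1 or 2 is.
prior = [1 / 3, 1 / 3, 1 / 3]
likelihood = [0.8, 0.1, 0.1]
post = posterior_guilt(prior, likelihood)
print([round(p, 3) for p in post])      # suspect 0's posterior rises to 0.8
```

    Comparing such a posterior to a standard of proof (e.g. convict only if it exceeds some threshold) is what links the updating to the probabilities of correct and wrongful conviction.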

  5. Development of multi-functional nano-paint for energy harvesting applications

    Directory of Open Access Journals (Sweden)

    Bir B. Bohara

    2018-02-01

    Full Text Available The multi-functionality of lead magnesium niobate-lead titanate/paint (PMN-PT/paint) nanocomposite films for energy harvesting via piezoelectric and pyroelectric effects is described. PMN-PT/paint films have been fabricated by a conventional paint-brushing technique to provide a low-cost, low-temperature and low-energy route to create multi-functional films. The properties investigated included the dielectric constants, ε' and ε'', as a function of temperature, frequency and composition. These measurements indicate that the dielectric constants and AC conductivity (σAC) increase with increasing filler content and temperature, implying an improvement of the functionality of the films. The results revealed that σAC obeyed the relation σAC = Aω^s, and the exponent s was found to decrease with increasing temperature. Correlated barrier hopping was the dominant conduction mechanism in the nanocomposite films. Efforts were made to investigate the response of the nanocomposite films to mechanical vibrations and thermal variations. A cantilever system was designed and examined to assess its performance as an energy harvester. The highest output voltage and power for a PMN-PT/paint based harvester with a broad frequency response operating in the d31 piezoelectric mode were 65 mV and 1 nW, respectively. Voltage and power were shown to be enhanced by the application of thermal variations. Thus, the films could be utilized for combined energy harvesting via piezoelectric and pyroelectric effects. Keywords: Dielectric, Pyroelectricity, Piezoelectricity, Nanocomposites, PMN-PT, Energy harvesting
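
The reported power law σAC = Aω^s can be recovered from conductivity-frequency data by a log-log least-squares fit. The sketch below uses synthetic data with hypothetical values of A and s, not measurements from the paper.

```python
import math

def fit_power_law(omegas, sigmas):
    """Least-squares fit of log(sigma_AC) = log(A) + s*log(omega),
    i.e. the universal-dynamic-response form sigma_AC = A * omega**s."""
    xs = [math.log(w) for w in omegas]
    ys = [math.log(sg) for sg in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    s = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    A = math.exp(my - s * mx)
    return A, s

# Synthetic (hypothetical) conductivity data obeying sigma = A * omega**s
omegas = [10.0 ** e for e in range(2, 7)]
sigmas = [2.0e-9 * w ** 0.75 for w in omegas]
A, s = fit_power_law(omegas, sigmas)
print(f"A = {A:.3g}, s = {s:.3f}")
```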

  6. A Game for Energy-Aware Allocation of Virtualized Network Functions

    Directory of Open Access Journals (Sweden)

    Roberto Bruschi

    2016-01-01

    Full Text Available Network Functions Virtualization (NFV is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes’ AR strategy, the VNFs’ resource sharing costs assume a polynomial form in the workflows, which admits a unique Nash Equilibrium (NE. We examine the effect of different (unconstrained and constrained forms of the nodes’ optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.
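
A minimal sketch of the adaptive-rate idea, assuming a hypothetical node model with cubic power scaling and an M/M/1-style delay (not the cost functions of the paper): minimizing the energy-delay product then has the closed-form optimum mu* = 1.5*lambda, which a simple search reproduces.

```python
def best_rate(lam, lo=None, hi=None, iters=200):
    """Minimize the energy-delay product f(mu) = mu**3 / (mu - lam) by
    ternary search; mu**3 models cubic power scaling with processing rate,
    and 1/(mu - lam) an M/M/1 queueing delay at arrival rate lam.
    Setting f'(mu) = 0 gives the closed form mu* = 1.5 * lam."""
    lo = lam * 1.001 if lo is None else lo
    hi = lam * 10.0 if hi is None else hi
    f = lambda mu: mu ** 3 / (mu - lam)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

print(best_rate(2.0))  # analytically 1.5 * 2.0 = 3.0
```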

  7. Four-point correlation function of stress-energy tensors in N=4 superconformal theories

    CERN Document Server

    Korchemsky, G P

    2015-01-01

    We derive the explicit expression for the four-point correlation function of stress-energy tensors in four-dimensional N=4 superconformal theory. We show that it has a remarkably simple and suggestive form allowing us to predict a large class of four-point correlation functions involving the stress-energy tensor and other conserved currents. We then apply the obtained results on the correlation functions to computing the energy-energy correlations, which measure the flow of energy in the final states created from the vacuum by a source. We demonstrate that they are given by a universal function independent of the choice of the source. Our analysis relies only on N=4 superconformal symmetry and does not use the dynamics of the theory.

  8. Estimating deficit probabilities with price-responsive demand in contract-based electricity markets

    International Nuclear Information System (INIS)

    Galetovic, Alexander; Munoz, Cristian M.

    2009-01-01

    Studies that estimate deficit probabilities in hydrothermal systems have generally ignored the response of demand to changing prices, in the belief that such response is largely irrelevant. We show that ignoring the response of demand to prices can lead to substantial over- or underestimation of the probability of an energy deficit. To make our point we present an estimation of deficit probabilities in Chile's Central Interconnected System between 2006 and 2010. This period is characterized by tight supply, fast consumption growth and rising electricity prices. When the response of demand to rising prices is acknowledged, forecasted deficit probabilities and marginal costs are shown to be substantially lower.

  9. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    Science.gov (United States)

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
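
The closing idea can be illustrated with a simplified simulation-based test (an assumption-laden sketch, not the exact statistic of the paper): reject when the smallest density value among the draws is smaller than is plausible under the claimed density.

```python
import math
import random

def phi(x):
    """Standard normal density, the claimed density in this illustration."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def min_density_pvalue(data, n_sim=500, seed=7):
    """If the draws really come from the claimed density, the smallest
    density value among them should not be unusually small.  The p-value
    is the fraction of simulated same-size datasets whose minimum density
    is <= the observed minimum."""
    rng = random.Random(seed)
    observed_min = min(phi(x) for x in data)
    n = len(data)
    hits = sum(
        1 for _ in range(n_sim)
        if min(phi(rng.gauss(0.0, 1.0)) for _ in range(n)) <= observed_min
    )
    return hits / n_sim

rng = random.Random(42)
good = [rng.gauss(0.0, 1.0) for _ in range(50)]   # genuinely normal draws
bad = good[:-1] + [8.0]                            # one draw far in the tail
p_good = min_density_pvalue(good)
p_bad = min_density_pvalue(bad)
print(p_good, p_bad)
```

A cumulative-distribution-based test would smooth over the outlier; the density-based statistic flags it immediately.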

  10. Replacing leads by self-energies using non-equilibrium Green's functions

    International Nuclear Information System (INIS)

    Michael, Fredrick; Johnson, M.D.

    2003-01-01

    Open quantum systems consist of semi-infinite leads which transport electrons to and from the device of interest. We show here that within the non-equilibrium Green's function technique for continuum systems, the leads can be replaced by simple c-number self-energies. Our starting point is an approach for continuum systems developed by Feuchtwang. The reformulation developed here is simpler to understand and carry out than the somewhat unwieldy manipulations typical of the Feuchtwang method. The self-energies turn out to have a limited variability: the retarded self-energy Σ^r depends on the arbitrary choice of internal boundary conditions, but the non-equilibrium self-energy or scattering function Σ, which determines transport, is invariant for a broad class of boundary conditions. Expressed in terms of these self-energies, continuum non-equilibrium transport calculations take a particularly simple form similar to that developed for discrete systems.

  11. Nuclear energy density functional from chiral pion-nucleon dynamics revisited

    OpenAIRE

    Kaiser, N.; Weise, W.

    2009-01-01

    We use a recently improved density-matrix expansion to calculate the nuclear energy density functional in the framework of in-medium chiral perturbation theory. Our calculation treats systematically the effects from $1\\pi$-exchange, iterated $1\\pi$-exchange, and irreducible $2\\pi$-exchange with intermediate $\\Delta$-isobar excitations, including Pauli-blocking corrections up to three-loop order. We find that the effective nucleon mass $M^*(\\rho)$ entering the energy density functional is iden...

  12. The probability distribution of the delay time of a wave packet in strong overlap of resonance levels

    International Nuclear Information System (INIS)

    Lyuboshitz, V.L.

    1982-01-01

    The time development of nuclear reactions at a large density of levels is investigated using the theory of overlapping resonances. The analytical expression for the function describing the time-delay probability distribution of a wave packet is obtained in the framework of the model of n equivalent channels. It is shown that the relative fluctuation of the time delay at the stage of the compound nucleus is small. The possibility is discussed of increasing the duration of nuclear reactions with rising excitation energy.

  13. Some applications of the fractional Poisson probability distribution

    International Nuclear Information System (INIS)

    Laskin, Nick

    2009-01-01

    Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented. As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limit case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above listed developments and implementations turn into the well-known results of the quantum optics and the theory of combinatorial numbers.
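
In the classical limit (where the fractional Poisson distribution reduces to the ordinary one), the link between Stirling numbers of the second kind and Poisson moments is Touchard's formula E[X^m] = Σ_k S(m,k) λ^k, sketched below as an illustration of the combinatorial machinery.

```python
import math

def stirling2(m, k):
    """Stirling numbers of the second kind via the standard recurrence
    S(m, k) = k*S(m-1, k) + S(m-1, k-1)."""
    if m == k == 0:
        return 1
    if m == 0 or k == 0:
        return 0
    return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

def poisson_raw_moment(lam, m):
    """Touchard's formula: E[X^m] = sum_k S(m, k) * lam**k."""
    return sum(stirling2(m, k) * lam ** k for k in range(m + 1))

def poisson_raw_moment_direct(lam, m, nmax=100):
    """Direct (truncated) sum over the Poisson pmf, for cross-checking;
    the pmf term is computed in log space to avoid overflow."""
    return sum(n ** m * math.exp(-lam + n * math.log(lam) - math.lgamma(n + 1))
               for n in range(nmax + 1))

m3 = poisson_raw_moment(2.0, 3)
m3_direct = poisson_raw_moment_direct(2.0, 3)
print(m3, m3_direct)  # lam^3 + 3 lam^2 + lam = 22 for lam = 2
```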

  14. Evaluation of NEB energy markets and supply monitoring function

    International Nuclear Information System (INIS)

    2003-09-01

    Canada's National Energy Board regulates the exports of oil, natural gas, natural gas liquids and electricity. It also regulates the construction, operation and tolls of international and interprovincial pipelines and power lines. It also monitors energy supply and market developments in Canada. The Board commissioned an evaluation of the monitoring function to ensure the effectiveness and efficiency of the monitoring activities, to identify gaps in these activities and to propose recommendations. The objectives of the monitoring mandate are to provide Canadians with information regarding Canadian energy markets, energy supply and demand, and to ensure that exports of natural gas, oil, natural gas liquids and electricity do not occur to the detriment of Canadian energy users. The Board ensures that Canadians have access to domestically produced energy on terms that are as favourable as those available to export buyers. The following recommendations were proposed to improve the monitoring of energy markets and supply: (1) increase focus and analysis on the functioning of gas (first priority) and other commodity markets, (2) increase emphasis on forward-looking market analysis and issue identification, (3) demonstrate continued leadership by encouraging public dialogue on a wide range of energy market issues, (4) improve communication and increase visibility of the NEB within the stakeholder community, (5) build on knowledge management and organizational learning capabilities, (6) improve communication and sharing of information between the Applications and Commodities Business Units, and (7) enhance organizational effectiveness of the Commodities Business Unit. figs

  15. Interaction probability value calculi for some scintillators

    International Nuclear Information System (INIS)

    Garcia-Torano Martinez, E.; Grau Malonda, A.

    1989-01-01

    Interaction probabilities for 17 gamma-ray energies between 1 and 1000 keV have been computed and tabulated. The tables may be applied to the case of cylindrical vials with radius 1.25 cm and volumes of 5, 10 and 15 ml. Toluene, Toluene/Alcohol, Dioxane-Naphthalene, PCS, INSTAGEL and HISAFE II scintillators are considered. Graphical results for 10 ml are also given. (Author) 11 refs

  16. Probability distribution functions for ELM bursts in a series of JET tokamak discharges

    International Nuclear Information System (INIS)

    Greenhough, J; Chapman, S C; Dendy, R O; Ward, D J

    2003-01-01

    A novel statistical treatment of the full raw edge localized mode (ELM) signal from a series of previously studied JET plasmas is tested. The approach involves constructing probability distribution functions (PDFs) for ELM amplitudes and time separations, and quantifying the fit between the measured PDFs and model distributions (Gaussian, inverse exponential) and Poisson processes. Uncertainties inherent in the discreteness of the raw signal require the application of statistically rigorous techniques to distinguish ELM data points from background, and to extrapolate peak amplitudes. The accuracy of PDF construction is further constrained by the relatively small number of ELM bursts (several hundred) in each sample. In consequence the statistical technique is found to be difficult to apply to low frequency (typically Type I) ELMs, so the focus is narrowed to four JET plasmas with high frequency (typically Type III) ELMs. The results suggest that there may be several fundamentally different kinds of Type III ELMing process at work. It is concluded that this novel statistical treatment can be made to work, may have wider applications to ELM data, and has immediate practical value as an additional quantitative discriminant between classes of ELMing behaviour
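
A minimal version of the PDF-fitting step, assuming synthetic exponential waiting times in place of JET data: fit the rate by maximum likelihood and measure the Kolmogorov-Smirnov distance between the empirical distribution and the fitted model.

```python
import math
import random

def ks_distance_exponential(times):
    """Fit an exponential by maximum likelihood (rate = 1/mean) and return
    the Kolmogorov-Smirnov distance between the empirical CDF of the
    waiting times and the fitted exponential CDF."""
    rate = len(times) / sum(times)
    xs = sorted(times)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = 1.0 - math.exp(-rate * x)            # fitted CDF at x
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d

# Synthetic ELM-like waiting times (hypothetical, not JET measurements)
rng = random.Random(3)
waits = [rng.expovariate(40.0) for _ in range(1000)]
d = ks_distance_exponential(waits)
print(f"KS distance against fitted exponential: {d:.4f}")
```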

  17. Range and energy functions of interest in neutron dosimetry

    International Nuclear Information System (INIS)

    Bhatia, D.P.; Nagarajan, P.S.

    1978-01-01

    This report documents the energy and range functions generated and used in fast neutron interface dosimetry studies. The basic data of stopping power employed are the most recent. The present report covers a number of media mainly air, oxygen, nitrogen, polythene, graphite, bone and tissue, and a number of charged particles, namely protons, alphas, 9 Be, 11 B, 12 C, 13 C, 14 N and 16 O. These functions would be useful for generation of energy and range values for any of the above particles in any of the above media within +- 1% in any dosimetric calculations. (author)

  18. Lectures on probability and statistics

    International Nuclear Information System (INIS)

    Yost, G.P.

    1984-09-01

    These notes are based on a set of statistics lectures delivered at Imperial College to the first-year postgraduate students in High Energy Physics. They are designed for the professional experimental scientist. We begin with the fundamentals of probability theory, in which one makes statements about the set of possible outcomes of an experiment, based upon a complete a priori understanding of the experiment. For example, in a roll of a set of (fair) dice, one understands a priori that any given side of each die is equally likely to turn up. From that, we can calculate the probability of any specified outcome. We finish with the inverse problem, statistics. Here, one begins with a set of actual data (e.g., the outcomes of a number of rolls of the dice), and attempts to make inferences about the state of nature which gave those data (e.g., the likelihood of seeing any given side of any given die turn up). This is a much more difficult problem, of course, and one's solutions often turn out to be unsatisfactory in one respect or another
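
The dice example can be made concrete: with a complete a priori model of the experiment, probabilities follow by enumeration of the equally likely outcomes.

```python
from fractions import Fraction
from itertools import product

def prob_sum(total, dice=2, sides=6):
    """A priori probability that `dice` fair dice sum to `total`,
    by enumerating the equally likely outcomes."""
    outcomes = list(product(range(1, sides + 1), repeat=dice))
    hits = sum(1 for o in outcomes if sum(o) == total)
    return Fraction(hits, len(outcomes))

print(prob_sum(7))   # 6 of 36 outcomes: 1/6
print(prob_sum(12))  # only (6, 6): 1/36
```

The inverse, statistical problem — inferring the dice's fairness from observed rolls — has no such closed-form enumeration, which is the point of the lectures' second half.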

  19. Selected papers on analysis, probability, and statistics

    CERN Document Server

    Nomizu, Katsumi

    1994-01-01

    This book presents papers that originally appeared in the Japanese journal Sugaku. The papers fall into the general area of mathematical analysis as it pertains to probability and statistics, dynamical systems, differential equations and analytic function theory. Among the topics discussed are: stochastic differential equations, spectra of the Laplacian and Schrödinger operators, nonlinear partial differential equations which generate dissipative dynamical systems, fractal analysis on self-similar sets and the global structure of analytic functions.

  20. Many-body theory and Energy Density Functionals

    Energy Technology Data Exchange (ETDEWEB)

    Baldo, M. [INFN, Catania (Italy)

    2016-07-15

    In this paper a method is first presented to construct an Energy Density Functional on a microscopic basis. The approach is based on the Kohn-Sham method, where one introduces explicitly the Nuclear Matter Equation of State, which can be obtained by an accurate many-body calculation. In this way it connects the functional to the bare nucleon-nucleon interaction. It is shown that the resulting functional can perform as well as the best Gogny force functional. In the second part of the paper it is shown how one can go beyond the mean-field level and the difficulties that can appear. The method is based on the particle-vibration coupling scheme and a formalism is presented that can handle the correct use of the vibrational degrees of freedom within a microscopic approach. (orig.)

  1. On the ratio probability of the smallest eigenvalues in the Laguerre unitary ensemble

    Science.gov (United States)

    Atkin, Max R.; Charlier, Christophe; Zohren, Stefan

    2018-04-01

    We study the probability distribution of the ratio between the second smallest and smallest eigenvalue in the Laguerre unitary ensemble. The probability that this ratio is greater than r > 1 is expressed in terms of a Hankel determinant with a perturbed Laguerre weight. The limiting probability distribution for the ratio is found as an integral containing two functions q 1(x) and q 2(x). These functions satisfy a system of two coupled Painlevé V equations, which are derived from a Lax pair of a Riemann-Hilbert problem. We compute the asymptotic behaviours of these functions, as well as large n asymptotics for the associated Hankel determinants in several regimes of r and x.
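
The ratio statistic can be explored by direct Monte Carlo (a numerical illustration, not the Painlevé analysis of the paper), realizing LUE matrices as complex Wishart matrices A = G G† with G complex Ginibre:

```python
import numpy as np

def lue_ratio_tail(n=30, r=2.0, trials=2000, seed=0):
    """Monte Carlo estimate of P(lambda_2 / lambda_1 > r) for the two
    smallest eigenvalues of an n x n Laguerre unitary ensemble matrix."""
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(trials):
        g = (rng.standard_normal((n, n))
             + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
        ev = np.sort(np.linalg.eigvalsh(g @ g.conj().T))  # ascending, > 0
        ratios.append(ev[1] / ev[0])
    ratios = np.array(ratios)
    return (ratios > r).mean(), ratios

p_hat, ratios = lue_ratio_tail()
print(f"P(ratio > 2) ~ {p_hat:.3f}")
```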

  2. K-shell ionization probability in energetic nearly symmetric heavy-ion collisions

    International Nuclear Information System (INIS)

    Tserruya, I.; Schmidt-Boecking, H.; Schuch, R.

    1977-01-01

    Impact-parameter-dependent K x-ray emission probabilities for the projectile and target atoms have been measured in 35 MeV Cl on Cl, Cl on Ti and Cl on Ni collisions. The sum of projectile plus target K-shell ionization probability is taken as a measure of the total 2pσ ionization probability. The 2pπ-2pσ rotational coupling model is in clear disagreement with the present results. On the other hand, the sum of probabilities is reproduced both in shape and absolute magnitude by the statistical model for inner-shell ionization. The K-shell ionization probability of the higher-Z collision partner is well described by this model, including the 2pσ-1sσ vacancy-sharing probability calculated as a function of the impact parameter. (author)

  3. Application of probability generating function to the essentials of nondestructive nuclear materials assay system using neutron correlation

    International Nuclear Information System (INIS)

    Hosoma, Takashi

    2017-01-01

    In previous research (JAEA-Research 2015-009), the essentials of neutron multiplicity counting mathematics were reconsidered, taking into account experience gained at the Plutonium Conversion Development Facility, and formulae of the multiplicity distribution were algebraically derived up to septuplets using a probability generating function, as a strategic move for the future. The principle was reported by K. Böhnel in 1985, but such a high-order expansion is the first of its kind owing to its increasing complexity. In this research, characteristics of the high-order correlations were investigated. It was found that higher-order correlations increase rapidly in response to the increase of leakage multiplication, crossing and leaving lower-order correlations behind when leakage multiplication is > 1.3, a threshold that depends on detector efficiency and counter setting. In addition, fission rates and doubles count rates by fast neutrons and by thermal neutrons in a coexisting system were algebraically derived, again using a probability generating function. That principle was reported by I. Pázsit and L. Pál in 2012, but such a physical interpretation, i.e. associating their stochastic variables with fission rate, doubles count rate and leakage multiplication, is the first of its kind. From the Rossi-alpha combined distribution and the measured ratio of each area obtained by Differential Die-Away Self-Interrogation (DDSI) and conventional assay data, it is possible to estimate: the number of induced fissions per unit time by fast neutrons and by thermal neutrons; the number of induced fissions (< 1) by one source neutron; and the individual doubles count rates. During the research, a hypothesis introduced in their report was proved to be true. Provisional calculations were done for UO_2 of 1∼10 kgU containing ∼0.009 wt% 244Cm. (author)
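
The basic mechanics of the probability generating function approach can be sketched for a generic multiplicity distribution: factorial moments, from which singles and doubles rates are built, are derivatives of G(z) = Σ p_n z^n at z = 1. The binomial example below is purely illustrative, not the neutron physics of the report.

```python
from math import comb

def factorial_moment(pmf, order):
    """nu_r = sum_n n(n-1)...(n-r+1) p_n, i.e. the r-th derivative of the
    probability generating function G(z) = sum_n p_n z^n at z = 1."""
    total = 0.0
    for n, p in enumerate(pmf):
        term = 1.0
        for j in range(order):
            term *= (n - j)
        total += term * p
    return total

# Illustration: binomial(4, 0.3) multiplicity, G(z) = (0.7 + 0.3 z)^4,
# so G'(1) = 4*0.3 = 1.2 and G''(1) = 4*3*0.3^2 = 1.08.
pmf = [comb(4, n) * 0.3 ** n * 0.7 ** (4 - n) for n in range(5)]
singles = factorial_moment(pmf, 1)
doubles = factorial_moment(pmf, 2) / 2.0   # pairs, hence the factor 1/2
print(singles, doubles)
```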

  4. Characterization of a material by probability of linear scattering using effect of target thickness

    International Nuclear Information System (INIS)

    Nghiep, T.D.; Khai, N.T.; Cong, N.T.; Minh, D.T.N.

    2013-01-01

    We report on an experimental test with 662 keV gamma photons scattered from a set of samples of 6C, 13Al, 26Fe, 29Cu, 47Ag, 82Pb and stainless steel for determination of the probability of linear scattering, which can be used for characterization of a material. The results show that, for a given target and scattering angle, the effect of target thickness on gamma-photon scattering involves both single and multiple scattering, and that the scattered events increase exponentially with target thickness and saturate beyond certain thickness values. The experimental results correlate with the typical function of the energy transfer model. (author)

  5. On the method of logarithmic cumulants for parametric probability density function estimation.

    Science.gov (United States)

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
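
For the gamma family the MoLC equations are E[log X] = ψ(k) + log θ and Var[log X] = ψ'(k). The sketch below solves them using finite-difference digamma/trigamma approximations built from math.lgamma (an implementation convenience for a self-contained example, not part of the method itself).

```python
import math
import random

def digamma(x, h=1e-5):
    """Numerical digamma psi(x) via a central difference of lgamma."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def trigamma(x, h=1e-4):
    """Numerical trigamma psi'(x) via a second central difference."""
    return (math.lgamma(x + h) - 2.0 * math.lgamma(x)
            + math.lgamma(x - h)) / (h * h)

def molc_gamma(samples):
    """Method of logarithmic cumulants for the gamma distribution:
    match  E[log X] = psi(k) + log(theta)  and  Var[log X] = psi'(k).
    Solve psi'(k) = sample log-variance for the shape k by bisection."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    c1 = sum(logs) / n
    c2 = sum((v - c1) ** 2 for v in logs) / n
    lo, hi = 1e-3, 1e3                 # trigamma is decreasing in k
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if trigamma(mid) > c2:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    theta = math.exp(c1 - digamma(k))
    return k, theta

rng = random.Random(5)
data = [rng.gammavariate(3.0, 2.0) for _ in range(20000)]  # shape 3, scale 2
k_hat, theta_hat = molc_gamma(data)
print(k_hat, theta_hat)
```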

  6. Non-Archimedean Probability

    NARCIS (Netherlands)

    Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned probability zero.

  7. Protein single-model quality assessment by feature-based probability density functions.

    Science.gov (United States)

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob.

  8. Handbook of probability

    CERN Document Server

    Florescu, Ionut

    2013-01-01

    THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introductio

  9. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Summary Background Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
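
The nearest-neighbour variant can be sketched in a few lines: the estimated probability at a point is the fraction of ones among its k nearest training points. The data and parameters below are synthetic illustrations, not the appendicitis or Pima benchmarks of the paper.

```python
import math
import random

def knn_probability(x0, xs, ys, k=100):
    """Nearest-neighbour probability machine: estimate P(Y = 1 | X = x0)
    as the fraction of ones among the k nearest training points."""
    order = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))
    return sum(ys[i] for i in order[:k]) / k

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Synthetic data with known true probability P(Y = 1 | x) = sigmoid(x)
rng = random.Random(11)
xs = [rng.uniform(-4.0, 4.0) for _ in range(5000)]
ys = [1 if rng.random() < sigmoid(x) else 0 for x in xs]
est = knn_probability(2.0, xs, ys)
print(f"estimate at x=2: {est:.3f} (true {sigmoid(2.0):.3f})")
```

Consistency here means the estimate converges to the true conditional probability as the sample grows and k grows suitably with it.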

  10. The Effect of High Frequency Pulse on the Discharge Probability in Micro EDM

    Science.gov (United States)

    Liu, Y.; Qu, Y.; Zhang, W.; Ma, F.; Sha, Z.; Wang, Y.; Rolfe, B.; Zhang, S.

    2017-12-01

    High frequency pulses improve the machining efficiency of micro electric discharge machining (micro EDM), while also changing the micro EDM process. This paper focuses on the influence of the skin-effect under high frequency pulses on energy distribution and transmission in micro EDM, on the basis of which the rules governing the discharge probability on the electrode end face are also analysed. Starting from the electrical discharge process under high frequency pulse conditions in micro EDM, COMSOL Multiphysics software is used to establish an energy transmission model of the micro electrode. The discharge energy distribution and transmission within the tool electrode under different pulse frequencies, electrical currents, and permeabilities are studied in order to obtain the distribution of current density and electric field intensity on the electrode end face as the electrical parameters change. The electric field intensity distribution is regarded as the parameter governing the discharge probability on the electrode end face. Finally, MATLAB is used to fit the curves and obtain the distribution of discharge probability on the electrode end face.
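
The skin-effect scale at issue is set by the classical skin depth δ = sqrt(2/(ωμσ)). The material values below are illustrative, not those of the study.

```python
import math

def skin_depth(freq_hz, conductivity, mu_r=1.0):
    """Classical skin depth delta = sqrt(2 / (omega * mu * sigma))."""
    mu0 = 4.0e-7 * math.pi            # vacuum permeability, H/m
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * mu0 * conductivity))

# A copper electrode (sigma ~ 5.8e7 S/m) under a 1 MHz pulse train:
delta = skin_depth(1.0e6, 5.8e7)
print(f"skin depth ~ {delta * 1e6:.1f} micrometres")
```

At megahertz pulse frequencies the current is confined to tens of micrometres below the surface, which is why the end-face field distribution dominates the discharge probability.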

  11. COULN, a program for evaluating negative energy Coulomb functions

    International Nuclear Information System (INIS)

    Noble, C.J.; Thompson, I.J.

    1984-01-01

    Program COULN calculates exponentially decaying Whittaker functions W_(k,μ)(z), corresponding to negative-energy Coulomb functions. The method employed is most appropriate for parameter ranges which commonly occur in atomic and molecular asymptotic scattering problems using a close-coupling approximation in the presence of closed channels. (orig.)
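
One stand-alone way to evaluate such functions (a sketch for orientation, not the algorithm used by COULN) is the standard integral representation of W_(k,μ)(z), valid for μ − k + 1/2 > 0; singular endpoints (μ − k − 1/2 < 0) would need a further substitution.

```python
import math

def whittaker_w(k, mu, z, upper=60.0, steps=6000):
    """Whittaker W_{k,mu}(z) from its integral representation
        W = exp(-z/2) z^k / Gamma(mu-k+1/2)
            * int_0^inf t^(mu-k-1/2) (1 + t/z)^(mu+k-1/2) exp(-t) dt,
    evaluated with composite Simpson quadrature on [0, upper]."""
    a = mu - k - 0.5          # power of t (assumed >= 0 in this sketch)
    b = mu + k - 0.5          # power of (1 + t/z)
    def f(t):
        if t == 0.0:
            return 1.0 if a == 0.0 else 0.0
        return t ** a * (1.0 + t / z) ** b * math.exp(-t)
    h = upper / steps         # steps must be even for Simpson's rule
    s = f(0.0) + f(upper)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    integral = s * h / 3.0
    return math.exp(-0.5 * z) * z ** k / math.gamma(mu - k + 0.5) * integral

# Checks against closed forms: W_{0,1/2}(z) = exp(-z/2)
# and W_{0,3/2}(z) = exp(-z/2) * (1 + 2/z).
w1 = whittaker_w(0.0, 0.5, 2.0)
w2 = whittaker_w(0.0, 1.5, 2.0)
print(w1, math.exp(-1.0))
print(w2, 2.0 * math.exp(-1.0))
```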

  12. Casimir energies in M4 × S^N for even N. Green's-function and zeta-function techniques

    International Nuclear Information System (INIS)

    Kantowski, R.; Milton, K.A.

    1987-01-01

    The Green's-function technique developed in the first paper in this series is generalized to apply to massive scalar, vector, second-order tensor, and Dirac spinor fields, as a preliminary to a full graviton calculation. The Casimir energies are of the form u_Casimir = (1/a^4)[α_N ln(a/b) + β_N], where N (even) is the dimension of the internal sphere, a is its radius, and b^(-1) is an ultraviolet cutoff (presumably at the Planck scale). The coefficient of the divergent logarithm, α_N, is unambiguously obtained for each field considered. The Green's-function technique gives rise to no difficulties in the evaluation of imaginary-mass-mode contributions to the Casimir energy. In addition, a new, simplified zeta-function technique is presented which is very easily implemented by symbolic programs, and which, of course, gives the same results. An error in a previous zeta-function calculation of the Casimir energy for even N is pointed out.

  13. Proximity approach to study fusion probabilities in heavy-ion collisions

    International Nuclear Information System (INIS)

    Raj Kumari

    2013-01-01

    The fusion cross-sections at sub-barrier energies are found to be enhanced compared to the predictions of the barrier penetration model. The aim is to test Bass 80, Aage Winther (AW) 95, Denisov DP, Proximity 2010 and the Skyrme Energy Density Formalism (SEDF) at energies above as well as below the barrier height. For the present systematic study, the fusion probabilities for the reactions ²⁸Si + ²⁴,²⁶Mg, ³⁰Si + ²⁴Mg and ²⁸,³⁰Si + ⁵⁸,⁶²Ni have been calculated
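
For orientation, the barrier penetration picture referred to above is often evaluated with Wong's approximation to the fusion cross-section, σ(E) = (ħω R_B²/2E) ln[1 + exp(2π(E − V_B)/ħω)], which tends to the classical πR_B²(1 − V_B/E) well above the barrier. The sketch below uses purely illustrative barrier parameters, not values from any of the cited potentials:

```python
import math

def wong_xs(E, Vb, hw, Rb):
    """Wong's approximation to the fusion cross-section [fm^2].
    E, Vb (barrier height), hw (barrier curvature) in MeV; Rb in fm."""
    return (hw * Rb**2) / (2.0 * E) * math.log1p(math.exp(2.0 * math.pi * (E - Vb) / hw))

# Illustrative barrier parameters only (not fitted to any real system)
Vb, hw, Rb = 28.0, 3.5, 8.0
for E in (24.0, 28.0, 36.0):  # below, at, and above the barrier
    print(f"E = {E} MeV -> sigma = {wong_xs(E, Vb, hw, Rb):.3f} fm^2")
```

The smooth exponential tail below V_B is exactly the sub-barrier region where measured cross-sections are found to exceed such one-dimensional barrier-penetration estimates.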

  14. A Cellular Perspective on Brain Energy Metabolism and Functional Imaging

    KAUST Repository

    Magistretti, Pierre J.; Allaman, Igor

    2015-01-01

    The energy demands of the brain are high: they account for at least 20% of the body's energy consumption. Evolutionary studies indicate that the emergence of higher cognitive functions in humans is associated with an increased glucose utilization

  15. Ab initio derivation of model energy density functionals

    International Nuclear Information System (INIS)

    Dobaczewski, Jacek

    2016-01-01

    I propose a simple and manageable method that allows for deriving coupling constants of model energy density functionals (EDFs) directly from ab initio calculations performed for finite fermion systems. A proof-of-principle application allows for linking properties of finite nuclei, determined by using the nuclear nonlocal Gogny functional, to the coupling constants of the quasilocal Skyrme functional. The method does not rely on properties of infinite fermion systems but on the ab initio calculations in finite systems. It also allows for quantifying merits of different model EDFs in describing the ab initio results. (letter)

  16. Jump probabilities in the non-Markovian quantum jump method

    International Nuclear Information System (INIS)

    Haerkoenen, Kari

    2010-01-01

    The dynamics of a non-Markovian open quantum system described by a general time-local master equation is studied. The propagation of the density operator is constructed in terms of two processes: (i) deterministic evolution and (ii) evolution of a probability density functional in the projective Hilbert space. The analysis provides a derivation for the jump probabilities used in the recently developed non-Markovian quantum jump (NMQJ) method (Piilo et al 2008 Phys. Rev. Lett. 100 180402).

  17. Forecasting of future earthquakes in the northeast region of India considering energy released concept

    Science.gov (United States)

    Zarola, Amit; Sil, Arjun

    2018-04-01

    This study presents forecasting of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and an updated earthquake catalog of magnitude Mw ≥ 6.0 events that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and the respective model parameters. The first is the probability that the seismic energy (e × 10²⁰ ergs) expected to be released in the future earthquake exceeds a certain level of seismic energy (E × 10²⁰ ergs). The second is the probability that the seismic energy (a × 10²⁰ ergs/year) expected to be released per year exceeds a certain level of seismic energy per year (A × 10²⁰ ergs/year). The log-likelihood functions (ln L) were also estimated for all four probability distribution models; a higher value of ln L indicates a better model and a lower value a worse one. The time of the future earthquake is forecasted by dividing the total seismic energy expected to be released in the future earthquake by the total seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), 13 April 2016 Myanmar earthquake (M 6.9) and 24 August 2016 Myanmar earthquake (M 6.8) are located in zones Z.12, Z.16 and Z.15, respectively, which are identified seismic source zones in the study area, showing that the proposed techniques and models yield good forecasting accuracy.
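
The model-comparison step described above, fitting the four candidate distributions to a catalog and ranking them by ln L, then forming a conditional probability for the next event, can be sketched with scipy. The catalog here is synthetic and all variable names are ours, not the authors':

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical inter-event times (years) standing in for the Mw >= 6.0 catalog
times = rng.weibull(1.4, 60) * 2.5

candidates = {
    "Gamma": stats.gamma,
    "Lognormal": stats.lognorm,
    "Weibull": stats.weibull_min,
    "Log-logistic": stats.fisk,
}
for name, dist in candidates.items():
    params = dist.fit(times, floc=0)           # MLE with location pinned at 0
    lnL = np.sum(dist.logpdf(times, *params))  # higher ln L => better model
    print(f"{name:12s} lnL = {lnL:.2f}")

# Conditional probability that the next event occurs within t years,
# given that tau years have already elapsed: [F(tau+t) - F(tau)] / [1 - F(tau)]
best = stats.weibull_min
p = best.fit(times, floc=0)
tau, t = 1.0, 2.0
cond = (best.cdf(tau + t, *p) - best.cdf(tau, *p)) / best.sf(tau, *p)
print(f"P(next event within {t} yr | {tau} yr elapsed) = {cond:.3f}")
```

The same hazard-style conditioning applies whether the random variable is an inter-event time or, as in the paper, a released seismic energy.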

  18. Characteristics of the probability function for three random-walk models of reaction--diffusion processes

    International Nuclear Information System (INIS)

    Musho, M.K.; Kozak, J.J.

    1984-01-01

    A method is presented for calculating exactly the relative width (⟨σ²⟩)^(1/2)/⟨n⟩, the skewness γ₁, and the kurtosis γ₂ characterizing the probability distribution function for three random-walk models of diffusion-controlled processes. For processes in which a diffusing coreactant A reacts irreversibly with a target molecule B situated at a reaction center, three models are considered. The first is the traditional one of an unbiased, nearest-neighbor random walk on a d-dimensional periodic/confining lattice with traps; the second involves the consideration of unbiased, non-nearest-neighbor (i.e., variable-step-length) walks on the same d-dimensional lattice; and the third deals with the case of a biased, nearest-neighbor walk on a d-dimensional lattice (wherein a walker experiences a potential centered at the deep trap site of the lattice). Our method, which has been described in detail elsewhere [P.A. Politowicz and J.J. Kozak, Phys. Rev. B 28, 5549 (1983)], is based on the use of group-theoretic arguments within the framework of the theory of finite Markov processes

  19. Parisian ruin probability for spectrally negative Lévy processes

    OpenAIRE

    Ronnie Loeffen; Irmina Czarna; Zbigniew Palmowski

    2011-01-01

    In this note we give, for a spectrally negative Lévy process, a compact formula for the Parisian ruin probability, which is defined by the probability that the process exhibits an excursion below zero, with a length that exceeds a certain fixed period $r$. The formula involves only the scale function of the spectrally negative Lévy process and the distribution of the process at time $r$.

  20. Sustainable Energy Consumption in Northeast Asia: A Case from China’s Fuel Oil Futures Market

    Directory of Open Access Journals (Sweden)

    Chi Zhang

    2018-01-01

    Full Text Available The sustainable energy consumption in northeast Asia has a huge impact on regional stability and economic growth, which gives price volatility research in the energy market both theoretical value and practical application. We select China’s fuel oil futures market as a research subject and use recurrence interval analysis to investigate the price volatility pattern at different thresholds. We utilize the stretched exponential function to fit the pattern of the recurrence intervals of price fluctuations and find that the probability density functions of the recurrence intervals at different thresholds do not show scaling behavior. The conditional probability density function and detrended fluctuation analysis then show that there is short-term and long-term correlation. Last, we use a hazard function to introduce the recurrence intervals into the value at risk (VaR) calculation and establish a functional relationship between the mean recurrence interval and the threshold. Following this result, we also shed light on policy discussion for hedgers and government.
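
Recurrence interval analysis of the kind described, thresholding a return series, collecting the gaps between successive exceedances, and fitting a stretched exponential to their PDF, can be sketched as follows. The returns are synthetic and every parameter is illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
# Synthetic heavy-tailed "returns" standing in for the fuel-oil futures series
returns = rng.standard_t(df=4, size=20000)

# Recurrence intervals: gaps between successive exceedances of a threshold q
q = 2.0  # threshold in units of the sample standard deviation
idx = np.flatnonzero(np.abs(returns) > q * returns.std())
intervals = np.diff(idx)

# Stretched exponential ansatz for the interval PDF: f(x) = a*exp(-(x/b)**beta)
def stretched_exp(x, a, b, beta):
    return a * np.exp(-(x / b) ** beta)

hist, edges = np.histogram(intervals, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
(a, b, beta), _ = curve_fit(stretched_exp, centers[mask], hist[mask],
                            p0=(hist[0], intervals.mean(), 0.7),
                            bounds=([0.0, 1e-3, 0.05], [np.inf, np.inf, 5.0]))
print(f"fitted stretching exponent beta = {beta:.2f}")
```

Repeating the fit for several thresholds q and comparing the fitted (b, beta) pairs is the scaling test the abstract refers to.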

  1. Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.

    Science.gov (United States)

    Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo

    2017-10-01

    This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses the learning automaton scheme to generate an action probability distribution based on his/her private information so as to maximize his/her own averaged utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function is proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
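
A learning automaton of the general kind invoked here maintains an action-probability vector and nudges it according to observed payoffs. The linear reward-inaction (L_R-I) rule below is a standard textbook scheme shown only as a stand-in for the paper's algorithm; the payoffs and step size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def lri_update(p, chosen, reward, lam=0.05):
    """Linear reward-inaction update: reinforce the chosen action in
    proportion to the (normalized) reward; probabilities stay summed to 1."""
    p = p.copy()
    p[chosen] += lam * reward * (1.0 - p[chosen])
    others = np.arange(len(p)) != chosen
    p[others] -= lam * reward * p[others]
    return p

# Toy single-player setting: three trading actions with mean payoffs in [0, 1]
mean_payoff = np.array([0.3, 0.8, 0.5])
p = np.ones(3) / 3
for _ in range(3000):
    a = rng.choice(3, p=p)
    r = np.clip(mean_payoff[a] + 0.1 * rng.standard_normal(), 0.0, 1.0)
    p = lri_update(p, a, r)
# p remains a valid distribution; mass typically drifts to the best action
print("action probabilities:", np.round(p, 3))
```

In the paper's game the reward would come from the constrained trading utility of each player rather than a fixed payoff vector, and the analysis concerns convergence of the joint strategy profile to the mixed-strategy NE.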

  2. The role of dual-energy computed tomography in the assessment of pulmonary function

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Hye Jeon [Department of Radiology, Hallym University College of Medicine, Hallym University Sacred Heart Hospital, 22, Gwanpyeong-ro 170beon-gil, Dongan-gu, Anyang-si, Gyeonggi-do 431-796 (Korea, Republic of); Hoffman, Eric A. [Departments of Radiology, Medicine, and Biomedical Engineering, University of Iowa, 200 Hawkins Dr, CC 701 GH, Iowa City, IA 52241 (United States); Lee, Chang Hyun; Goo, Jin Mo [Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Levin, David L. [Department of Radiology, Mayo Clinic College of Medicine, 200 First Street, SW, Rochester, MN 55905 (United States); Kauczor, Hans-Ulrich [Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (DZL), Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Seo, Joon Beom, E-mail: seojb@amc.seoul.kr [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 388-1, Pungnap 2-dong, Songpa-ku, Seoul, 05505 (Korea, Republic of)

    2017-01-15

    Highlights: • The dual-energy CT technique enables the differentiation of contrast materials with a material decomposition algorithm. • Pulmonary functional information can be evaluated using dual-energy CT simultaneously with anatomic CT information. • Pulmonary functional information from dual-energy CT can improve the diagnosis and severity assessment of diseases. - Abstract: The assessment of pulmonary function, including ventilation and perfusion status, is important in addition to the evaluation of structural changes of the lung parenchyma in various pulmonary diseases. The dual-energy computed tomography (DECT) technique can provide pulmonary functional information and high-resolution anatomic information simultaneously. The application of DECT to the evaluation of pulmonary function has been investigated in various pulmonary diseases, such as pulmonary embolism, asthma, and chronic obstructive lung disease. In this review article, we present the principles and technical aspects of DECT, along with clinical applications for the assessment of pulmonary function in various lung diseases.

  3. Balance Function in High-Energy Collisions

    International Nuclear Information System (INIS)

    Tawfik, A.; Shalaby, Asmaa G.

    2015-01-01

    Aspects and implications of balance functions (BF) in high-energy physics are reviewed. The various calculations and measurements depending on different quantities, for example system size, collision centrality, and beam energy, are discussed. First, the different definitions, including their advantages and even shortcomings, are highlighted. It is found that BF, which are mainly presented in terms of relative rapidity and relative azimuthal and invariant relative momentum, are sensitive to the interaction centrality but not to the beam energy, and can be used in estimating the hadronization time and the hadron-quark phase transition. Furthermore, the quark chemistry can be determined. The chemical evolution of the new state of matter, the quark-gluon plasma, and its temporal-spatial evolution, the femtoscopy of two-particle correlations, are accessible. The production time of a positive-negative pair of charges can be determined from the widths of BF. Due to the reduction in the diffusion time, narrowed widths refer to delayed hadronization. It is concluded that BF are powerful tools for characterizing the hadron-quark phase transition and estimating some essential properties

  4. Event Discrimination Using Seismoacoustic Catalog Probabilities

    Science.gov (United States)

    Albert, S.; Arrowsmith, S.; Bowman, D.; Downey, N.; Koch, C.

    2017-12-01

    Presented here are three seismoacoustic catalogs from various years and locations throughout Utah and New Mexico. To create these catalogs, we combine seismic and acoustic events detected and located using different algorithms. Seismoacoustic events are formed based on similarity of origin time and location. Following seismoacoustic fusion, the data is compared against ground truth events. Each catalog contains events originating from both natural and anthropogenic sources. By creating these seismoacoustic catalogs, we show that the fusion of seismic and acoustic data leads to a better understanding of the nature of individual events. The probability of an event being a surface blast given its presence in each seismoacoustic catalog is quantified. We use these probabilities to discriminate between events from natural and anthropogenic sources. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
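
Forming seismoacoustic events "based on similarity of origin time and location" can be sketched with a simple tolerance rule. The thresholds and event fields below are hypothetical, not the catalogs' actual association criteria:

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float    # origin time [s]
    lat: float  # latitude [deg]
    lon: float  # longitude [deg]

def fuse(seismic, acoustic, dt_max=30.0, dx_max=0.2):
    """Pair seismic and acoustic detections whose origin times and
    locations agree within the given tolerances (a hypothetical rule)."""
    pairs = []
    for s in seismic:
        for a in acoustic:
            if (abs(s.t - a.t) <= dt_max
                    and abs(s.lat - a.lat) <= dx_max
                    and abs(s.lon - a.lon) <= dx_max):
                pairs.append((s, a))
    return pairs

seis = [Event(100.0, 38.5, -112.1), Event(500.0, 34.1, -106.9)]
acou = [Event(112.0, 38.6, -112.0), Event(900.0, 35.0, -107.5)]
print(len(fuse(seis, acou)))  # -> 1: only the first pair matches
```

Counting how often ground-truth surface blasts land in the fused catalog, versus in the seismic-only or acoustic-only catalogs, gives the conditional probabilities used for discrimination.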

  5. Complex-energy approach to sum rules within nuclear density functional theory

    Science.gov (United States)

    Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; Olsen, Erik

    2015-04-01

    Background: The linear response of the nucleus to an external field contains unique information about the effective interaction, the correlations governing the behavior of the many-body system, and the properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or the nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. Purpose: To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation (QRPA). Methods: To compute sum rules, we carry out contour integration of the response function in the complex-energy plane. We benchmark our results against the conventional matrix formulation of the QRPA theory, the Thouless theorem for the energy-weighted sum rule, and the dielectric theorem for the inverse-energy-weighted sum rule. Results: We derive the sum-rule expressions from the contour integration of the complex-energy FAM. We demonstrate that calculated sum-rule values agree with those obtained from the matrix formulation of the QRPA. We also discuss the applicability of both the Thouless theorem about the energy-weighted sum rule and the dielectric theorem for the inverse-energy-weighted sum rule to nuclear density functional theory in cases when the EDF is not based on a Hamiltonian. Conclusions: The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. 
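
The contour-integration idea is easy to illustrate on a toy response function with a few discrete poles: integrating z^k R(z) around a contour enclosing the positive-energy poles recovers the k-weighted sum rule, including the inverse-energy-weighted case as long as z = 0 stays outside the contour. A sketch with hypothetical pole positions and strengths (not a FAM/QRPA calculation):

```python
import numpy as np

# Hypothetical discrete strength distribution standing in for QRPA poles
E = np.array([2.0, 5.5, 9.0])   # excitation energies
S = np.array([1.0, 3.0, 0.5])   # transition strengths

def response(z):
    """Pole expansion R(z) = sum_n S_n / (z - E_n), poles on the real axis."""
    return np.sum(S / (z[..., None] - E), axis=-1)

def sum_rule(k, center=6.0, radius=5.0, npts=2000):
    """m_k = (1/2*pi*i) * contour integral of z^k R(z) dz.
    The circular contour encloses all poles E_n but not z = 0,
    so k = -1 (inverse-energy weighting) is also accessible."""
    theta = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / npts)
    return np.sum(z ** k * response(z) * dz) / (2j * np.pi)

for k in (0, 1, -1):
    direct = np.sum(S * E ** float(k))
    print(f"m_{k}: contour = {sum_rule(k).real:.6f}, direct = {direct:.6f}")
```

Because the integrand is smooth and periodic on the contour, the trapezoidal rule converges exponentially, which is part of what makes the contour formulation attractive for large-scale surveys.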

  6. Generic Degraded Configuration Probability Analysis for the Codisposal Waste Package

    International Nuclear Information System (INIS)

    S.F.A. Deng; M. Saglam; L.J. Gratton

    2001-01-01

    In accordance with the technical work plan, "Technical Work Plan For: Department of Energy Spent Nuclear Fuel Work Packages" (CRWMS M and O 2000c), this Analysis/Model Report (AMR) is developed for the purpose of screening out degraded configurations for U.S. Department of Energy (DOE) spent nuclear fuel (SNF) types. It performs the degraded configuration parameter and probability evaluations of the overall methodology specified in the "Disposal Criticality Analysis Methodology Topical Report" (YMP 2000, Section 3) for qualifying configurations. Degradation analyses are performed to assess realizable parameter ranges and physical regimes for configurations. Probability calculations are then performed for configurations characterized by k_eff in excess of the Critical Limit (CL). The scope of this document is to develop a generic set of screening criteria or models to screen out degraded configurations having potential for exceeding a criticality limit. The developed screening criteria include arguments based on physical/chemical processes and probability calculations, and apply to DOE SNF types when codisposed with the high-level waste (HLW) glass inside a waste package. The degradation takes place inside the waste package, long after repository licensing has expired. The emphasis of this AMR is on degraded configuration screening, and the probability analysis is one of the approaches used for screening. The intended use of the model is to apply the developed screening criteria to each DOE SNF type following the completion of the degraded-mode criticality analysis internal to the waste package

  7. Quantum Zeno and anti-Zeno effects measured by transition probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Wenxian, E-mail: wxzhang@whu.edu.cn [School of Physics and Technology, Wuhan University, Wuhan, Hubei 430072 (China); Department of Optical Science and Engineering, Fudan University, Shanghai 200433 (China); CEMS, RIKEN, Saitama 351-0198 (Japan); Kavli Institute for Theoretical Physics China, CAS, Beijing 100190 (China); Kofman, A.G. [CEMS, RIKEN, Saitama 351-0198 (Japan); Department of Physics, The University of Michigan, Ann Arbor, MI 48109-1040 (United States); Zhuang, Jun [Department of Optical Science and Engineering, Fudan University, Shanghai 200433 (China); You, J.Q. [Beijing Computational Science Research Center, Beijing 10084 (China); Department of Physics, Fudan University, Shanghai 200433 (China); CEMS, RIKEN, Saitama 351-0198 (Japan); Nori, Franco [CEMS, RIKEN, Saitama 351-0198 (Japan); Department of Physics, The University of Michigan, Ann Arbor, MI 48109-1040 (United States)

    2013-10-30

    Using numerical calculations, we compare the transition probabilities of many spins in random magnetic fields, subject to either frequent projective measurements, frequent phase modulations, or a mix of modulations and measurements. For various distribution functions, we find the transition probability under frequent modulations is suppressed most if the pulse delay is short and the evolution time is larger than a critical value. Furthermore, decay freezing occurs only under frequent modulations as the pulse delay approaches zero. In the large pulse-delay region, however, the transition probabilities under frequent modulations are highest among the three control methods.

  8. Comment on 'Kinetic energy as a density functional'

    International Nuclear Information System (INIS)

    Holas, A.; March, N.H.

    2002-01-01

    In a recent paper, Nesbet [Phys. Rev. A 65, 010502(R) (2001)] has proposed dropping "the widespread but unjustified assumption that the existence of a ground-state density functional for the kinetic energy, T_s[ρ], of an N-electron system implies the existence of a density-functional derivative, δT_s[ρ]/δρ(r), equivalent to a local potential function," because, according to his arguments, this derivative "has the mathematical character of a linear operator that acts on orbital wave functions". Our Comment demonstrates that the statement called by Nesbet an "unjustified assumption" happens, in fact, to be a rigorously proven theorem. Therefore, his previous conclusions stemming from his different view of this derivative, which undermined the foundations of density-functional theory, can be discounted

  9. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre; C. R. Martins

    2006-11-01

    Full Text Available In this article, I show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as it has been observed.

  10. Janus-faced probability

    CERN Document Server

    Rocchi, Paolo

    2014-01-01

    The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems in order to prove that the various interpretations of probability do not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered as one of the most important contributions in the analysis of probability interpretation in the last 10-15 years.

  11. On the low-energy behavior of the Adler function

    International Nuclear Information System (INIS)

    Nesterenko, A.V.

    2009-01-01

    The infrared behavior of the Adler function is examined by making use of a recently derived integral representation for the latter. The obtained result for the Adler function agrees with its experimental prediction in the entire energy range. The inclusive τ lepton decay is studied in the framework of the developed approach

  12. Of energy and the economy. Theory and evidence of their functional relationships

    Energy Technology Data Exchange (ETDEWEB)

    Chang, V.

    2007-07-01

    The author of the contribution under consideration offers a set of explicit functional relationships that link energy and the economy. Although reliance on energy permeates the whole economy, no such complete set of relationships had been presented before. The relevant questions are: (a) How related are energy and the economy? (b) What role does energy play in economic growth? To address these, the author theorizes the role of energy and then tests it with economic models, using data from 16 OECD countries from 1980 to 2001. The main results are the following: (a) Energy is a cross-country representative good whose prices are equalized when converted to a reference currency; thus, energy prices satisfy purchasing power parity. For all but one country, the half-life of the real exchange rate is less than a year and as low as six months, shorter than those derived by other real exchange rate measures. (b) Considering energy a cross-time representative good, a country's utility function is inversely proportional to both its income share of energy and its energy price. The author obtains an explicit, unified two-dimensional (cross countries and time) production function with energy and non-energy as the two inputs. (c) The author derives a cross-country parity relationship for income shares of energy, similar to that for energy prices, and provides an intertemporal connection between the trajectory of the income share of energy and the productivity growth of the economy. (d) The author demonstrates the tradeoffs between energy efficiency and economic wellbeing, with the energy price being the medium of the tradeoffs.

  13. Impact of MCNP unresolved resonance probability-table treatment on uranium and plutonium benchmarks

    International Nuclear Information System (INIS)

    Mosteller, R.D.; Little, R.C.

    1998-01-01

    Versions of MCNP up through and including 4B have not accurately modeled neutron self-shielding effects in the unresolved resonance energy region. Recently, a probability-table treatment has been incorporated into a developmental version of MCNP. This paper presents MCNP results for a variety of uranium and plutonium critical benchmarks, calculated with and without the probability-table treatment

  14. Prediction of accident sequence probabilities in a nuclear power plant due to earthquake events

    International Nuclear Information System (INIS)

    Hudson, J.M.; Collins, J.D.

    1980-01-01

    This paper presents a methodology to predict accident probabilities in nuclear power plants subject to earthquakes. The resulting computer program accesses response data to compute component failure probabilities using fragility functions. Using logical failure definitions for systems, and the calculated component failure probabilities, initiating event and safety system failure probabilities are synthesized. The incorporation of accident sequence expressions allows the calculation of terminal event probabilities. Accident sequences, with their occurrence probabilities, are finally coupled to a specific release category. A unique aspect of the methodology is an analytical procedure for calculating top event probabilities based on the correlated failure of primary events
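
Computing component failure probabilities from fragility functions and combining them through logical failure definitions can be sketched as follows. The lognormal fragility form is a common convention in seismic PRA, but the component capacities and the independence assumption here are purely illustrative (the paper itself treats correlated failures):

```python
from math import log, sqrt, erf

def fragility(a, median, beta):
    """Lognormal fragility: P(component failure | peak ground accel a),
    with median capacity `median` [g] and logarithmic std `beta`."""
    return 0.5 * (1.0 + erf(log(a / median) / (beta * sqrt(2.0))))

a = 0.5  # peak ground acceleration [g] from the response analysis (hypothetical)
# Hypothetical components: (median capacity [g], beta)
p_pump  = fragility(a, 1.2, 0.5)
p_valve = fragility(a, 0.9, 0.4)
p_tank  = fragility(a, 2.0, 0.6)

# Logical failure definition: system fails if the tank fails OR
# both components of the redundant train fail (independence assumed)
p_train  = p_pump * p_valve
p_system = p_train + p_tank - p_train * p_tank
print(f"P(system failure | a = {a} g) = {p_system:.4f}")
```

Accident sequence probabilities then follow by combining such system failure probabilities with initiating event frequencies along each sequence expression.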

  15. Probability, Statistics, and Stochastic Processes

    CERN Document Server

    Olofsson, Peter

    2012-01-01

    This book provides a unique and balanced approach to probability, statistics, and stochastic processes.   Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area.  The Second Edition features new coverage of analysis of variance (ANOVA), consistency and efficiency of estimators, asymptotic theory for maximum likelihood estimators, empirical distribution function and the Kolmogorov-Smirnov test, general linear models, multiple comparisons, Markov chain Monte Carlo (MCMC), Brownian motion, martingales, and

  16. Interpretations of probability

    CERN Document Server

    Khrennikov, Andrei

    2009-01-01

    This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.

  17. Rydberg energies using excited state density functional theory

    International Nuclear Information System (INIS)

    Cheng, C.-L.; Wu Qin; Van Voorhis, Troy

    2008-01-01

    We utilize excited state density functional theory (eDFT) to study Rydberg states in atoms. We show both analytically and numerically that semilocal functionals can give quite reasonable Rydberg energies from eDFT, even in cases where time dependent density functional theory (TDDFT) fails catastrophically. We trace these findings to the fact that in eDFT the Kohn-Sham potential for each state is computed using the appropriate excited state density. Unlike the ground state potential, which typically falls off exponentially, the sequence of excited state potentials has a component that falls off polynomially with distance, leading to a Rydberg-type series. We also address the rigorous basis of eDFT for these systems. Perdew and Levy have shown using the constrained search formalism that every stationary density corresponds, in principle, to an exact stationary state of the full many-body Hamiltonian. In the present context, this means that the excited state DFT solutions are rigorous as long as they deliver the minimum noninteracting kinetic energy for the given density. We use optimized effective potential techniques to show that, in some cases, the eDFT Rydberg solutions appear to deliver the minimum kinetic energy because the associated density is not pure state v-representable. We thus find that eDFT plays a complementary role to constrained DFT: The former works only if the excited state density is not the ground state of some potential while the latter applies only when the density is a ground state density.

  18. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions.

    Science.gov (United States)

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions (PDFs) is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of the diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  19. On the probability density interpretation of smoothed Wigner functions

    International Nuclear Information System (INIS)

    De Aguiar, M.A.M.; Ozorio de Almeida, A.M.

    1990-01-01

    It has been conjectured that the averages of the Wigner function over phase space volumes, larger than those of minimum uncertainty, are always positive. This is true for Gaussian averaging, so that the Husimi distribution is positive. However, we provide a specific counterexample for the averaging with a discontinuous hat function. The analysis of the specific system of a one-dimensional particle in a box also elucidates the respective advantages of the Wigner and the Husimi functions for the study of the semiclassical limit. The falsification of the averaging conjecture is shown not to depend on the discontinuities of the hat function, by considering the latter as the limit of a sequence of analytic functions. (author)

  20. Degenerate RS perturbation theory. [Rayleigh-Schroedinger energies and wave functions

    Science.gov (United States)

    Hirschfelder, J. O.; Certain, P. R.

    1974-01-01

    A concise, systematic procedure is given for determining the Rayleigh-Schroedinger energies and wave functions of degenerate states to arbitrarily high orders even when the degeneracies of the various states are resolved in arbitrary orders. The procedure is expressed in terms of an iterative cycle in which the energy through the (2n + 1)-th order is expressed in terms of the partially determined wave function through the n-th order. Both a direct and an operator derivation are given. The two approaches are equivalent and can be transcribed into each other. The direct approach deals with the wave functions (without the use of formal operators) and has the advantage that it resembles the usual treatment of nondegenerate perturbations and maintains close contact with the basic physics. In the operator approach, the wave functions are expressed in terms of infinite-order operators which are determined by the successive resolution of the space of the zeroth-order functions.

  1. Probability theory and statistical applications a profound treatise for self-study

    CERN Document Server

    Zörnig, Peter

    2016-01-01

    This accessible and easy-to-read book provides many examples to illustrate diverse topics in probability and statistics, from initial concepts up to advanced calculations. Special attention is devoted, for example, to independence of events, inequalities in probability, and functions of random variables. The book is directed to students of mathematics, statistics, engineering, and other quantitative sciences.

  2. Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe

    International Nuclear Information System (INIS)

    Isaacson, J.A.; Canizares, C.R.

    1989-01-01

    Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973), which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986), which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux. 17 references
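The role of Poisson statistics mentioned above has a compact form: if lenses along a line of sight are Poisson-distributed with mean count (optical depth) τ, the probability of at least one lensing event is 1 − e^(−τ). A small sketch under that standard assumption (the function names are ours, not from the paper):

```python
import math

def p_at_least_one_lens(tau):
    """P(line of sight is lensed at least once) for Poisson-distributed
    point-mass lenses with optical depth (mean lens count) tau."""
    return 1.0 - math.exp(-tau)

def p_exactly_k_lenses(tau, k):
    """Poisson probability of exactly k lenses along the line of sight."""
    return math.exp(-tau) * tau**k / math.factorial(k)
```

For τ ≪ 1, 1 − e^(−τ) ≈ τ, which is why the optical depth itself is often quoted directly as a lensing probability.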

  3. Correlation energy functional within the GW -RPA: Exact forms, approximate forms, and challenges

    Science.gov (United States)

    Ismail-Beigi, Sohrab

    2010-05-01

    In principle, the Luttinger-Ward Green’s-function formalism allows one to compute simultaneously the total energy and the quasiparticle band structure of a many-body electronic system from first principles. We present approximate and exact expressions for the correlation energy within the GW -random-phase approximation that are more amenable to computation and allow for developing efficient approximations to the self-energy operator and correlation energy. The exact form is a sum over differences between plasmon and interband energies. The approximate forms are based on summing over screened interband transitions. We also demonstrate that blind extremization of such functionals leads to unphysical results: imposing physical constraints on the allowed solutions (Green’s functions) is necessary. Finally, we present some relevant numerical results for atomic systems.

  4. Gravity and count probabilities in an expanding universe

    Science.gov (United States)

    Bouchet, Francois R.; Hernquist, Lars

    1992-01-01

    The time evolution of nonlinear clustering on large scales in cold dark matter, hot dark matter, and white noise models of the universe is investigated using N-body simulations performed with a tree code. Count probabilities in cubic cells are determined as functions of the cell size and the clustering state (redshift), and comparisons are made with various theoretical models. We isolate the features that appear to be the result of gravitational instability, those that depend on the initial conditions, and those that are likely a consequence of numerical limitations. More specifically, we study the development of skewness, kurtosis, and the fifth moment in relation to variance, the dependence of the void probability on time as well as on sparseness of sampling, and the overall shape of the count probability distribution. Implications of our results for theoretical and observational studies are discussed.
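Count probabilities in cubic cells, including the void probability and the higher moments of the counts, can be computed directly from a point catalogue. The following sketch contrasts unclustered (Poisson) points with artificially clustered points; the data and parameter values are synthetic illustrations, not simulation output from the paper:

```python
import random
import statistics

def count_in_cells_stats(points, box=1.0, ncells=10):
    """Count points in an ncells x ncells x ncells grid of cubic cells and
    return (variance, skewness, void probability) of the cell counts."""
    counts = [0] * ncells**3
    for (x, y, z) in points:
        i = min(int(x / box * ncells), ncells - 1)
        j = min(int(y / box * ncells), ncells - 1)
        k = min(int(z / box * ncells), ncells - 1)
        counts[(i * ncells + j) * ncells + k] += 1
    mean = statistics.fmean(counts)
    var = statistics.pvariance(counts)
    skew = (sum((c - mean) ** 3 for c in counts) / len(counts)) / var**1.5 if var else 0.0
    p_void = counts.count(0) / len(counts)   # fraction of empty cells
    return var, skew, p_void

# hypothetical example: unclustered (Poisson) versus clustered point sets
rng = random.Random(1)
poisson = [(rng.random(), rng.random(), rng.random()) for _ in range(5000)]
centers = [(rng.random(), rng.random(), rng.random()) for _ in range(50)]
clustered = [(min(max(cx + rng.gauss(0, 0.02), 0.0), 0.999),
              min(max(cy + rng.gauss(0, 0.02), 0.0), 0.999),
              min(max(cz + rng.gauss(0, 0.02), 0.0), 0.999))
             for (cx, cy, cz) in centers for _ in range(100)]
```

At a fixed mean count, clustering inflates both the variance of the counts and the void probability relative to the Poisson case, which is the kind of signature the study tracks as a function of cell size and redshift.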

  5. A stochastic-bayesian model for the fracture probability of PWR pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Francisco, Alexandre S.; Duran, Jorge Alberto R., E-mail: afrancisco@metal.eeimvr.uff.br, E-mail: duran@metal.eeimvr.uff.br [Universidade Federal Fluminense (UFF), Volta Redonda, RJ (Brazil). Dept. de Engenharia Mecanica

    2013-07-01

    The fracture probability of pressure vessels containing cracks can be obtained by easily understood methodologies that apply a deterministic treatment complemented by statistical methods. However, when more accurate results are required, the methodologies need to be better formulated. This paper presents a new methodology to address this problem. First, a more rigorous methodology is obtained from the relationship between the probability distributions that model crack incidence and nondestructive inspection efficiency, using Bayes' theorem. The result is an updated crack incidence distribution. The accuracy of the methodology is further improved by using a stochastic model for crack growth. The stochastic model incorporates the statistical variability of the crack growth process, combining stochastic theory with experimental data. Stochastic differential equations are derived by randomizing empirical equations. From the solution of these equations, a distribution function for crack growth is derived. The fracture probability computed from both probability distribution functions is in agreement with theory and yields realistic values for pressure vessels. (author)

  6. A stochastic-bayesian model for the fracture probability of PWR pressure vessels

    International Nuclear Information System (INIS)

    Francisco, Alexandre S.; Duran, Jorge Alberto R.

    2013-01-01

    The fracture probability of pressure vessels containing cracks can be obtained by easily understood methodologies that apply a deterministic treatment complemented by statistical methods. However, when more accurate results are required, the methodologies need to be better formulated. This paper presents a new methodology to address this problem. First, a more rigorous methodology is obtained from the relationship between the probability distributions that model crack incidence and nondestructive inspection efficiency, using Bayes' theorem. The result is an updated crack incidence distribution. The accuracy of the methodology is further improved by using a stochastic model for crack growth. The stochastic model incorporates the statistical variability of the crack growth process, combining stochastic theory with experimental data. Stochastic differential equations are derived by randomizing empirical equations. From the solution of these equations, a distribution function for crack growth is derived. The fracture probability computed from both probability distribution functions is in agreement with theory and yields realistic values for pressure vessels. (author)
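The randomized crack-growth step can be illustrated with the simplest stochastic differential equation of this type, a geometric-Brownian-motion-like growth law whose per-cycle update has a closed form. All coefficients below are hypothetical placeholders, not the empirical values used by the authors:

```python
import math
import random

def fracture_probability(a0=1.0, a_crit=10.0, c=0.02, sigma=0.1,
                         n_cycles=400, n_samples=1000, seed=0):
    """Monte Carlo estimate of the fracture probability from a randomized
    crack-growth law da = c*a dN + sigma*a dW (geometric-Brownian-motion-like).
    A sample fails if the crack length exceeds a_crit within n_cycles."""
    rng = random.Random(seed)
    log_crit = math.log(a_crit / a0)
    failures = 0
    for _ in range(n_samples):
        log_a = 0.0
        for _ in range(n_cycles):
            # exact one-cycle update of log(a) for geometric Brownian motion
            log_a += (c - 0.5 * sigma**2) + sigma * rng.gauss(0.0, 1.0)
            if log_a >= log_crit:
                failures += 1
                break
    return failures / n_samples
```

The fraction of sample paths that cross the critical crack length plays the role of the fracture probability obtained from the crack-growth distribution function in the paper.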

  7. Prestack inversion based on anisotropic Markov random field-maximum posterior probability inversion and its application to identify shale gas sweet spots

    Science.gov (United States)

    Wang, Kang-Ning; Sun, Zan-Dong; Dong, Ning

    2015-12-01

    Economic shale gas production requires hydraulic fracture stimulation to increase the formation permeability. Hydraulic fracturing strongly depends on geomechanical parameters such as Young's modulus and Poisson's ratio. Fracture-prone sweet spots can be predicted by prestack inversion, which is an ill-posed problem; thus, regularization is needed to obtain unique and stable solutions. To characterize gas-bearing shale sedimentary bodies, elastic parameter variations are regarded as an anisotropic Markov random field. Bayesian statistics are adopted to transform prestack inversion into a maximum posterior probability problem. Two energy functions, for the lateral and vertical directions, are used to describe the distribution, and the expectation-maximization algorithm is used to estimate the hyperparameters of the prior probability of the elastic parameters. Finally, the inversion yields clear geological boundaries, high vertical resolution, and reasonable lateral continuity using the conjugate gradient method to minimize the objective function. The noise robustness and imaging ability of the method were tested using synthetic and real data.

  8. Influence of the Probability Level on the Framing Effect

    Directory of Open Access Journals (Sweden)

    Kaja Damnjanovic

    2016-11-01

    Research on the framing effect in risky choice has mostly used tasks that examine the effect of only one probability or risk level on the choice between non-risky and risky options. The present research examined the framing effect as a function of the probability level of the risky option's outcome in three decision-making domains: health, money, and human lives. It confirmed that the decision-making domain moderates the framing effect. In the monetary domain, the general risk aversion registered in earlier research was confirmed. At high probability levels, the framing effect is registered in both frames, while no framing effect is registered at lower probability levels. In the domain of decisions about human lives, the framing effect is registered at medium-high and medium-low probability levels. In the domain of decisions about health, the framing effect is registered across almost the entire probability range, and this domain differs from the former two. The results show that the attitude to risk is not identical at different probability levels, that the dynamics of the attitude to risk influence the framing effect, and that the framing-effect pattern differs across decision-making domains. In other words, the linguistic manipulation that constitutes the frame affects the preference order only when the possibility of gain (expressed as probability) is estimated as sufficiently high.

  9. Functionally graded biomimetic energy absorption concept development for transportation systems.

    Science.gov (United States)

    2014-02-01

    Mechanics of a functionally graded cylinder subject to static or dynamic axial loading is considered, including a potential application as an energy absorber. The mass density and stiffness are power functions of the radial coordinate as may be the case...

  10. Image Fusion Based on the \(\Delta^{-1}\)-\(TV_0\) Energy Function

    Directory of Open Access Journals (Sweden)

    Qiwei Xie

    2014-11-01

    This article proposes a \(\Delta^{-1}\)-\(TV_0\) energy function to fuse a multi-spectral image with a panchromatic image. The proposed energy function consists of two components: a \(TV_0\) component and a \(\Delta^{-1}\) component. The \(TV_0\) term uses a sparsity prior to increase the detailed spatial information, while the \(\Delta^{-1}\) term removes the block effect of the multi-spectral image. Furthermore, as the proposed energy function is non-convex, we adopt an alternating minimization algorithm and \(L_0\) gradient minimization to solve it. Experimental results demonstrate the improved performance of the proposed method over existing methods.

  11. Experimental parameterization of an energy function for the simulation of unfolded proteins

    DEFF Research Database (Denmark)

    Norgaard, A.B.; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, K.

    2008-01-01

    The determination of conformational preferences in unfolded and disordered proteins is an important challenge in structural biology. We here describe an algorithm to optimize energy functions for the simulation of unfolded proteins. The procedure is based on the maximum likelihood principle and can be applied to a range of experimental data and energy functions, including the force fields used in molecular dynamics simulations.

  12. Energy perspectives of France by 2020-2050. Technological evolutions

    International Nuclear Information System (INIS)

    2007-09-01

    The different technologies in the research and development phase concerning energy production or storage are examined and presented as a function of their probability of emergence at the industrial level: projects expected to reach the market within a planned timeframe, projects based on known technologies that should appear but at an unpredictable date, and possible projects based on new technologies. All energy types, from fossil fuels to renewable energies, are covered. (A.L.B.)

  13. Protein distance constraints predicted by neural networks and probability density functions

    DEFF Research Database (Denmark)

    Lund, Ole; Frimand, Kenneth; Gorodkin, Jan

    1997-01-01

    We predict interatomic C-α distances by two independent data-driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the latter consists of a neural network prediction approach equipped with windows taking ... A method based on the predicted distances is presented. A homepage with software, predictions and data related to this paper is available at http://www.cbs.dtu.dk/services/CPHmodels/

  14. N-Level Quantum Systems and Legendre Functions

    OpenAIRE

    Mazurenko, A. S.; Savva, V. A.

    2001-01-01

    An excitation dynamics of new quantum systems of N equidistant energy levels in a monochromatic field has been investigated. To obtain exact analytical solutions of dynamic equations an analytical method based on orthogonal functions of a real argument has been proposed. Using the orthogonal Legendre functions we have found an exact analytical expression for a population probability amplitude of the level n. Various initial conditions for the excitation of N-level quantum systems have been co...

  15. Density functional theory calculations of the lowest energy quintet and triplet states of model hemes: role of functional, basis set, and zero-point energy corrections.

    Science.gov (United States)

    Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman

    2008-04-24

    We investigated the effect of several computational variables, including the choice of the basis set, application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and B97-1, OLYP, and TPSS functionals with 6-31G and 6-31G* basis sets. Only hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of the basis set caused up to 2.7 kcal/mol variation of the quintet-triplet electronic energy gap (ΔE_el), in several cases resulting in the inversion of the sign of ΔE_el. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311++G(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state if the electronic energy of the triplet state was approximately 1 kcal/mol less than that of the quintet. Within a given model chemistry, effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on ΔE_el were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with small 6-31G and 6-31G* basis sets. Deviation of the computed frequency of the Fe-Im stretching mode from the experimental value with the basis set decreased in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions. Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results

  16. Analytical potential energy function for the Br + H2 system

    International Nuclear Information System (INIS)

    Kurosaki, Yuzuru

    2001-01-01

    Analytical functions with a many-body expansion for the ground and first-excited-state potential energy surfaces for the Br + H2 system are newly presented in this work. These functions describe the abstraction and exchange reactions qualitatively well, although it has been found that the function for the ground-state potential surface is still quantitatively unsatisfactory. (author)

  17. Bounds for Tail Probabilities of the Sample Variance

    Directory of Open Access Journals (Sweden)

    Van Zuijlen M

    2009-01-01

    We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in mind in auditing as well as in processing data related to the environment.
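The paper's Hoeffding-function bounds are not reproduced here, but the object they control, the tail probability of the sample variance, is easy to estimate by Monte Carlo and to compare against a cruder Chebyshev bound. The sketch below assumes i.i.d. standard normal data, for which Var(S²) = 2σ⁴/(n − 1); the parameter values are illustrative:

```python
import random
import statistics

def empirical_tail_vs_chebyshev(n=20, t=0.8, trials=20000, seed=0):
    """Compare the empirical tail probability P(|S^2 - sigma^2| >= t) of the
    unbiased sample variance of n i.i.d. N(0,1) draws with the Chebyshev
    bound. For normal data Var(S^2) = 2*sigma**4/(n-1), so the bound is
    2 / ((n-1) * t**2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        s2 = statistics.variance(sample)   # unbiased sample variance
        if abs(s2 - 1.0) >= t:
            hits += 1
    empirical = hits / trials
    chebyshev = 2.0 / ((n - 1) * t * t)
    return empirical, chebyshev
```

A sharper bound, such as the Hoeffding-function bounds of the paper, would sit between the Chebyshev value and the empirical tail.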

  18. Equation satisfied by electron-electron mutual Coulomb repulsion energy density functional

    OpenAIRE

    Joubert, Daniel P.

    2011-01-01

    The electron-electron mutual Coulomb repulsion energy density functional satisfies an equation that links functionals and functional derivatives at N-electron and (N-1)-electron densities for densities determined from the same adiabatic scaled external potential for the N-electron system.

  19. Flux continuity and probability conservation in complexified Bohmian mechanics

    International Nuclear Information System (INIS)

    Poirier, Bill

    2008-01-01

    Recent years have seen increased interest in complexified Bohmian mechanical trajectory calculations for quantum systems as both a pedagogical and computational tool. In the latter context, it is essential that trajectories satisfy probability conservation to ensure they are always guided to where they are most needed. We consider probability conservation for complexified Bohmian trajectories. The analysis relies on time-reversal symmetry considerations, leading to a generalized expression for the conjugation of wave functions of complexified variables. This in turn enables meaningful discussion of complexified flux continuity, which turns out not to be satisfied in general, though a related property is found to be true. The main conclusion, though, is that even under a weak interpretation, probability is not conserved along complex Bohmian trajectories

  20. Dimensional oscillation. A fast variation of energy embedding gives good results with the AMBER potential energy function.

    Science.gov (United States)

    Snow, M E; Crippen, G M

    1991-08-01

    The structure of the AMBER potential energy surface of the cyclic tetrapeptide cyclotetrasarcosyl is analyzed as a function of the dimensionality of coordinate space. It is found that the number of local energy minima decreases as the dimensionality of the space increases until some limit at which point equipotential subspaces appear. The applicability of energy embedding methods to finding global energy minima in this type of energy-conformation space is explored. Dimensional oscillation, a computationally fast variant of energy embedding is introduced and found to sample conformation space widely and to do a good job of finding global and near-global energy minima.

  1. Gap probability - Measurements and models of a pecan orchard

    Science.gov (United States)

    Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI

    1992-01-01

    Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs with a 50° by 135° view angle made under the canopy, looking upwards at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown-level parameters include the shape of the crown envelope and the spacing of crowns; leaf-level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.
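Gap probability models of this kind typically reduce, at the leaf level, to a Beer-Lambert (Poisson) attenuation law. A minimal sketch of that standard form; the G-function value and the functional form are textbook assumptions, not the authors' calibrated crown-plus-leaf model:

```python
import math

def gap_probability(lai, zenith_deg, g=0.5):
    """Beer-Lambert (Poisson) gap probability through a canopy:
    P_gap = exp(-G * LAI / cos(theta)), where G is the mean projection of
    unit leaf area toward the view direction (0.5 for a spherical
    leaf-angle distribution) and theta is the view zenith angle."""
    mu = math.cos(math.radians(zenith_deg))
    return math.exp(-g * lai / mu)
```

The gap probability falls with leaf area index and with view zenith angle, since slanted paths traverse more foliage.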

  2. Influence of the Probability Level on the Framing Effect

    OpenAIRE

    Kaja Damnjanovic; Vasilije Gvozdenovic

    2016-01-01

    Research of the framing effect of risky choice mostly applies to the tasks where the effect of only one probability or risk level on the choice of non-risky or risky options was examined. The conducted research was aimed to examine the framing effect in the function of probability level in the outcome of a risk option in three decision-making domains: health, money and human lives. It has been confirmed that the decision-making domain moderates the framing effect. In the monetary domain, the ...

  3. A scenario analysis of future energy systems based on an energy flow model represented as functionals of technology options

    International Nuclear Information System (INIS)

    Kikuchi, Yasunori; Kimura, Seiichiro; Okamoto, Yoshitaka; Koyama, Michihisa

    2014-01-01

    Highlights: • Energy flow model was represented as the functionals of technology options. • Relationships among available technologies can be visualized by developed model. • Technology roadmapping can be incorporated into the model as technical scenario. • Combination of technologies can increase their contribution to the environment. - Abstract: The design of energy systems has become an issue all over the world. A single optimal system cannot be suggested because the availability of infrastructure and resources and the acceptability of the system should be discussed locally, involving all related stakeholders in the energy system. In particular, researchers and engineers of technologies related to energy systems should be able to perform the forecasting and roadmapping of future energy systems and indicate quantitative results of scenario analyses. We report an energy flow model developed for analysing scenarios of future Japanese energy systems implementing a variety of feasible technology options. The model was modularized and represented as functionals of appropriate technology options, which enables the aggregation and disaggregation of energy systems by defining functionals for single technologies, packages integrating multi-technologies, and mini-systems such as regions implementing industrial symbiosis. Based on the model, the combinations of technologies on both energy supply and demand sides can be addressed considering not only the societal scenarios such as resource prices, economic growth and population change but also the technical scenarios including the development and penetration of energy-related technologies such as distributed solid oxide fuel cells in residential sectors and new-generation vehicles, and the replacement and shift of current technologies such as heat pumps for air conditioning and centralized power generation. The developed model consists of two main modules; namely, a power generation dispatching module for the

  4. Energy level alignment and quantum conductance of functionalized metal-molecule junctions

    DEFF Research Database (Denmark)

    Jin, Chengjun; Strange, Mikkel; Markussen, Troels

    2013-01-01

    We study the effect of functional groups (CH3*4, OCH3, CH3, Cl, CN, F*4) on the electronic transport properties of 1,4-benzenediamine molecular junctions using the non-equilibrium Green function method. Exchange and correlation effects are included at various levels of theory, namely density functional theory (DFT), energy level-corrected DFT (DFT+Σ), Hartree-Fock and the many-body GW approximation. All methods reproduce the expected trends for the energy of the frontier orbitals according to the electron donating or withdrawing character of the substituent group. However, only the GW method predicts the correct ordering of the conductance amongst the molecules. The absolute GW (DFT) conductance is within a factor of two (three) of the experimental values. Correcting the DFT orbital energies by a simple physically motivated scissors operator, Σ, can bring the DFT conductances close...

  5. On approximation and energy estimates for delta 6-convex functions.

    Science.gov (United States)

    Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid

    2018-01-01

    The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted [Formula: see text]-norm.

  6. Methods for estimating drought streamflow probabilities for Virginia streams

    Science.gov (United States)

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
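The logistic-regression machinery can be sketched end to end: fit P(summer drought) as a logistic function of winter streamflow and read off probabilities months in advance. The data below are synthetic and the single-predictor model is a simplification of the report's basin-specific equations:

```python
import math
import random

def fit_logistic(x, y, lr=0.1, epochs=2000):
    """Fit P(drought) = 1/(1+exp(-(a + b*x))) by gradient descent on the
    negative log-likelihood; x = winter streamflow, y = 1 if a summer
    drought-flow threshold was crossed."""
    a, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        ga = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += (p - yi) / n
            gb += (p - yi) * xi / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# hypothetical synthetic data: low winter flows tend to precede summer droughts
rng = random.Random(7)
winter = [rng.uniform(0.0, 4.0) for _ in range(300)]
drought = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(2.0 - 1.5 * x))) else 0
           for x in winter]
a_hat, b_hat = fit_logistic(winter, drought)
```

A negative fitted slope means lower winter flows imply a higher probability of crossing the summer drought threshold, which is the forecasting relationship the report's equations encode per basin.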

  7. Improving Ranking Using Quantum Probability

    OpenAIRE

    Melucci, Massimo

    2011-01-01

    The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability provided a given probability of ...

  8. Magnetic field effects on the quantum wire energy spectrum and Green's function

    International Nuclear Information System (INIS)

    Morgenstern Horing, Norman J.

    2010-01-01

    We analyze the energy spectrum and propagation of electrons in a quantum wire on a 2D host medium in a normal magnetic field, representing the wire by a 1D Dirac delta function potential which would support just a single subband state in the absence of the magnetic field. The associated Schroedinger Green's function for the quantum wire is derived in closed form in terms of known functions and the Landau quantized subband energy spectrum is examined.
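In the zero-field limit, the 1D attractive delta-function potential used to represent the wire supports exactly one bound state with a closed-form energy and wavefunction. This is a textbook result, not the paper's magnetic-field treatment; units are ℏ = m = 1 and the helper names are ours:

```python
import math

def delta_wire_bound_state(alpha):
    """Single bound state of V(x) = -alpha * delta(x) (units hbar = m = 1):
    energy E = -alpha**2 / 2 and normalized wavefunction
    psi(x) = sqrt(kappa) * exp(-kappa * |x|) with kappa = alpha."""
    kappa = alpha
    energy = -0.5 * alpha**2
    psi = lambda x: math.sqrt(kappa) * math.exp(-kappa * abs(x))
    return energy, psi

def norm_sq(psi, half_width=40.0, n=40000):
    """Midpoint-rule check that the wavefunction is normalized to 1."""
    h = 2 * half_width / n
    return sum(psi(-half_width + (i + 0.5) * h) ** 2 for i in range(n)) * h
```

The Landau-quantized subband spectrum derived in the record reduces to this single level as the magnetic field is switched off.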

  9. Probability density fittings of corrosion test-data: Implications on ...

    Indian Academy of Sciences (India)

    Steel-reinforced concrete; probability distribution functions; corrosion ... to be present in the corrosive system at a suitable concentration (Holoway et al 2004; Söylev & ..... voltage, equivalent to voltage drop, across a resistor divided by the ...

  10. KIDS Nuclear Energy Density Functional: 1st Application in Nuclei

    Science.gov (United States)

    Gil, Hana; Papakonstantinou, Panagiota; Hyun, Chang Ho; Oh, Yongseok

    We apply the KIDS (Korea: IBS-Daegu-Sungkyunkwan) nuclear energy density functional model, which is based on the Fermi momentum expansion, to the study of properties of lj-closed nuclei. The parameters of the model are determined by the nuclear properties at the saturation density and theoretical calculations on pure neutron matter. For applying the model to the study of nuclei, we rely on the Skyrme force model, where the Skyrme force parameters are determined through the KIDS energy density functional. Solving Hartree-Fock equations, we obtain the energies per particle and charge radii of closed magic nuclei, namely, 16O, 28O, 40Ca, 48Ca, 60Ca, 90Zr, 132Sn, and 208Pb. The results are compared with the observed data, and further improvement of the model is briefly discussed.

  11. Generic Degraded Configuration Probability Analysis for DOE Codisposal Waste Package

    Energy Technology Data Exchange (ETDEWEB)

    S.F.A. Deng; M. Saglam; L.J. Gratton

    2001-05-23

    In accordance with the technical work plan, ''Technical Work Plan For: Department of Energy Spent Nuclear Fuel Work Packages'' (CRWMS M&O 2000c), this Analysis/Model Report (AMR) is developed for the purpose of screening out degraded configurations for U.S. Department of Energy (DOE) spent nuclear fuel (SNF) types. It performs the degraded configuration parameter and probability evaluations of the overall methodology specified in the ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2000, Section 3) to qualifying configurations. Degradation analyses are performed to assess realizable parameter ranges and physical regimes for configurations. Probability calculations are then performed for configurations characterized by k_eff in excess of the Critical Limit (CL). The scope of this document is to develop a generic set of screening criteria or models to screen out degraded configurations having potential for exceeding a criticality limit. The developed screening criteria include arguments based on physical/chemical processes and probability calculations and apply to DOE SNF types when codisposed with the high-level waste (HLW) glass inside a waste package. The degradation takes place inside the waste package, long after repository licensing has expired. The emphasis of this AMR is on degraded configuration screening, and the probability analysis is one of the approaches used for screening. The intended use of the model is to apply the developed screening criteria to each DOE SNF type following the completion of the degraded mode criticality analysis internal to the waste package.

  12. Gamma-Ray interaction probabilities for some liquid scintillators

    International Nuclear Information System (INIS)

    Garcia-Torano Martinez, E.; Grau Malonda, A.

    1989-01-01

    Interaction probabilities for 17 gamma-ray energies between 1 and 1000 keV have been computed and tabulated. The tables apply to cylindrical vials with a radius of 1.25 cm and volumes of 5, 10 and 15 ml. Toluene, Toluene/Alcohol, Dioxane-Naphthalene, PCS, INSTAGEL and HISAFE II scintillators are considered. Graphical results for 10 ml are also given. (Author)
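For a single path length, the interaction probability underlying such tables is 1 − e^(−μx), with μ the linear attenuation coefficient at the given energy. A sketch with a hypothetical μ; real tables like those in the record additionally average over the source-vial geometry:

```python
import math

def interaction_probability(mu_cm, path_cm):
    """Probability that a gamma ray interacts while crossing path_cm of
    material with linear attenuation coefficient mu_cm (per cm):
    P = 1 - exp(-mu * x)."""
    return 1.0 - math.exp(-mu_cm * path_cm)
```

The probability rises monotonically with both the attenuation coefficient and the path length, saturating at 1 for thick absorbers.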

  13. Free energy functionals for polarization fluctuations: Pekar factor revisited.

    Science.gov (United States)

    Dinpajooh, Mohammadhasan; Newton, Marshall D; Matyushov, Dmitry V

    2017-02-14

    The separation of slow nuclear and fast electronic polarization in problems related to electron mobility in polarizable media was considered by Pekar 70 years ago. Within dielectric continuum models, this separation leads to the Pekar factor in the free energy of solvation by the nuclear degrees of freedom. The main qualitative prediction of Pekar's perspective is a significant, by about a factor of two, drop of the nuclear solvation free energy compared to the total (electronic plus nuclear) free energy of solvation. The Pekar factor enters the solvent reorganization energy of electron transfer reactions and is a significant mechanistic parameter accounting for the solvent effect on electron transfer. Here, we study the separation of the fast and slow polarization modes in polar molecular liquids (polarizable dipolar liquids and polarizable water force fields) without relying on the continuum approximation. We derive the nonlocal free energy functional and use atomistic numerical simulations to obtain nonlocal, reciprocal space electronic and nuclear susceptibilities. A consistent transition to the continuum limit is introduced by extrapolating the results of finite-size numerical simulation to zero wavevector. The continuum nuclear susceptibility extracted from the simulations is numerically close to the Pekar factor. However, we derive a new functionality involving the static and high-frequency dielectric constants. The main distinction of our approach from the traditional theories is found in the solvation free energy due to the nuclear polarization: the anticipated significant drop of its magnitude with increasing liquid polarizability does not occur. The reorganization energy of electron transfer is either nearly constant with increasing the solvent polarizability and the corresponding high-frequency dielectric constant (polarizable dipolar liquids) or actually noticeably increases (polarizable force fields of water).
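    In the continuum limit the Pekar factor is c_P = 1/ε_∞ − 1/ε_s, where ε_∞ and ε_s are the high-frequency and static dielectric constants. A short sketch (with approximate textbook values for water, used purely as an illustration) shows the roughly factor-of-two drop of the nuclear part relative to the total solvation response:

    ```python
    def pekar_factor(eps_inf, eps_s):
        """Continuum Pekar factor c_P = 1/eps_inf - 1/eps_s (nuclear response)."""
        return 1.0 / eps_inf - 1.0 / eps_s

    def total_response(eps_s):
        """Total (electronic plus nuclear) continuum response factor 1 - 1/eps_s."""
        return 1.0 - 1.0 / eps_s

    # Approximate textbook values for water: eps_inf ~ 1.78, eps_s ~ 78.
    c_p = pekar_factor(1.78, 78.0)
    ratio = c_p / total_response(78.0)   # nuclear share of the total response
    ```

    With these values the nuclear share is about 0.55 of the total, i.e. roughly the factor-of-two reduction that the continuum Pekar picture predicts and that the atomistic results of this paper call into question.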

  14. Free energy functionals for polarization fluctuations: Pekar factor revisited

    International Nuclear Information System (INIS)

    Dinpajooh, Mohammadhasan; Newton, Marshall D.; Matyushov, Dmitry V.

    2017-01-01

    The separation of slow nuclear and fast electronic polarization in problems related to electron mobility in polarizable media was considered by Pekar 70 years ago. This separation leads to the Pekar factor in the free energy of solvation by the nuclear degrees of freedom, within dielectric continuum models. The main qualitative prediction of Pekar’s perspective is a significant, by about a factor of two, drop of the nuclear solvation free energy compared to the total (electronic plus nuclear) free energy of solvation. The Pekar factor enters the solvent reorganization energy of electron transfer reactions and is a significant mechanistic parameter accounting for the solvent effect on electron transfer. We study the separation of the fast and slow polarization modes in polar molecular liquids (polarizable dipolar liquids and polarizable water force fields) without relying on the continuum approximation. We derive the nonlocal free energy functional and use atomistic numerical simulations to obtain nonlocal, reciprocal space electronic and nuclear susceptibilities. A consistent transition to the continuum limit is introduced by extrapolating the results of finite-size numerical simulation to zero wavevector. The continuum nuclear susceptibility extracted from the simulations is numerically close to the Pekar factor. However, we derive a new functionality involving the static and high-frequency dielectric constants. The main distinction of our approach from the traditional theories is found in the solvation free energy due to the nuclear polarization: the anticipated significant drop of its magnitude with increasing liquid polarizability does not occur. The reorganization energy of electron transfer is either nearly constant with increasing the solvent polarizability and the corresponding high-frequency dielectric constant (polarizable dipolar liquids) or actually noticeably increases (polarizable force fields of water).

  15. Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.

    Science.gov (United States)

    Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor

    2015-11-01

    Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration exceeding regulatory thresholds. However, these maps are obtained from only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables and do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in southern Spain and the advantages of the compositional cokriging approach were demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.
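    The key point, that a discrete probability vector is a composition, can be illustrated with a log-ratio transform: interpolating in the transformed space and back-transforming always yields components in (0,1) that sum to one. A minimal sketch of the additive log-ratio (alr) pair (one standard compositional transform, used here for illustration; the paper works with compositional cokriging more generally):

    ```python
    import numpy as np

    def alr(p):
        """Additive log-ratio transform of a composition p (positive, sums to 1):
        maps the first D-1 components relative to the last into unconstrained reals."""
        p = np.asarray(p, dtype=float)
        return np.log(p[:-1] / p[-1])

    def alr_inv(y):
        """Inverse alr: always returns a valid composition in (0,1) summing to 1."""
        e = np.exp(np.append(y, 0.0))
        return e / e.sum()
    ```

    Any real-valued interpolation of `alr(p)` maps back through `alr_inv` to a valid discrete probability density, which is how the order-relation problems of indicator cokriging are avoided.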

  16. Calculating the Probability of Returning a Loan with Binary Probability Models

    Directory of Open Access Journals (Sweden)

    Julian Vasilev

    2014-12-01

    Full Text Available The purpose of this article is to give a new approach to calculating the probability of returning a loan. Many factors affect the value of this probability; in this article, some influencing factors are identified using statistical and econometric models. The main approach is concerned with applying probit and logit models in loan management institutions. A new aspect of credit risk analysis is given. Calculating the probability of returning a loan is a difficult task. We assume that specific data fields concerning the contract (month of signing, year of signing, given sum) and data fields concerning the borrower of the loan (month of birth, year of birth (age), gender, region where he/she lives) may be independent variables in a binary logistic model with the dependent variable “the probability of returning a loan”. It is shown that the month of signing a contract, the year of signing a contract, and the gender and age of the loan owner do not affect the probability of returning a loan. It is shown that the probability of returning a loan depends on the sum of the contract, the remoteness of the loan owner and the month of birth. The probability of returning a loan increases with the increase of the given sum, decreases with the proximity of the customer, increases for people born at the beginning of the year and decreases for people born at the end of the year.
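    A logit model of this kind expresses the return probability as a logistic function of the contract and borrower variables. A self-contained sketch on synthetic data (the feature, coefficients, and fitting routine are illustrative assumptions, not the paper's estimates):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fit_logit(X, y, lr=0.1, n_iter=2000):
        """Maximum-likelihood logit fit by simple gradient ascent.
        X: (n, d) feature matrix, y: (n,) binary outcomes (1 = loan returned)."""
        X = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted return probabilities
            w += lr * X.T @ (y - p) / len(y)        # gradient of the log-likelihood
        return w

    def predict_proba(w, X):
        """Probability of returning a loan for each row of X under fitted w."""
        X = np.column_stack([np.ones(len(X)), X])
        return 1.0 / (1.0 + np.exp(-X @ w))
    ```

    Fitting such a model to the contract and borrower fields and inspecting the significance of each coefficient is the kind of analysis by which the article screens out non-influential variables like the month of signing.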

  17. ON PROBABILITY FUNCTION OF TRIP ROUTE CHOICE IN PASSENGER TRANSPORT SYSTEM OF CITIES

    Directory of Open Access Journals (Sweden)

    N. Nefedof

    2014-02-01

    Full Text Available The results of statistical processing of experimental research data collected in Kharkiv are presented; the study aimed to determine the relation between the probability of a passenger's trip route choice and the actual vehicle waiting time at bus terminals.

  18. Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.

    Science.gov (United States)

    Chevallier, Maguelonne; Krauth, Werner

    2007-11-01

    We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L³ the sum of the cycle probabilities of length k > L² equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the cycle probabilities π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.
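    The Landsberg recursion mentioned above, Z_N = (1/N) Σ_{k=1}^{N} z(kβ) Z_{N−k} with Z_0 = 1, also yields the cycle probabilities directly, since π_k = z(kβ) Z_{N−k} / (N Z_N) sums to one by construction. A sketch for a finite single-particle spectrum (the truncated spectrum is an assumption made for illustration):

    ```python
    import numpy as np

    def cycle_probabilities(N, beta, levels):
        """Canonical cycle probabilities for N ideal bosons via the Landsberg
        recursion Z_N = (1/N) * sum_k z(k*beta) * Z_{N-k}, Z_0 = 1.
        levels: single-particle energies (a truncated spectrum approximates
        the full one); returns pi_1, ..., pi_N."""
        levels = np.asarray(levels, dtype=float)
        z = lambda b: np.exp(-b * levels).sum()   # single-particle partition fn
        Z = [1.0]                                 # Z_0
        for n in range(1, N + 1):
            Z.append(sum(z(k * beta) * Z[n - k] for k in range(1, n + 1)) / n)
        # pi_k: probability that a given boson belongs to a cycle of length k
        return np.array([z(k * beta) * Z[N - k] / (N * Z[N])
                         for k in range(1, N + 1)])
    ```

    Normalization of the resulting π_k follows immediately from the recursion itself, which is what makes this decomposition a genuine probability distribution over cycle lengths.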

  19. Calculation of magnetization curves and probability distribution for monoclinic and uniaxial systems

    International Nuclear Information System (INIS)

    Sobh, Hala A.; Aly, Samy H.; Yehia, Sherif

    2013-01-01

    We present the application of a simple model based on classical statistical mechanics to selected monoclinic and hexagonal model systems. In this model, we treat the magnetization as a classical vector whose angular orientation is dictated by the laws of equilibrium classical statistical mechanics. For these anisotropic systems, we calculate the magnetization curves, energy landscapes and probability distributions for different sets of relevant parameters and for magnetic fields of different strengths and directions. Our results demonstrate a correlation between the most probable orientation of the magnetization vector, the system's parameters, and the external magnetic field. Highlights: We calculate magnetization curves and the angular probability distribution of the magnetization. The magnetization curves are consistent with the probability results for the studied systems. Monoclinic and hexagonal systems behave differently due to their different anisotropies.
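    For a uniaxial case this kind of model reduces to a Boltzmann weight over the polar angle, P(θ) ∝ sin θ exp(−E(θ)/k_BT), with for example E(θ) = K sin²θ − μB cos θ for a field along the easy axis. A numerical sketch of the resulting mean magnetization (the energy expression and parameter values are illustrative assumptions, not the paper's Hamiltonian):

    ```python
    import numpy as np

    def mean_magnetization(K, muB, kT, n=20_000):
        """<cos(theta)> for a classical spin with uniaxial anisotropy K in a
        field along the easy axis: E = K*sin(theta)**2 - muB*cos(theta).
        Uses a simple Riemann sum over the Boltzmann-weighted orientations."""
        theta = np.linspace(0.0, np.pi, n)
        weight = np.sin(theta) * np.exp(-(K * np.sin(theta)**2
                                          - muB * np.cos(theta)) / kT)
        return float((np.cos(theta) * weight).sum() / weight.sum())
    ```

    At zero field the weight is symmetric about θ = π/2 and the mean magnetization vanishes; a strong field along the easy axis saturates it, mirroring the field-strength dependence of the most probable orientation discussed in the abstract.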

  20. Problems in probability theory, mathematical statistics and theory of random functions

    CERN Document Server

    Sveshnikov, A A

    1979-01-01

    Problem solving is the main thrust of this excellent, well-organized workbook. Suitable for students at all levels in probability theory and statistics, the book presents over 1,000 problems and their solutions, illustrating fundamental theory and representative applications in the following fields: Random Events; Distribution Laws; Correlation Theory; Random Variables; Entropy & Information; Markov Processes; Systems of Random Variables; Limit Theorems; Data Processing; and more. The coverage of topics is both broad and deep, ranging from the most elementary combinatorial problems through lim