WorldWideScience

Sample records for energy probability function

  1. Choice Probability Generating Functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. …

  2. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2013-01-01

This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended …

  3. Generalized Probability Functions

    Directory of Open Access Journals (Sweden)

    Alexandre Souto Martinez

    2009-01-01

From the integration of nonsymmetrical hyperbolas, a one-parameter generalization of the logarithmic function is obtained. Inverting this function, one obtains the generalized exponential function. Motivated by mathematical curiosity, we show that these generalized functions are suitable for generalizing some probability density functions (pdfs). A very reliable rank distribution can be conveniently described by the generalized exponential function. Finally, we turn our attention to the generalization of one- and two-tail stretched exponential functions. We obtain, as particular cases, the generalized error function, the Zipf-Mandelbrot pdf, and the generalized Gaussian and Laplace pdfs. Their cumulative functions and moments are also obtained analytically.
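
The two generalized functions described here reduce to the ordinary logarithm and exponential in the appropriate limit. Below is a minimal Python sketch assuming the standard one-parameter forms ln_q(x) = (x^q − 1)/q and its inverse exp_q(x) = (1 + qx)^(1/q); the parameter name q is our label for the deformation parameter, not necessarily the paper's notation.

```python
import numpy as np

def ln_q(x, q):
    # One-parameter generalized logarithm; recovers ln(x) as q -> 0.
    x = np.asarray(x, dtype=float)
    if q == 0.0:
        return np.log(x)
    return (x**q - 1.0) / q

def exp_q(x, q):
    # Inverse of ln_q; recovers exp(x) as q -> 0.
    x = np.asarray(x, dtype=float)
    if q == 0.0:
        return np.exp(x)
    return (1.0 + q * x) ** (1.0 / q)

# Round-trip check and the q -> 0 limit
print(exp_q(ln_q(2.5, 0.3), 0.3))    # ~2.5
print(ln_q(2.5, 1e-8), np.log(2.5))  # nearly equal
```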

  4. Probability density function evolution of power systems subject to stochastic variation of renewable energy

    Science.gov (United States)

    Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.

    2018-05-01

As renewable energies are increasingly integrated into power systems, there is growing interest in the stochastic analysis of power systems. Better techniques are needed to account for the uncertainty caused by the penetration of renewables and, consequently, to analyse its impact on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subject to such random excitation, the Joint Probability Density Function (JPDF) of the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. This equation is solved numerically, with special measures taken so that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered for a single-machine infinite-bus power system. The numerical results agree with those obtained by Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
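
Because the abstract validates the assumed-PDF solution against Monte Carlo simulation, a hedged sketch of the Monte Carlo side may be useful: Euler-Maruyama integration of a single-machine infinite-bus swing equation driven by Gaussian white noise, with the stationary joint PDF estimated from a histogram. All parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
Pm, Pe, D, sigma = 0.5, 1.0, 0.8, 0.2   # illustrative machine parameters
dt, n_steps, n_paths = 1e-2, 20_000, 200

delta = np.full(n_paths, np.arcsin(Pm / Pe))  # start at the stable equilibrium
omega = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    delta += omega * dt
    omega += (Pm - Pe * np.sin(delta) - D * omega) * dt + sigma * dW

# A 2-D histogram approximates the stationary joint PDF of (delta, omega)
pdf, d_edges, w_edges = np.histogram2d(delta, omega, bins=50, density=True)
print(pdf.max())
```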

  5. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2010-01-01

This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models. …

  6. A joint probability density function of wind speed and direction for wind energy analysis

    International Nuclear Information System (INIS)

    Carta, Jose A.; Ramirez, Penelope; Bueno, Celia

    2008-01-01

A very flexible joint probability density function of wind speed and direction is presented in this paper for use in wind energy analysis. A method that enables angular-linear distributions to be obtained with specified marginal distributions has been used for this purpose. For the marginal distribution of wind speed we use a Normal-Weibull mixture distribution singly truncated from below. The marginal distribution of wind direction comprises a finite mixture of von Mises distributions. The proposed model is applied in this paper to wind direction and wind speed hourly data recorded at several weather stations located in the Canary Islands (Spain). The suitability of the distributions is judged from the coefficient of determination R². The conclusions reached are that the joint distribution proposed in this paper: (a) can represent unimodal, bimodal and bitangential wind speed frequency distributions, (b) takes into account the frequency of null winds, (c) represents the wind direction regimes in zones with several modes or prevailing wind directions, (d) takes into account the correlation between wind speed and direction. It can therefore be used in several tasks involved in the evaluation process of the wind resources available at a potential site. We also conclude that, in the case of the Canary Islands, the proposed model provides better fits in all the cases analysed than those obtained with the models used in the specialised literature on wind energy.
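
The angular-linear construction mentioned here is commonly attributed to Johnson and Wehrly: f(v, θ) = 2π g(ζ) f_v(v) f_θ(θ) with ζ = 2π[F_v(v) − F_θ(θ)] and g a circular density. A hedged sketch under that assumption, with simple stand-in marginals (a plain Weibull for speed and a single von Mises for direction, instead of the paper's truncated Normal-Weibull and von Mises mixtures):

```python
import numpy as np
from scipy import stats

f_v  = stats.weibull_min(c=2.0, scale=8.0)    # wind speed marginal [m/s]
f_th = stats.vonmises(kappa=1.5, loc=np.pi)   # wind direction marginal [rad]
g    = stats.vonmises(kappa=0.8)              # circular dependence density

def joint_pdf(v, theta):
    # Angular-linear joint density in the Johnson-Wehrly form
    zeta = 2.0 * np.pi * (f_v.cdf(v) - f_th.cdf(theta))
    return 2.0 * np.pi * g.pdf(zeta) * f_v.pdf(v) * f_th.pdf(theta)

print(joint_pdf(7.0, np.pi))  # density at v = 7 m/s from the prevailing sector
```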

  7. Non-Maxwellian electron energy probability functions in the plume of a SPT-100 Hall thruster

    Science.gov (United States)

    Giono, G.; Gudmundsson, J. T.; Ivchenko, N.; Mazouffre, S.; Dannenmayer, K.; Loubère, D.; Popelier, L.; Merino, M.; Olentšenko, G.

    2018-01-01

We present measurements of the electron density, the effective electron temperature, the plasma potential, and the electron energy probability function (EEPF) in the plume of a 1.5 kW-class SPT-100 Hall thruster, derived from cylindrical Langmuir probe measurements. The measurements were taken on the plume axis at distances between 550 and 1550 mm from the thruster exit plane, and at different angles from the plume axis at 550 mm, for three operating points of the thruster characterized by different discharge voltages and mass flow rates. The bulk of the electron population can be approximated as a Maxwellian distribution, but the measured distributions were seen to decline faster at higher energy. The measured EEPFs were best modelled with a general EEPF with an exponent α between 1.2 and 1.5, and their axial and angular characteristics were studied for the different operating points of the thruster. The exponent α of the fitted distribution was seen to be almost constant as a function of axial distance along the plume, as well as across the angles. However, α was affected by the mass flow rate, suggesting a possible relationship with the collision rate, especially close to the thruster exit. The ratio of specific heats (the γ factor) derived from the measured plasma parameters was found to be lower than the adiabatic value of 5/3 for each of the thruster settings, indicating the existence of non-trivial kinetic heat fluxes in the near-collisionless plume. These results are intended to be used as input and/or testing properties for plume expansion models in further work.

  8. A generalized electron energy probability function for inductively coupled plasmas under conditions of nonlocal electron kinetics

    Science.gov (United States)

    Mouchtouris, S.; Kokkoris, G.

    2018-01-01

A generalized equation for the electron energy probability function (EEPF) of inductively coupled Ar plasmas is proposed under conditions of nonlocal electron kinetics and diffusive cooling. The proposed equation describes the local EEPF in a discharge, with the kinetic energy of the electrons as the independent variable. The EEPF consists of a bulk and a depleted tail part and incorporates the effects of the plasma potential, Vp, and the pressure. Due to diffusive cooling, the break point of the EEPF is at eVp. The pressure alters the shape of the bulk and the slope of the tail part. The parameters of the proposed EEPF are extracted by fitting to measured EEPFs (at one point in the reactor) at different pressures. By coupling the proposed EEPF with a hybrid plasma model, measurements in the Gaseous Electronics Conference reference reactor concerning (a) the electron density and temperature and the plasma potential, either spatially resolved or at different pressures (10-50 mTorr) and powers, and (b) the ion current density at the electrode, are well reproduced. The effect of the choice of EEPF on the results is investigated by comparison with an EEPF derived from the Boltzmann equation (local electron kinetics approach) and with a Maxwellian EEPF. The accuracy of the results and the fact that the proposed EEPF is predefined render it a reliable, low-computational-cost alternative to stochastic electron kinetic models at low-pressure conditions, one that can be extended to other gases and/or different electron heating mechanisms.

  9. Invariant probabilities of transition functions

    CERN Document Server

    Zaharopol, Radu

    2014-01-01

    The structure of the set of all the invariant probabilities and the structure of various types of individual invariant probabilities of a transition function are two topics of significant interest in the theory of transition functions, and are studied in this book. The results obtained are useful in ergodic theory and the theory of dynamical systems, which, in turn, can be applied in various other areas (like number theory). They are illustrated using transition functions defined by flows, semiflows, and one-parameter convolution semigroups of probability measures. In this book, all results on transition probabilities that have been published by the author between 2004 and 2008 are extended to transition functions. The proofs of the results obtained are new. For transition functions that satisfy very general conditions the book describes an ergodic decomposition that provides relevant information on the structure of the corresponding set of invariant probabilities. Ergodic decomposition means a splitting of t...

  10. Low-lying electronic states of the OH radical: potential energy curves, dipole moment functions, and transition probabilities

    Energy Technology Data Exchange (ETDEWEB)

Qin, X.; Zhang, S. D. [Qufu Normal University, Qufu (China)]

    2014-12-15

The six doublet and two quartet electronic states (²Σ⁺(2), ²Σ⁻, ²Π(2), ²Δ, ⁴Σ⁻, and ⁴Π) of the OH radical have been studied using the multi-reference configuration interaction (MRCI) method, where the Davidson correction, core-valence interaction, and relativistic effects are considered with the large basis sets aug-cc-pv5z, aug-cc-pcv5z, and cc-pv5z-DK, respectively. Potential energy curves (PECs) and dipole moment functions are calculated for these states for internuclear distances ranging from 0.05 nm to 0.80 nm. All possible vibrational levels and rotational constants for the bound states X²Π and A²Σ⁺ of OH are predicted by numerically solving the radial Schrödinger equation with the LEVEL program, and spectroscopic parameters, which are in good agreement with experimental results, are obtained. Transition dipole moments between the ground state X²Π and other excited states are also computed using MRCI, and the transition probability, lifetime, and Franck-Condon factors for the A²Σ⁺-X²Π transition are discussed and compared with existing experimental values.

  11. COVAL, Compound Probability Distribution for Function of Probability Distribution

    International Nuclear Information System (INIS)

    Astolfi, M.; Elbaz, J.

    1979-01-01

    1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions

  12. Psychophysics of the probability weighting function

    Science.gov (United States)

    Takahashi, Taiki

    2011-03-01

A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics have widely utilized probability weighting functions, the psychophysical foundations of these functions have been unknown. Notably, the behavioral economist Prelec (1998) [4] axiomatically derived the probability weighting function w(p) = exp(−(−ln p)^α) (0 < α < 1; w(0) = 0, w(1/e) = 1/e, w(1) = 1), which has been extensively studied in behavioral neuroeconomics. The present study utilizes psychophysical theory to derive Prelec's probability weighting function from psychophysical laws of perceived waiting time in probabilistic choices. The relations between the parameters of the probability weighting function and the probability discounting function in behavioral psychology are also derived. Future directions in the application of the psychophysical theory of the probability weighting function in econophysics and neuroeconomics are discussed.
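
Prelec's function is straightforward to evaluate and exhibits the characteristic overweighting of small probabilities and underweighting of large ones. A short sketch (the value α = 0.65 is an arbitrary illustrative choice):

```python
import numpy as np

def prelec_w(p, alpha=0.65):
    # Prelec's one-parameter weighting function:
    # w(p) = exp(-(-ln p)**alpha), with w(0) = 0, w(1/e) = 1/e, w(1) = 1.
    p = np.asarray(p, dtype=float)
    return np.exp(-(-np.log(p)) ** alpha)

p = np.array([0.01, 1 / np.e, 0.5, 0.99])
print(prelec_w(p))  # note w(1/e) == 1/e exactly; small p overweighted,
                    # large p underweighted
```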

  13. Computation of the Complex Probability Function

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledwith, Patrick John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-22

The complex probability function is important in many areas of physics, and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements of Gauss-Hermite quadrature for the complex probability function.
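
To make the quadrature idea concrete, here is a hedged sketch applied to the Faddeeva form w(z) = (i/π) ∫ exp(−t²)/(z − t) dt for Im(z) > 0, where the n-point Gauss-Hermite rule gives w(z) ≈ (i/π) Σ_k w_k/(z − t_k); scipy.special.wofz serves as the reference value, and n = 40 is an arbitrary choice. The approximation is known to degrade near the real axis, consistent with the shortcomings the report discusses.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import wofz

def faddeeva_gh(z, n=40):
    # Gauss-Hermite approximation of the complex probability function
    t, w = hermgauss(n)                  # nodes and weights of the n-point rule
    return (1j / np.pi) * np.sum(w / (z - t))

z = 1.5 + 0.8j
print(faddeeva_gh(z))   # quadrature approximation
print(wofz(z))          # reference value from scipy
```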

  14. Proposal for Modified Damage Probability Distribution Functions

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup; Hansen, Peter Friis

    1996-01-01

Immediately following the Estonia disaster, the Nordic countries established a project entitled "Safety of Passenger/RoRo Vessels". As part of this project, the present proposal for modified damage stability probability distribution functions has been developed and submitted to the "Sub-committee on St…"

  15. Structure functions are not parton probabilities

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.; Hoyer, Paul; Sannino, Francesco; Marchal, Nils; Peigne, Stephane

    2002-01-01

The common view that structure functions measured in deep inelastic lepton scattering are determined by the probability of finding quarks and gluons in the target is not correct in gauge theory. We show that gluon exchange between the fast, outgoing partons and target spectators, which is usually assumed to be an irrelevant gauge artifact, affects the leading twist structure functions in a profound way. This observation removes the apparent contradiction between the projectile (eikonal) and target (parton model) views of diffractive and small x_B phenomena. The diffractive scattering of the fast outgoing quarks on spectators in the target causes shadowing in the DIS cross section. Thus the depletion of the nuclear structure functions is not intrinsic to the wave function of the nucleus, but is a coherent effect arising from the destructive interference of diffractive channels induced by final state interactions. This is consistent with the Glauber-Gribov interpretation of shadowing as a rescattering effect.

  16. Modulation Based on Probability Density Functions

    Science.gov (United States)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.

  17. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

Voice recognition is a popular application in the robotics field. It is also known to be used in biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF as the feature extraction method itself, for speech analysis purposes. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.

  18. Probability functions in the context of signed involutive meadows

    NARCIS (Netherlands)

    Bergstra, J.A.; Ponse, A.

    2016-01-01

    The Kolmogorov axioms for probability functions are placed in the context of signed meadows. A completeness theorem is stated and proven for the resulting equational theory of probability calculus. Elementary definitions of probability theory are restated in this framework.

  19. Kinetic Analysis of Isothermal Decomposition Process of Sodium Bicarbonate Using the Weibull Probability Function—Estimation of Density Distribution Functions of the Apparent Activation Energies

    Science.gov (United States)

    Janković, Bojan

    2009-10-01

The decomposition process of sodium bicarbonate (NaHCO₃) has been studied by thermogravimetry under isothermal conditions at four operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion range 0.20 ≤ α ≤ 0.80, the apparent activation energy (E_a) was approximately constant (E_a,int = 95.2 kJ mol⁻¹ and E_a,diff = 96.6 kJ mol⁻¹, respectively). The values of E_a calculated by both isoconversional methods are in good agreement with the value of E_a evaluated from the Arrhenius equation (94.3 kJ mol⁻¹), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model of the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO₃ decomposition process, with the conversion function f(α) = α^0.18 (1−α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddfE_a's) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition of NaHCO₃.
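
A hedged sketch of this fitting pipeline on synthetic data: fit the isothermal Weibull conversion curve α(t) = 1 − exp(−(t/η)^β) at each temperature, then estimate E_a from the Arrhenius relation 1/η = A exp(−E_a/(RT)). The prefactor 2×10⁹ s⁻¹ is an invented value chosen so the synthetic E_a comes out near the reported 95.2 kJ mol⁻¹.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J mol^-1 K^-1

def weibull_conv(t, beta, eta):
    # Isothermal Weibull conversion curve alpha(t)
    return 1.0 - np.exp(-(t / eta) ** beta)

temps = np.array([380.0, 400.0, 420.0, 440.0])  # K
etas = []
for T in temps:
    eta_true = 1.0 / (2.0e9 * np.exp(-95200.0 / (R * T)))  # synthetic truth
    t = np.linspace(1.0, 5.0 * eta_true, 200)
    alpha = weibull_conv(t, 1.07, eta_true)
    (beta_fit, eta_fit), _ = curve_fit(weibull_conv, t, alpha, p0=(1.0, eta_true))
    etas.append(eta_fit)

# Arrhenius plot: slope of ln(1/eta) versus 1/T is -E_a/R
slope = np.polyfit(1.0 / temps, np.log(1.0 / np.array(etas)), 1)[0]
print("E_a ~", -slope * R / 1000.0, "kJ/mol")  # ~95.2 kJ/mol by construction
```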

  20. A note on iterated function systems with discontinuous probabilities

    International Nuclear Information System (INIS)

    Jaroszewska, Joanna

    2013-01-01

Highlights: ► A certain iterated function system with discontinuous probabilities is discussed. ► Existence of an invariant measure via the Schauder–Tychonov theorem is established. ► Asymptotic stability of the system under examination is proved. -- Abstract: We consider an example of an iterated function system with discontinuous probabilities. We prove that it possesses an invariant probability measure. We also prove that it is asymptotically stable provided the probabilities are positive.

  21. Path probability of stochastic motion: A functional approach

    Science.gov (United States)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock prices in finance.

  22. Prospect evaluation as a function of numeracy and probability denominator.

    Science.gov (United States)

    Millroth, Philip; Juslin, Peter

    2015-05-01

This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used.

  23. Probability

    CERN Document Server

    Shiryaev, A N

    1996-01-01

This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs of a number of important results which were merely stated in the first edition have been added. The author has included new material on the probability of large deviations and on the central limit theorem for sums of dependent random variables.

  24. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-03-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, S_st, p_st) for stochastic uncertainty, a probability space (S_su, S_su, p_su) for subjective uncertainty, and a function (i.e., a random variable) defined on the product space associated with (S_st, S_st, p_st) and (S_su, S_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  25. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-01-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, P_st) for stochastic uncertainty, a probability space (S_su, L_su, P_su) for subjective uncertainty, and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, P_st) and (S_su, L_su, P_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the US Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  26. Continuation of probability density functions using a generalized Lyapunov approach

    NARCIS (Netherlands)

    Baars, S.; Viebahn, J. P.; Mulder, T. E.; Kuehn, C.; Wubs, F. W.; Dijkstra, H. A.

    2017-01-01

    Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial

  27. On Farmer's line, probability density functions, and overall risk

    International Nuclear Information System (INIS)

    Munera, H.A.; Yadigaroglu, G.

    1986-01-01

    Limit lines used to define quantitative probabilistic safety goals can be categorized according to whether they are based on discrete pairs of event sequences and associated probabilities, on probability density functions (pdf's), or on complementary cumulative density functions (CCDFs). In particular, the concept of the well-known Farmer's line and its subsequent reinterpretations is clarified. It is shown that Farmer's lines are pdf's and, therefore, the overall risk (defined as the expected value of the pdf) that they represent can be easily calculated. It is also shown that the area under Farmer's line is proportional to probability, while the areas under CCDFs are generally proportional to expected value

  28. A fluctuation relation for the probability of energy backscatter

    Science.gov (United States)

    Vela-Martin, Alberto; Jimenez, Javier

    2017-11-01

We simulate the large scales of an inviscid turbulent flow in a triply periodic box using a dynamic Smagorinsky model for the sub-grid stresses. The flow, which is forced to constant kinetic energy, is fully reversible and can develop a sustained inverse energy cascade. However, due to the large number of degrees of freedom, the probability of a spontaneous mean inverse energy flux is negligible. In order to quantify the probability of inverse energy cascades, we test a local fluctuation relation of the form log P(A) = −c(V,t) A, where P(A) = p(⟨Cs⟩_{V,t} = A) / p(⟨Cs⟩_{V,t} = −A), p is probability, and ⟨Cs⟩_{V,t} is the average of the least-squares dynamic model coefficient over volume V and time t. This is confirmed when Cs is averaged over sufficiently large domains and long times, and c is found to depend linearly on V and t. In the limit in which V^{1/3} is of the order of the integral scale and t is of the order of the eddy-turnover time, we recover a global fluctuation relation that predicts a negligible probability of a sustained inverse energy cascade. For smaller V and t, the local fluctuation relation provides useful predictions on the occurrence of local energy backscatter. Funded by the ERC COTURB project.
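
The functional form of such a relation can be checked on synthetic data. In the sketch below, a Gaussian with mean −0.5 and unit variance stands in for the averaged model coefficient; it satisfies log[p(A)/p(−A)] = −cA exactly with c = 1. This illustrates the form of the relation only, not the paper's LES data.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(-0.5, 1.0, 2_000_000)  # stand-in for <Cs>_{V,t}

# Symmetric bins around zero so +A and -A bins pair up exactly
hist, edges = np.histogram(samples, bins=120, range=(-3.0, 3.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
n = len(centers) // 2
A = centers[n:]                                 # positive bin centers
log_ratio = np.log(hist[n:] / hist[:n][::-1])   # log p(A) - log p(-A)

c = -np.polyfit(A, log_ratio, 1)[0]             # fitted slope; expect c ~ 1
print(c)
```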

  29. Decision making generalized by a cumulative probability weighting function

    Science.gov (United States)

    dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto

    2018-01-01

Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller reward, but more immediate, and a larger one, delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in relation to their probability of receiving. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk-seeking and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed by decision weights by means of a (cumulative) probability weighting function, w(p). We obtain in this article a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already consecrated in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays, and is supported by phenomenological models.

  30. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States)

    1996-03-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, S_st, p_st) for stochastic uncertainty, a probability space (S_su, S_su, p_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, S_st, p_st) and (S_su, S_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  31. Wigner function and the probability representation of quantum states

    Directory of Open Access Journals (Sweden)

    Man’ko Margarita A.

    2014-01-01

The relation of the Wigner function to the fair probability distribution called the tomographic distribution, or quantum tomogram, associated with the quantum state is reviewed. The connection of the tomographic picture of quantum mechanics with the integral Radon transform of the Wigner quasidistribution is discussed. The Wigner–Moyal equation for the Wigner function is presented in the form of a kinetic equation for the tomographic probability distribution, both in quantum mechanics and in the classical limit of the Liouville equation. The calculation of moments of physical observables in terms of integrals with the state tomographic probability distributions is constructed, having the standard form of averaging in probability theory. New uncertainty relations for position and momentum are written in terms of optical tomograms suitable for direct experimental checks. Some recent experiments on checking the uncertainty relations, including the entropic uncertainty relations, are discussed.

  32. Optimizing an objective function under a bivariate probability model

    NARCIS (Netherlands)

    X. Brusset; N.M. Temme (Nico)

    2007-01-01

The motivation of this paper is to obtain an analytical closed form of a quadratic objective function arising from a stochastic decision process with bivariate exponential probability distribution functions that may be dependent. This method is applicable when results need to be …

  33. Entanglement probabilities of polymers: a white noise functional approach

    International Nuclear Information System (INIS)

    Bernido, Christopher C; Carpio-Bernido, M Victoria

    2003-01-01

The entanglement probabilities for a highly flexible polymer to wind n times around a straight polymer are evaluated using white noise analysis. To introduce the white noise functional approach, the one-dimensional random walk problem is taken as an example. The polymer entanglement scenario, viewed as a random walk on a plane, is then treated and the entanglement probabilities are obtained for a magnetic flux confined along the straight polymer, and a case where an entangled polymer is subjected to the potential V = ḟ(s)θ. In the absence of the magnetic flux and the potential V, the entanglement probabilities reduce to a result obtained by Wiegel.

  34. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    Science.gov (United States)

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406

  35. Probability-density-function characterization of multipartite entanglement

    International Nuclear Information System (INIS)

    Facchi, P.; Florio, G.; Pascazio, S.

    2006-01-01

    We propose a method to characterize and quantify multipartite entanglement for pure states. The method hinges upon the study of the probability density function of bipartite entanglement and is tested on an ensemble of qubits in a variety of situations. This characterization is also compared to several measures of multipartite entanglement

  36. Numerical Loading of a Maxwellian Probability Distribution Function

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

A renormalization procedure for the numerical loading of a Maxwellian probability distribution function (PDF) is formulated. The procedure, which involves the solution of three coupled nonlinear equations, yields a numerically loaded PDF with improved properties for higher velocity moments. This method is particularly useful for low-noise particle-in-cell simulations with electron dynamics.
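
The paper's specific three-equation renormalization is not reproduced here; the sketch below shows a simpler, common variant of the same idea, rescaling a sampled Maxwellian so its low-order moments hold exactly rather than only statistically.

```python
import numpy as np

rng = np.random.default_rng(42)
v_th = 1.0
v = rng.normal(0.0, v_th, 10_000)   # raw Maxwellian loading of one velocity component

v -= v.mean()                       # enforce <v> = 0 exactly
v *= v_th / v.std()                 # enforce <v^2> = v_th^2 exactly

print(v.mean(), v.var())            # 0.0 and 1.0 up to rounding
print(np.mean(v**4) / v_th**4)      # 4th moment: ~3 for a Maxwellian, still noisy
```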

  37. Visualization techniques for spatial probability density function data

    Directory of Open Access Journals (Sweden)

    Udeepta D Bordoloi

    2006-01-01

Novel visualization methods are presented for spatial probability density function data. These are spatial datasets where each pixel is a random variable and has multiple samples which are the results of experiments on that random variable. We use clustering as a means to reduce the information contained in these datasets, and present two different ways of interpreting and clustering the data. The clustering methods are used on two datasets, and the results are discussed with the help of visualization techniques designed for spatial probability data.

  38. Computing exact bundle compliance control charts via probability generating functions.

    Science.gov (United States)

    Chen, Binchao; Matis, Timothy; Benneyan, James

    2016-06-01

Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
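
For independent components, the PGF route amounts to multiplying the per-component generating polynomials (1 − p_i + p_i x), which is a sequence of convolutions; the coefficients of the product are the exact (Poisson-binomial) distribution of the compliance count. A minimal sketch with illustrative probabilities:

```python
import numpy as np

def exact_count_dist(p):
    # Multiply the per-component PGFs; dist[k] = P(count = k), exactly.
    dist = np.array([1.0])
    for pi in p:
        dist = np.convolve(dist, [1.0 - pi, pi])
    return dist

p = [0.95, 0.90, 0.99, 0.85]            # illustrative compliance probabilities
dist = exact_count_dist(p)
print(dist, dist.sum())                  # exact PMF; coefficients sum to 1
print("P(full compliance) =", dist[-1])  # all components compliant
```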

  39. Survival probability in small angle scattering of low energy alkali ions from alkali covered metal surfaces

    International Nuclear Information System (INIS)

    Neskovic, N.; Ciric, D.; Perovic, B.

    1982-01-01

The survival probability in small-angle scattering of low-energy alkali ions from alkali-covered metal surfaces is considered. The model is based on the momentum approximation. The projectiles are K⁺ ions and the target is the (001)Ni+K surface. The incident energy is 100 eV and the incident angle 5°. The interaction potential of the projectile and the target consists of the Born-Mayer, dipole and image charge potentials. The transition probability function corresponds to the resonant electron transition to the 4s projectile energy level. (orig.)

  40. Energy-level scheme and transition probabilities of Si-like ions

    International Nuclear Information System (INIS)

    Huang, K.N.

    1984-01-01

Theoretical energy levels and transition probabilities are presented for 27 low-lying levels of silicon-like ions from Z = 15 to Z = 106. The multiconfiguration Dirac-Fock technique is used to calculate energy levels and wave functions. The Breit interaction and Lamb shift contributions are calculated perturbatively as corrections to the Dirac-Fock energy. The M1 and E2 transition probabilities between the first nine levels, and the E1 transition probabilities between excited levels and the ground level, are presented.

  41. Assumed Probability Density Functions for Shallow and Deep Convection

    OpenAIRE

    Steven K Krueger; Peter A Bogenschutz; Marat Khairoutdinov

    2010-01-01

    The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PD...

  42. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

The main objective of this study is to identify the best probability distribution and plotting position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as of Particulate Matter with an aerodynamic diameter < 10 μm (PM₁₀). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is also estimated. The standard EAQLV limits for TSP and PM₁₀ concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions combined with seven plotting position formulae (empirical cumulative distribution functions) are compared in fitting the daily average TSP and PM₁₀ concentrations for Ain Sokhna city in 2014. The Quantile-Quantile plot (Q-Q plot) is used to assess how closely a data set fits a particular distribution. A suitable probability distribution representing TSP and PM₁₀ is chosen based on the statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Fréchet distribution gave the best fit for TSP and PM₁₀ concentrations, followed by the Burr distribution with the same plotting position. The exceedance probability and the number of days over the EAQLV are predicted using the Fréchet distribution. In 2014, the exceedance probability and number of days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM₁₀ concentration is found to exceed the threshold limit on 174 days.
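
A hedged sketch of the exceedance computation on synthetic data, using scipy's invweibull (the Fréchet distribution); the shape and scale values below are invented for illustration, not the study's fitted parameters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic year of daily PM10 averages drawn from a Frechet distribution
pm10 = stats.invweibull(c=4.0, scale=40.0).rvs(365, random_state=rng)

# Fit a Frechet distribution (location fixed at zero) and evaluate the
# probability of exceeding the 70 ug/m^3 limit via the survival function
c, loc, scale = stats.invweibull.fit(pm10, floc=0.0)
p_exceed = stats.invweibull.sf(70.0, c, loc=loc, scale=scale)
print("P(PM10 > 70) =", p_exceed)
print("expected exceedance days =", 365 * p_exceed)
```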

  43. Exact probability distribution function for the volatility of cumulative production

    Science.gov (United States)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

In this paper we study the volatility and its probability distribution function for cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.

  44. Sharp Bounds by Probability-Generating Functions and Variable Drift

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Fouz, Mahmoud; Witt, Carsten

    2011-01-01

We introduce to the runtime analysis of evolutionary algorithms two powerful techniques: probability-generating functions and variable drift analysis. They are shown to provide a clean framework for proving sharp upper and lower bounds. As an application, we improve the results by Doerr et al. (GECCO 2010) in several respects. First, the upper bound on the expected running time of the most successful quasirandom evolutionary algorithm for the OneMax function is improved from 1.28 n ln n to 0.982 n ln n, which breaks the barrier of n ln n posed by coupon-collector processes. Compared to the classical …

  45. Outage Probability Minimization for Energy Harvesting Cognitive Radio Sensor Networks

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2017-01-01

The incorporation of cognitive radio (CR) capability in wireless sensor networks yields a promising network paradigm known as CR sensor networks (CRSNs), which are able to provide spectrum-efficient data communication. However, due to the high energy consumption resulting from spectrum sensing, as well as from subsequent data transmission, the energy supply of conventional battery-powered sensor nodes is regarded as a severe bottleneck for sustainable operation. The energy harvesting (EH) technique, which gathers energy from the ambient environment, is regarded as a promising solution for perpetually powering energy-limited devices with a continual source of energy. Applying the EH technique in CRSNs therefore facilitates the self-sustainability of the energy-limited sensors. The primary concern of this study is to design sensing-transmission policies that minimize the long-term outage probability of EH-powered CR sensor nodes. We formulate this problem as an infinite-horizon discounted Markov decision process and propose an ϵ-optimal sensing-transmission (ST) policy using the value iteration algorithm, where ϵ is the error bound between the ST policy and the optimal policy, which can be pre-defined according to actual needs. Moreover, for the special case in which the signal-to-noise (SNR) power ratio is sufficiently high, we present an efficient transmission (ET) policy and prove that the ET policy achieves the same performance as the ST policy. Finally, extensive simulations are conducted to evaluate the performance of the proposed policies and the impact of various network parameters.
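
A generic sketch of ϵ-optimal value iteration for a discounted MDP follows; the two-state transition tensor and rewards are placeholders, not the paper's sensing-transmission model. The stopping threshold ϵ(1 − γ)/(2γ) is the standard bound guaranteeing that the greedy policy is ϵ-optimal.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, eps=1e-4):
    # P[s, a, s'] transition probabilities, R[s, a] expected rewards
    n_s, n_a = R.shape
    V = np.zeros(n_s)
    threshold = eps * (1.0 - gamma) / (2.0 * gamma)  # eps-optimality bound
    while True:
        Q = R + gamma * np.einsum('saj,j->sa', P, V)  # one Bellman backup
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < threshold:
            return V_new, Q.argmax(axis=1)            # value and greedy policy
        V = V_new

# Tiny 2-state, 2-action placeholder model
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
print(value_iteration(P, R))
```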

  46. Energy levels and transition probabilities for Fe XXV ions

    Energy Technology Data Exchange (ETDEWEB)

Norrington, P.H.; Kingston, A.E.; Boone, A.W. [Department of Applied Maths and Theoretical Physics, Queen's University, Belfast BT7 1NN (United Kingdom)]

    2000-05-14

The energy levels of the 1s², 1s2l and 1s3l states of helium-like iron Fe XXV have been calculated using two sets of configuration-interaction wavefunctions. One set of wavefunctions was generated using the fully relativistic GRASP code and the other was obtained using CIV3, in which relativistic effects are introduced using the Breit-Pauli approximation. For transitions from the ground state to the n=2 and 3 states and for transitions between the n=2 and 3 states, the calculated excitation energies obtained by these two independent methods are in very good agreement, and there is good agreement between these results and recent theoretical and experimental results. However, there is considerable disagreement between the various excitation energies for the transitions among the n=2 and also among the n=3 states. The two sets of wavefunctions are also used to calculate the E1, E2, M1 and M2 transition probabilities between all of the 1s², 1s2l and 1s3l states of helium-like iron Fe XXV. The results from the two calculations are found to be similar and to compare very well with other recent results for Δn=1 or 2 transitions. For Δn=0 transitions the agreement is much less satisfactory; this is mainly due to differences in the excitation energies. (author)

  47. Interactive visualization of probability and cumulative density functions

    KAUST Repository

    Potter, Kristin; Kirby, Robert Michael; Xiu, Dongbin; Johnson, Chris R.

    2012-01-01

    The probability density function (PDF), and its corresponding cumulative density function (CDF), provide direct statistical insight into the characterization of a random process or field. Typically displayed as a histogram, one can infer probabilities of the occurrence of particular events. When examining a field over some two-dimensional domain in which at each point a PDF of the function values is available, it is challenging to assess the global (stochastic) features present within the field. In this paper, we present a visualization system that allows the user to examine two-dimensional data sets in which PDF (or CDF) information is available at any position within the domain. The tool provides a contour display showing the normed difference between the PDFs and an ansatz PDF selected by the user and, furthermore, allows the user to interactively examine the PDF at any particular position. Canonical examples of the tool are provided to help guide the reader into the mapping of stochastic information to visual cues along with a description of the use of the tool for examining data generated from an uncertainty quantification exercise accomplished within the field of electrophysiology.

  48. Continuation of probability density functions using a generalized Lyapunov approach

    Energy Technology Data Exchange (ETDEWEB)

    Baars, S., E-mail: s.baars@rug.nl [Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen (Netherlands); Viebahn, J.P., E-mail: viebahn@cwi.nl [Centrum Wiskunde & Informatica (CWI), P.O. Box 94079, 1090 GB, Amsterdam (Netherlands); Mulder, T.E., E-mail: t.e.mulder@uu.nl [Institute for Marine and Atmospheric research Utrecht, Department of Physics and Astronomy, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); Kuehn, C., E-mail: ckuehn@ma.tum.de [Technical University of Munich, Faculty of Mathematics, Boltzmannstr. 3, 85748 Garching bei München (Germany); Wubs, F.W., E-mail: f.w.wubs@rug.nl [Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen (Netherlands); Dijkstra, H.A., E-mail: h.a.dijkstra@uu.nl [Institute for Marine and Atmospheric research Utrecht, Department of Physics and Astronomy, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); School of Chemical and Biomolecular Engineering, Cornell University, Ithaca, NY (United States)

    2017-05-01

Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.

  49. Universal Probability Distribution Function for Bursty Transport in Plasma Turbulence

    International Nuclear Information System (INIS)

    Sandberg, I.; Benkadda, S.; Garbet, X.; Ropokis, G.; Hizanidis, K.; Castillo-Negrete, D. del

    2009-01-01

Bursty transport phenomena associated with convective motion present universal statistical characteristics among different physical systems. In this Letter, a stochastic univariate model and the associated probability distribution function for the description of bursty transport in plasma turbulence are presented. The proposed stochastic process recovers the universal distribution of density fluctuations observed in the plasma edge of several magnetic confinement devices and the remarkable scaling between their skewness S and kurtosis K. Similar statistical characteristics have also been observed in other physical systems characterized by convection, such as the x-ray fluctuations emitted by the Cygnus X-1 accretion disc plasma and sea surface temperature fluctuations.

  50. Elements of a function analytic approach to probability.

    Energy Technology Data Exchange (ETDEWEB)

    Ghanem, Roger Georges (University of Southern California, Los Angeles, CA); Red-Horse, John Robert

    2008-02-01

    We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.

  51. Examining barrier distributions and, in extension, energy derivative of probabilities for surrogate experiments

    International Nuclear Information System (INIS)

    Romain, P.; Duarte, H.; Morillon, B.

    2012-01-01

The energy derivatives of probabilities are functions suited to a better understanding of certain mechanisms. Applied to compound nuclear reactions, they can provide information on fusion barrier distributions, as originally introduced, and also, as presented here, on fission barrier distributions and heights. By extension, they give access to the compound nucleus spin-parity states preferentially populated according to an entrance channel, at a given energy. (authors)

  52. Theoretical derivation of wind power probability distribution function and applications

    International Nuclear Information System (INIS)

    Altunkaynak, Abdüsselam; Erdik, Tarkan; Dabanlı, İsmail; Şen, Zekai

    2012-01-01

Highlights: ► The derived wind power stochastic characteristics are the standard deviation and the dimensionless skewness. ► Perturbation expressions for the wind power statistics are derived from the Weibull probability distribution function (PDF) of wind speed. ► Comparisons are made with the corresponding characteristics of the wind speed Weibull PDF. ► The wind power abides by the Weibull PDF. -- Abstract: The instantaneous wind power contained in the air current is directly proportional to the cube of the wind speed. In practice, there is a record of wind speeds in the form of a time series. It is, therefore, necessary to develop a formulation that takes into consideration the statistical parameters of such a time series. The purpose of this paper is to derive the general wind power formulation in terms of the statistical parameters by using perturbation theory, which leads to a general formulation of the wind power expectation and other statistical parameter expressions such as the standard deviation and the coefficient of variation. The formulation is very general and can be applied to any wind speed probability distribution function. Its application to the two-parameter Weibull probability distribution of wind speeds is presented in full detail. It is concluded that, provided wind speed is distributed according to a Weibull distribution, the wind power can be derived from wind speed data. It is possible to determine wind power at any desired risk level; however, in practical studies 5% or 10% risk levels are most often preferred, and the necessary simple procedure is presented for this purpose in this paper.
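
The cube-of-speed point has a compact worked form: for a Weibull speed PDF with shape k and scale c, the raw moments are E[v^n] = c^n Γ(1 + n/k), so the mean available power per unit swept area is 0.5 ρ c³ Γ(1 + 3/k). A short example with illustrative values:

```python
import math

rho, k, c = 1.225, 2.0, 8.0   # air density [kg/m^3], Weibull shape, scale [m/s]

ev3 = c**3 * math.gamma(1.0 + 3.0 / k)   # E[v^3] for a Weibull distribution
mean_power_density = 0.5 * rho * ev3     # W per m^2 of rotor area
print(mean_power_density)                # ~417 W/m^2 for these values

# Using 0.5*rho*E[v]^3 instead would underestimate badly: for k = 2 the
# ratio E[v^3]/E[v]^3 = Gamma(2.5)/Gamma(1.5)**3 ~ 1.91.
```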

  13. Evaluation of probability and hazard in nuclear energy

    International Nuclear Information System (INIS)

    Novikov, V.Ya.; Romanov, N.L.

    1979-01-01

    Various methods for evaluating the accident probability of nuclear power plants (NPPs) are proposed, because purely statistical evaluation of NPP safety is unreliable. The concept of subjective probability for the quantitative analysis of safety and hazard is described. The interpretation of probability as the actual degree of belief of an expert is taken as the basis of this concept. It is suggested to study event uncertainty in the framework of subjective probability theory, which not only permits but demands that expert opinions be taken into account when evaluating the probability. These subjective expert evaluations affect to a certain extent the calculation of the usual mathematical event probability. The technique is advantageous for the consideration of a separate experiment or random event

  14. Cytoarchitecture, probability maps and functions of the human frontal pole.

    Science.gov (United States)

    Bludau, S; Eickhoff, S B; Mohlberg, H; Caspers, S; Laird, A R; Fox, P T; Schleicher, A; Zilles, K; Amunts, K

    2014-06-01

    The frontal pole has expanded more than any other part of the human brain compared to our ancestors. It plays an important role in specifically human behavior and cognitive abilities, e.g. action selection (Kovach et al., 2012). Evidence for divergent functions of its medial and lateral parts has been provided, both in the healthy brain and in psychiatric disorders. The anatomical correlates of such functional segregation, however, are still unknown due to a lack of stereotaxic, microstructural maps obtained in a representative sample of brains. Here we show that the human frontopolar cortex consists of two cytoarchitectonically and functionally distinct areas: lateral frontopolar area 1 (Fp1) and medial frontopolar area 2 (Fp2). Based on observer-independent mapping in serial, cell-body stained sections of 10 brains, three-dimensional, probabilistic maps of areas Fp1 and Fp2 were created. They show, for each position of the reference space, the probability with which each area was found in a particular voxel. Applying these maps as seed regions for a meta-analysis revealed that Fp1 and Fp2 differentially contribute to functional networks: Fp1 was involved in cognition, working memory and perception, whereas Fp2 was part of brain networks underlying affective processing and social cognition. The present study thus disclosed cortical correlates of a functional segregation of the human frontopolar cortex. The probabilistic maps provide a sound anatomical basis for interpreting neuroimaging data in the living human brain, and open new perspectives for analyzing structure-function relationships in the prefrontal cortex. The new data will also serve as a starting point for further comparative studies between human and non-human primate brains, allowing similarities and differences in the organizational principles of the frontal lobe during evolution to be identified as the neurobiological basis of our behavior and cognitive abilities. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Probability density functions for CP-violating rephasing invariants

    Science.gov (United States)

    Fortin, Jean-François; Giasson, Nicolas; Marleau, Luc

    2018-05-01

    The implications of the anarchy principle on CP violation in the lepton sector are investigated. A systematic method is introduced to compute the probability density functions for the CP-violating rephasing invariants of the PMNS matrix from the Haar measure relevant to the anarchy principle. In contrast to the hierarchical CKM matrix, it is shown that the Haar measure, and hence the anarchy principle, are very likely to lead to the observed PMNS matrix. Predictions for the CP-violating Dirac rephasing invariant |j_D| and Majorana rephasing invariant |j_1| are also obtained. They correspond to ⟨|j_D|⟩_Haar = π/105 ≈ 0.030 and ⟨|j_1|⟩_Haar = 1/(6π) ≈ 0.053 respectively, in agreement with the experimental hint from T2K of |j_D^exp| ≈ 0.032 ± 0.005 (or ≈ 0.033 ± 0.003) for the normal (or inverted) hierarchy.

  16. Interactive design of probability density functions for shape grammars

    KAUST Repository

    Dang, Minh

    2015-11-02

    A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density function (pdf) over such a shape space and to sample models according to the designed pdf. First, we propose a user interface that enables a user to quickly provide preference scores for selected shapes, and we suggest sampling strategies to decide which models to present to the user for evaluation. Second, we propose a novel kernel function to encode the similarity between two procedural models. Third, we propose a framework to interpolate user preference scores by combining multiple techniques: function factorization, Gaussian process regression, automatic relevance determination, and l1 regularization. Fourth, we modify the original grammars to generate models with a pdf proportional to the user preference scores. Finally, we provide evaluations of our user interface and framework parameters and a comparison to other exploratory modeling techniques using modeling tasks in five example shape spaces: furniture, low-rise buildings, skyscrapers, airplanes, and vegetation.
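    A minimal sketch of the score-interpolation step (our toy setup: shapes are reduced to hypothetical 2-D descriptor vectors, whereas the paper defines a custom kernel between procedural models): Gaussian process regression extends a handful of user scores to the whole space, and models are then sampled with probability proportional to the interpolated preference.

```python
# Interpolating sparse user preference scores with GP regression, then
# sampling shapes from the induced pdf. All data here are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_scored = rng.uniform(0, 1, size=(8, 2))    # shapes the user has scored
y_scored = rng.uniform(0, 1, size=8)         # preference scores in [0, 1]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3)
gp.fit(X_scored, y_scored)

X_all = rng.uniform(0, 1, size=(1000, 2))    # candidate shapes to sample
pref = np.clip(gp.predict(X_all), 0, None)
pdf = pref / pref.sum()                      # pdf proportional to preference

sample_idx = rng.choice(len(X_all), size=5, p=pdf)   # sample models ~ pdf
print(X_all[sample_idx])
```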

  17. Probability of detection as a function of multiple influencing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Pavlovic, Mato

    2014-10-15

    Non-destructive testing is subject to measurement uncertainties. In safety-critical applications, a reliability assessment of its capability to detect flaws is therefore necessary. In most applications, the flaw size is the single most important parameter influencing the probability of detection (POD) of the flaw, which is why the POD is typically calculated and expressed as a function of the flaw size. The capability of the inspection system to detect flaws is established by comparing the size of the reliably detected flaw with the size of the flaw that is critical for structural integrity. This dissertation investigates applications where several factors have an important influence on the POD. To devise a reliable estimate of the NDT system capability it is necessary to express the POD as a function of all these factors. A multi-parameter POD model is developed which enables the POD to be calculated and expressed as a function of several influencing parameters. The model was tested on data from the ultrasonic inspection of copper and cast iron components with artificial flaws. In addition, a technique to present POD data spatially, called the volume POD, is developed. POD data coming from multiple inspections of the same component with different sensors are fused to arrive at the overall POD of the inspection system.
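    A minimal sketch of a multi-parameter POD fit (our synthetic data and parameter names, not the dissertation's measurements): logistic regression of hit/miss outcomes on flaw size plus a second influencing parameter, after which POD can be read off as a function of both.

```python
# Multi-parameter POD via logistic regression on synthetic hit/miss data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
size = rng.uniform(0.5, 5.0, n)        # flaw size [mm]
tilt = rng.uniform(0.0, 20.0, n)       # flaw tilt angle [deg], 2nd parameter

# Synthetic ground truth: detectability grows with size, drops with tilt
logit = -4.0 + 2.2 * size - 0.15 * tilt
hit = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([size, tilt]), hit)

# POD as a function of BOTH parameters, e.g. a 3 mm flaw at 0 and 15 degrees
for t in (0.0, 15.0):
    pod = model.predict_proba([[3.0, t]])[0, 1]
    print(f"POD(size=3 mm, tilt={t:4.1f} deg) = {pod:.2f}")
```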

  18. Parametric Probability Distribution Functions for Axon Diameters of Corpus Callosum

    Directory of Open Access Journals (Sweden)

    Farshid eSepehrband

    2016-05-01

    Full Text Available Axon diameter is an important neuroanatomical characteristic of the nervous system that is altered in the course of neurological disorders such as multiple sclerosis. Axon diameters vary, even within a fiber bundle, and are not normally distributed. An accurate distribution function is therefore beneficial, either to describe axon diameters obtained from a direct measurement technique (e.g., microscopy), or to infer them indirectly (e.g., using diffusion-weighted MRI). The gamma distribution is a common choice for this purpose (particularly for the inferential approach) because it resembles the distribution profile of measured axon diameters, which has been consistently shown to be non-negative and right-skewed. In this study we compared a wide range of parametric probability distribution functions against empirical data obtained from electron microscopy images. We observed that the gamma distribution fails to accurately describe the main characteristics of the axon diameter distribution, such as the location and scale of the mode and the profile of the distribution tails. We also found that the generalized extreme value distribution consistently fitted the measured distribution better than other distribution functions. This suggests that there may be distinct subpopulations of axons in the corpus callosum, each with its own distribution profile. In addition, we observed that several other distributions outperformed the gamma distribution while having the same number of unknown parameters; these were the inverse Gaussian, log-normal, log-logistic and Birnbaum-Saunders distributions.
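    The comparison itself is easy to reproduce in outline (our sketch with synthetic, GEV-like stand-in data, not the paper's microscopy measurements): fit each candidate distribution by maximum likelihood and rank the fits by AIC.

```python
# Fit several candidate distributions to axon-diameter-like samples and
# rank them by AIC, mirroring the paper's gamma-vs-alternatives comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-in for measured diameters: right-skewed, non-negative [micrometers]
diam = stats.genextreme.rvs(c=-0.2, loc=0.7, scale=0.25, size=2000,
                            random_state=rng)
diam = diam[diam > 0]

candidates = {
    "gamma": stats.gamma,
    "generalized extreme value": stats.genextreme,
    "inverse Gaussian": stats.invgauss,
    "log-normal": stats.lognorm,
    "log-logistic": stats.fisk,
}
for name, dist in candidates.items():
    params = dist.fit(diam)
    ll = np.sum(dist.logpdf(diam, *params))
    aic = 2 * len(params) - 2 * ll        # lower AIC = better fit
    print(f"{name:28s} AIC = {aic:8.1f}")
```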

  19. Probability of detection as a function of multiple influencing parameters

    International Nuclear Information System (INIS)

    Pavlovic, Mato

    2014-01-01

    Non-destructive testing is subject to measurement uncertainties. In safety-critical applications, a reliability assessment of its capability to detect flaws is therefore necessary. In most applications, the flaw size is the single most important parameter influencing the probability of detection (POD) of the flaw, which is why the POD is typically calculated and expressed as a function of the flaw size. The capability of the inspection system to detect flaws is established by comparing the size of the reliably detected flaw with the size of the flaw that is critical for structural integrity. This dissertation investigates applications where several factors have an important influence on the POD. To devise a reliable estimate of the NDT system capability it is necessary to express the POD as a function of all these factors. A multi-parameter POD model is developed which enables the POD to be calculated and expressed as a function of several influencing parameters. The model was tested on data from the ultrasonic inspection of copper and cast iron components with artificial flaws. In addition, a technique to present POD data spatially, called the volume POD, is developed. POD data coming from multiple inspections of the same component with different sensors are fused to arrive at the overall POD of the inspection system.

  20. Surprisal analysis and probability matrices for rotational energy transfer

    International Nuclear Information System (INIS)

    Levine, R.D.; Bernstein, R.B.; Kahana, P.; Procaccia, I.; Upchurch, E.T.

    1976-01-01

    The information-theoretic approach is applied to the analysis of state-to-state rotational energy transfer cross sections. The rotational surprisal is evaluated in the usual way, in terms of the deviance of the cross sections from their reference ("prior") values. The surprisal is found to be an essentially linear function of the energy transferred. This behavior accounts for the experimentally observed exponential gap law for the hydrogen halide systems. The data base here analyzed (taken from the literature) is largely computational in origin: quantal calculations for the hydrogenic systems H2 + H, He, Li+; HD + He; D2 + H and for the N2 + Ar system; and classical trajectory results for H2 + Li+; D2 + Li+ and N2 + Ar. The surprisal analysis not only serves to compact a large body of data but also aids in the interpretation of the results. A single surprisal parameter θ_R suffices to account for the (relative) magnitude of all state-to-state inelastic cross sections at a given energy

  1. Probability Density Function Method for Observing Reconstructed Attractor Structure

    Institute of Scientific and Technical Information of China (English)

    陆宏伟; 陈亚珠; 卫青

    2004-01-01

    A probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, this is the first time that the PDF method has been put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are complex dynamical systems of dimension about 6-6.5. It is found that the PDF is not symmetrically distributed when the time delay is small, while it satisfies a Gaussian distribution when the time delay is big enough. A cluster-effect mechanism is presented to explain this phenomenon. The study of the PDF shapes clearly indicates that the time delay plays a more important role than the embedding dimension in the reconstruction. The results demonstrate that the PDF method is a promising numerical approach for observing the structure of the reconstructed attractor and may provide more information and new diagnostic potential for the analyzed cardiac system.
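    A toy sketch of the procedure (our surrogate logistic-map signal, not the RR-interval data): delay-embed the series and inspect the PDF of phase-point distances from the attractor centroid for a small versus a large time delay.

```python
# Delay embedding plus a kernel-density look at the phase-point spread.
import numpy as np
from scipy.stats import gaussian_kde, skew

def embed(x, dim, tau):
    """Time-delay embedding of a scalar series into dim dimensions."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

# Surrogate signal: a chaotic logistic-map series
x = np.empty(5000)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])

for tau in (1, 20):
    pts = embed(x, dim=6, tau=tau)
    r = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    kde = gaussian_kde(r)                     # smoothed PDF of the distances
    grid = np.linspace(r.min(), r.max(), 200)
    mode = grid[np.argmax(kde(grid))]         # location of the PDF's peak
    print(f"tau={tau:2d}: PDF mode at r={mode:.2f}, skewness={skew(r):+.2f}")
```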

  2. Measurement of low energy neutrino absorption probability in thallium 205

    International Nuclear Information System (INIS)

    Freedman, M.S.

    1986-01-01

    A major aspect of the P-P neutrino flux determination using thallium 205 is the very difficult problem of experimentally demonstrating the neutrino reaction cross section with about 10% accuracy. One will soon be able to completely strip the electrons from atomic thallium 205 and to maintain the bare nucleus in this state in the heavy storage ring to be built at GSI Darmstadt. This nucleus can decay by emitting a beta-minus particle into the bound K-level of the daughter lead 205 ion as the only energetically open decay channel, (plus, of course, an antineutrino). This single channel beta decay explores the same nuclear wave functions of initial and final states as does the neutrino capture in atomic thallium 205, and thus its probability or rate is governed by the same nuclear matrix elements that affect both weak interactions. Measuring the rate of accumulation of lead 205 ions in the circulating beam of thallium 205 ions gives directly the cross section of the neutrino capture reaction. The calculations of the expected rates under realistic experimental conditions will be shown to be very favorable for the measurement. A special calibration experiment to verify this method and check the theoretical calculations will be suggested. Finally, the neutrino cross section calculation based on the observed rate of the single channel beta-minus decay reaction will be shown. Demonstrating bound state beta decay may be the first verification of the theory of this very important process that influences beta decay rates of several isotopes in stellar interiors, e.g., Re-187, that play important roles in geologic and cosmologic dating and nucleosynthesis. 21 refs., 2 figs

  3. Probability of K atomic shell ionization by heavy particles impact, in functions of the scattering angle

    International Nuclear Information System (INIS)

    Oliveira, P.M.C. de.

    1976-12-01

    A method for calculating the K atomic shell ionization probability by heavy particle impact in the semi-classical approximation is presented. In this approximation the projectile follows a classical trajectory, and the potential energy due to the projectile is treated as a perturbation of the Hamiltonian of the neutral atom. Scaled Thomas-Fermi wave functions are used for the atomic electrons. The method is valid for elements of intermediate atomic number and particle energies of a few MeV. Probabilities are calculated for the case of Ag (Z = 47) and protons of 1 and 2 MeV. Results are given as a function of scattering angle; they agree well with known experimental data and improve on older calculations. (Author) [pt

  4. Assumed Probability Density Functions for Shallow and Deep Convection

    Directory of Open Access Journals (Sweden)

    Steven K Krueger

    2010-10-01

    Full Text Available The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested as a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PDF families are based on the double Gaussian form and the remaining two are the single Gaussian and a Double Delta Function (analogous to a mass flux model). The assumed PDF method is tested for grid sizes from as small as 0.4 km to as large as 204.8 km. In addition, studies are performed for PDF sensitivity to errors in the input moments and for how well the PDFs diagnose some higher-order moments. In general, the double Gaussian PDFs more accurately represent SGS cloud structure and turbulence moments in the boundary layer compared to the single Gaussian and Double Delta Function PDFs for the range of grid sizes tested. This is especially true for small SGS cloud fractions. While the most complex PDF, Lewellen-Yoh, better represents shallow convective cloud properties (cloud fraction and liquid water mixing ratio) compared to the less complex Analytic Double Gaussian 1 PDF, there appears to be no advantage in implementing Lewellen-Yoh for deep convection. However, the Analytic Double Gaussian 1 PDF better represents the liquid water flux, is less sensitive to errors in the input moments, and diagnoses higher order moments more accurately. Between the Lewellen-Yoh and Analytic Double Gaussian 1 PDFs, it appears that neither family is distinctly better at representing cloudy layers. However, due to the reduced computational cost and fairly robust results, it appears that the Analytic Double Gaussian 1 PDF could be an ideal family for SGS cloud and turbulence
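    The simplest member of the family is easy to illustrate (a minimal sketch under a single-Gaussian assumption in the spirit of the classic Sommeria-Deardorff diagnosis; the numbers are invented): subgrid cloud fraction follows from the grid-mean saturation deficit and the SGS variance of total water.

```python
# Single-Gaussian assumed-PDF diagnosis of subgrid cloud fraction.
from math import erf, sqrt

def cloud_fraction_gaussian(qt_mean, qsat, sigma_s):
    """Cloud fraction when total water qt is Gaussian within the grid box."""
    Q1 = (qt_mean - qsat) / sigma_s        # normalized saturation deficit
    return 0.5 * (1.0 + erf(Q1 / sqrt(2.0)))

# Grid mean 0.2 g/kg below saturation, SGS std dev 0.4 g/kg -> partial cloud
print(cloud_fraction_gaussian(qt_mean=9.8, qsat=10.0, sigma_s=0.4))  # ~0.31
```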

  5. Research on Energy-Saving Design of Overhead Travelling Crane Camber Based on Probability Load Distribution

    Directory of Open Access Journals (Sweden)

    Tong Yifei

    2014-01-01

    Full Text Available The crane is a mechanical device widely used to move materials in modern production. It is reported that the energy consumption of China is at least 5-8 times that of other developing countries. Thus, energy consumption becomes an unavoidable topic. Several factors influence the energy loss, and the camber of the girder is one that should not be neglected. In this paper, the problem of the deflections induced by a moving payload in the girder of an overhead travelling crane is examined. The evaluation of a camber giving a counterdeflection of the girder is proposed in order to minimize the energy consumed by the trolley moving along a non-straight support. To this aim, probabilistic payload distributions are considered instead of the fixed or rated loads used in other research. Taking a 50/10 t bridge crane as the research object, the probability loads are determined by analysis of load distribution density functions. According to the load distribution, camber design under different probability loads is discussed in detail, as well as the distribution of energy consumption. The results provide a design reference for the reasonable camber giving the least energy consumption for climbing corresponding to different P0; thus an energy-saving design can be achieved.

  6. Interactive design of probability density functions for shape grammars

    KAUST Repository

    Dang, Minh; Lienhard, Stefan; Ceylan, Duygu; Neubert, Boris; Wonka, Peter; Pauly, Mark

    2015-01-01

    A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density

  7. On the discretization of probability density functions and the ...

    Indian Academy of Sciences (India)

    important for most applications or theoretical problems of interest. In statistics ... In probability theory, statistics, statistical mechanics, communication theory, and other .... (1) by taking advantage of SMVT as a general mathematical approach.

  8. Relationship between the Wigner function and the probability density function in quantum phase space representation

    International Nuclear Information System (INIS)

    Li Qianshu; Lue Liqiang; Wei Gongmin

    2004-01-01

    This paper discusses the relationship between the Wigner function, along with other related quasiprobability distribution functions, and the probability density distribution function constructed from the wave function of the Schroedinger equation in quantum phase space, as formulated by Torres-Vega and Frederick (TF). At the same time, a general approach to solving the wave function of the Schroedinger equation in TF quantum phase space theory is proposed. The relationship between the wave functions in the TF quantum phase space representation and in the coordinate or momentum representation is thus revealed

  9. Pairwise contact energy statistical potentials can help to find probability of point mutations.

    Science.gov (United States)

    Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S

    2017-01-01

    To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high-resolution structures. Several methods based on statistical potentials extracted from unrelated proteins are found to give a better prediction of the probability of point mutations. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool for examining the probability of point mutation. With this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triose phosphate isomerase enzyme, for which experimental results have already been reported. We have also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energies between the wildtype and various point mutations reveals the probability of mutation at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better than I-Mutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials and performing molecular dynamics simulations of functionally important folds could help us to predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.

  10. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the region in which x lies, with breakpoints at x = ±4.0. In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x) = 1.0 - erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x) = 1.0 - erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x) = 1.0 - erf(x). This subtraction may cause partial or total loss of significance for certain values of x
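    A minimal sketch of the same strategy (using the classical Abramowitz & Stegun formula 7.1.26, max error about 1.5e-7, which is not necessarily the rational form used by the ERF/ERFC package itself):

```python
# Rational approximation of erf, with erfc obtained via erfc = 1 - erf.
import math

def erf_rational(x: float) -> float:
    """Rational approximation of erf(x) for any real x (A&S 7.1.26)."""
    sign = 1.0 if x >= 0 else -1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
           + t * (-1.453152027 + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))

def erfc_rational(x: float) -> float:
    # Caution (as the abstract warns): 1 - erf(x) loses significance for
    # large x; a production code computes erfc directly in that region.
    return 1.0 - erf_rational(x)

for x in (-2.0, 0.5, 3.0):
    print(x, erf_rational(x), math.erf(x))   # compare with the library value
```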

  11. 77 FR 5711 - Guidelines for Determining Probability of Causation Under the Energy Employees Occupational...

    Science.gov (United States)

    2012-02-06

    ... Guidelines for Determining Probability of Causation Under the Energy Employees Occupational Illness... provide a technical review of a proposed amendment to the probability of causation guidelines.\\2\\ All of..., and hence had required DOL to assign a probability of causation value of ``zero.'' There were two...

  12. Transition probabilities and dissociation energies of MnH and MnD molecules

    International Nuclear Information System (INIS)

    Nagarajan, K.; Rajamanickam, N.

    1997-01-01

    The Franck-Condon factors (vibrational transition probabilities) and r-centroids have been evaluated by the more reliable numerical integration procedure for the bands of the A-X system of the MnH and MnD molecules, using a suitable potential. By fitting the Hulburt-Hirschfelder function to the experimental potential curve using the correlation coefficient, the dissociation energies for the electronic ground states of the MnH and MnD molecules have been estimated as D_0^0 = 251 ± 5 kJ·mol^-1 and D_0^0 = 312 ± 6 kJ·mol^-1, respectively. (authors)

  13. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)

  14. The force distribution probability function for simple fluids by density functional theory.

    Science.gov (United States)

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula gives P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.

  15. On the probability density interpretation of smoothed Wigner functions

    International Nuclear Information System (INIS)

    De Aguiar, M.A.M.; Ozorio de Almeida, A.M.

    1990-01-01

    It has been conjectured that the averages of the Wigner function over phase space volumes larger than those of minimum uncertainty are always positive. This is true for Gaussian averaging, so that the Husimi distribution is positive. However, we provide a specific counterexample for averaging with a discontinuous hat function. The analysis of the specific system of a one-dimensional particle in a box also elucidates the respective advantages of the Wigner and the Husimi functions for the study of the semiclassical limit. The falsification of the averaging conjecture is shown not to depend on the discontinuities of the hat function, by considering the latter as the limit of a sequence of analytic functions. (author)

  16. Systematics of the breakup probability function for {sup 6}Li and {sup 7}Li projectiles

    Energy Technology Data Exchange (ETDEWEB)

    Capurro, O.A., E-mail: capurro@tandar.cnea.gov.ar [Laboratorio TANDAR, Comisión Nacional de Energía Atómica, Av. General Paz 1499, B1650KNA San Martín, Buenos Aires (Argentina); Pacheco, A.J.; Arazi, A. [Laboratorio TANDAR, Comisión Nacional de Energía Atómica, Av. General Paz 1499, B1650KNA San Martín, Buenos Aires (Argentina); CONICET, Av. Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Carnelli, P.F.F. [CONICET, Av. Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Instituto de Investigación e Ingeniería Ambiental, Universidad Nacional de San Martín, 25 de Mayo y Francia, B1650BWA San Martín, Buenos Aires (Argentina); Fernández Niello, J.O. [Laboratorio TANDAR, Comisión Nacional de Energía Atómica, Av. General Paz 1499, B1650KNA San Martín, Buenos Aires (Argentina); CONICET, Av. Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Instituto de Investigación e Ingeniería Ambiental, Universidad Nacional de San Martín, 25 de Mayo y Francia, B1650BWA San Martín, Buenos Aires (Argentina); and others

    2016-01-15

    Experimental non-capture breakup cross sections can be used to determine the probability of projectile and ejectile fragmentation in nuclear reactions involving weakly bound nuclei. Recently, the probability of both types of dissociation has been analyzed for nuclear reactions involving {sup 9}Be projectiles on various heavy targets at sub-barrier energies. In the present work we extend this kind of systematic analysis to {sup 6}Li and {sup 7}Li projectiles with the purpose of investigating general features of projectile-like breakup probabilities in reactions induced by stable weakly bound nuclei. For that purpose we have obtained the probabilities of projectile and ejectile breakup for a large number of systems, starting from a compilation of the corresponding reported non-capture breakup cross sections. We parametrize the results in accordance with the previous studies for beryllium projectiles, and we discuss their systematic behavior as a function of the projectile, the target mass and the reaction Q-value.

  17. Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function

    Science.gov (United States)

    Fennell, John; Baddeley, Roland

    2012-01-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…

  18. Consolidity analysis for fully fuzzy functions, matrices, probability and statistics

    Directory of Open Access Journals (Sweden)

    Walaa Ibrahim Gabr

    2015-03-01

    Full Text Available The paper presents a comprehensive review of the know-how for developing the systems consolidity theory for modeling, analysis, optimization and design in a fully fuzzy environment. The development of systems consolidity theory included its extension to handle new functions of different dimensionalities, fuzzy analytic geometry, fuzzy vector analysis, functions of fuzzy complex variables, ordinary differentiation of fuzzy functions and partial fractions of fuzzy polynomials. On the other hand, the handling of fuzzy matrices covered determinants of fuzzy matrices, the eigenvalues of fuzzy matrices, and solving least-squares fuzzy linear equations. The approach was also demonstrated to be applicable in a systematic way to new fuzzy probabilistic and statistical problems. This included extending conventional probabilistic and statistical analysis to handle fuzzy random data. Applications also covered the consolidity of fuzzy optimization problems. The various numerical examples solved demonstrate that the new consolidity concept is highly effective in solving in a compact form the propagation of fuzziness in linear, nonlinear, multivariable and dynamic problems with different types of complexities. Finally, it is demonstrated that the suggested fuzzy mathematics can be easily embedded within normal mathematics through building a special fuzzy function library inside the computational Matlab Toolbox or using other similar software languages.

  19. Assembly for the measurement of the most probable energy of directed electron radiation

    International Nuclear Information System (INIS)

    Geske, G.

    1987-01-01

    This invention relates to a setup for the measurement of the most probable energy of directed electron radiation up to 50 MeV. The known energy-range relationship with regard to the absorption of electron radiation in matter is utilized by an absorber with two groups of interconnected radiation detectors embedded in it. The most probable electron beam energy is derived from the quotient of both groups' signals

  20. 76 FR 36891 - Guidelines for Determining Probability of Causation Under the Energy Employees Occupational...

    Science.gov (United States)

    2011-06-23

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES 42 CFR Part 81 [Docket Number NIOSH-0209] RIN 0920-AA39 Guidelines for Determining Probability of Causation Under the Energy Employees Occupational Illness...: HHS published a proposed rule entitled ``Guidelines for Determining Probability of Causation Under the...

  1. Impact of proof test interval and coverage on probability of failure of safety instrumented function

    International Nuclear Information System (INIS)

    Jin, Jianghong; Pang, Lei; Hu, Bin; Wang, Xiaodong

    2016-01-01

    Highlights: • The introduction of proof test coverage makes the calculation of the probability of failure for a SIF more accurate. • The probability of failure undetected by the proof test is independently defined as P_TIF and calculated. • P_TIF is quantified using a reliability block diagram and a simple formula for PFD_avg. • Improving proof test coverage and adopting a reasonable test period can reduce the probability of failure for a SIF. - Abstract: Imperfection of the proof test can result in the safety function failure of a safety instrumented system (SIS) at any time during its life period. IEC 61508 and other references have ignored or only superficially analyzed the imperfection of the proof test. In order to further study the impact of proof test imperfection on the probability of failure of a safety instrumented function (SIF), the necessity of the proof test and the influence of its imperfection on system performance are first analyzed theoretically. The probability of failure of the safety instrumented function resulting from the imperfection of the proof test is defined as the probability of test-independent failures (P_TIF), and P_TIF is calculated separately by introducing proof test coverage and adopting a reliability block diagram, with reference to the simplified calculation formula for the average probability of failure on demand (PFD_avg). The results show that a shorter proof test period and higher proof test coverage give a smaller probability of failure for the safety instrumented function, and that the probability of failure calculated with proof test coverage included is more accurate.
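    A minimal numerical sketch of the split the abstract describes (standard IEC 61508-style approximations with invented parameter values, not necessarily the paper's exact formulas): failures the proof test can reveal renew every test interval, while failures it misses persist until the end of the system lifetime.

```python
# PFD_avg of a 1oo1 safety function with imperfect proof testing.
lam_du = 2e-6     # dangerous undetected failure rate [1/h] (illustrative)
TI = 8760.0       # proof test interval [h]
LT = 175200.0     # system lifetime [h] (20 years)
PTC = 0.9         # proof test coverage

pfd_tested = PTC * lam_du * TI / 2.0          # renewed by each proof test
p_tif = (1.0 - PTC) * lam_du * LT / 2.0       # never revealed by proof test
pfd_avg = pfd_tested + p_tif

print(f"PFD from tested failures : {pfd_tested:.2e}")
print(f"P_TIF (test-independent) : {p_tif:.2e}")
print(f"Total PFD_avg            : {pfd_avg:.2e}")
```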

  2. Energies and transition probabilities from the full solution of nuclear quadrupole-octupole model

    International Nuclear Information System (INIS)

    Strecker, M.; Lenske, H.; Minkov, N.

    2013-01-01

    A collective model of nuclear quadrupole-octupole vibrations and rotations, originally restricted to a coherent interplay between quadrupole and octupole modes, is now developed for application beyond this restriction. The eigenvalue problem is solved by diagonalizing the unrestricted Hamiltonian in the basis of the analytic solution obtained under the coherent-mode assumption. Within this scheme the yrast alternating-parity band is constructed from the lowest eigenvalues having the appropriate parity at given angular momentum. Additionally, we include the calculation of transition probabilities, which are fitted simultaneously with the energies. As a result we obtain a unique set of parameters. The obtained model parameters unambiguously determine the shape of the quadrupole-octupole potential. From the resulting wave functions, quadrupole deformation expectation values are calculated and found to be in agreement with experimental values. (author)

  3. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    Science.gov (United States)

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process; this induces errors in the estimated HOS parameters and hinders real-time, precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the sEMG PDF shape behavior observed during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample-size effects and more accurate in sEMG PDF shape screening applications.

  4. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to address the problems of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm, subset simulation based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of the passive system and the numerical values of its input parameters were considered in this paper, and the probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computing efficiency and excellent computing accuracy compared with traditional probability analysis methods. (authors)
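    A minimal sketch of the algorithm on a generic toy limit state (not the AP1000 model; N, p0 and the threshold are invented): each level keeps the p0 fraction of most extreme samples, Metropolis chains repopulate the conditional distribution, and the failure probability is accumulated as a product of p0 factors plus a final direct estimate.

```python
# Subset simulation for a rare failure event of standard normal inputs.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
dim, N, p0 = 2, 1000, 0.1         # input dimension, samples/level, level prob.
threshold = 5.5                   # failure event: x1 + x2 > 5.5 (rare)

def g(x):
    """Toy limit-state function; failure when g(x) > threshold."""
    return x.sum(axis=-1)

x = rng.standard_normal((N, dim)) # level 0: plain Monte Carlo
gx = g(x)
prob = 1.0
for level in range(20):
    b = np.quantile(gx, 1 - p0)   # intermediate threshold for this level
    if b >= threshold:            # final level: finish with a direct estimate
        prob *= np.mean(gx > threshold)
        break
    prob *= p0                    # P(next subset | current subset) = p0
    cur, gcur = x[gx > b], gx[gx > b]
    samples, gvals, total = [cur], [gcur], len(cur)
    while total < N:
        # Random-walk Metropolis targeting the standard normal restricted
        # to the current subset {g > b}.
        cand = cur + rng.uniform(-1, 1, cur.shape)
        log_ratio = -((cand**2).sum(1) - (cur**2).sum(1)) / 2
        move = np.log(rng.random(len(cur))) < log_ratio
        cand = np.where(move[:, None], cand, cur)
        gc = g(cand)
        inside = gc > b           # reject moves that leave the subset
        cand = np.where(inside[:, None], cand, cur)
        gc = np.where(inside, gc, gcur)
        samples.append(cand); gvals.append(gc)
        cur, gcur, total = cand, gc, total + len(cand)
    x, gx = np.concatenate(samples)[:N], np.concatenate(gvals)[:N]

exact = norm.sf(threshold / np.sqrt(dim))   # x1 + x2 ~ Normal(0, sqrt(2))
print(f"subset simulation: {prob:.2e}   exact: {exact:.2e}")
```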

  5. Probability laws related to the Jacobi theta and Riemann zeta function and Brownian excursions

    OpenAIRE

    Biane, P.; Pitman, J.; Yor, M.

    1999-01-01

    This paper reviews known results which connect Riemann's integral representations of his zeta function, involving Jacobi's theta function and its derivatives, to some particular probability laws governing sums of independent exponential variables. These laws are related to one-dimensional Brownian motion and to higher dimensional Bessel processes. We present some characterizations of these probability laws, and some approximations of Riemann's zeta function which are related to these laws.

  6. Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.

    Science.gov (United States)

    Fennell, John; Baddeley, Roland

    2012-10-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to two of the observed effects: overweighting of low probabilities and underweighting of high probabilities. We then investigate two plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
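    A minimal sketch of the qualitative mechanism (our illustration with an invented evidence-strength parameter, not the authors' blog-derived priors): treating a stated probability as noisy evidence and shrinking it toward a Beta prior already overweights small probabilities and underweights large ones.

```python
# Posterior-mean "weighting" of a stated probability under a Beta prior.
def weight(p, prior_a=1.0, prior_b=1.0, s=6.0):
    """Posterior mean of the true probability, treating the stated p as
    s pseudo-observations combined with a Beta(prior_a, prior_b) prior."""
    return (s * p + prior_a) / (s + prior_a + prior_b)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"stated p = {p:4.2f} -> weighted w(p) = {weight(p):.3f}")
# Small p comes out larger than stated and large p smaller: rational
# shrinkage toward the prior reproduces the basic over/underweighting.
```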

  7. Compact baby universe model in ten dimension and probability function of quantum gravity

    International Nuclear Information System (INIS)

    Yan Jun; Hu Shike

    1991-01-01

    The quantum probability functions are calculated for a ten-dimensional compact baby universe model. The authors find that the probability for the Yang-Mills baby universe to undergo a spontaneous compactification down to a four-dimensional spacetime is greater than the probability of remaining in the original homogeneous multidimensional state. Some questions about the large-wormhole catastrophe are also discussed

  8. Energy density functional analysis of shape coexistence in 44S

    International Nuclear Information System (INIS)

    Li, Z. P.; Yao, J. M.; Vretenar, D.; Nikšić, T.; Meng, J.

    2012-01-01

    The structure of low-energy collective states in the neutron-rich nucleus 44 S is analyzed using a microscopic collective Hamiltonian model based on energy density functionals (EDFs). The calculated triaxial energy map, low-energy spectrum and corresponding probability distributions indicate a coexistence of prolate and oblate shapes in this nucleus.

  9. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    Science.gov (United States)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking ζ_b(t) is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  10. Nonlocal kinetic energy functionals by functional integration

    Science.gov (United States)

    Mi, Wenhui; Genova, Alessandro; Pavanello, Michele

    2018-05-01

    Since the seminal studies of Thomas and Fermi, researchers in the Density-Functional Theory (DFT) community have been searching for accurate electron density functionals. Arguably, the toughest functional to approximate is the noninteracting kinetic energy, Ts[ρ], the subject of this work. The typical paradigm is to first approximate the energy functional and then take its functional derivative, δTs[ρ]/δρ(r), yielding a potential that can be used in orbital-free DFT or subsystem DFT simulations. Here, this paradigm is challenged by constructing the potential from the second functional derivative via functional integration. A new nonlocal functional for Ts[ρ] is prescribed [which we dub Mi-Genova-Pavanello (MGP)] having a density-independent kernel. MGP is constructed to satisfy three exact conditions: (1) a nonzero "Kinetic electron" arising from a nonzero exchange hole; (2) the second functional derivative must reduce to the inverse Lindhard function in the limit of homogeneous densities; (3) the potential is derived from functional integration of the second functional derivative. Pilot calculations show that MGP is capable of reproducing accurate equilibrium volumes, bulk moduli, total energies, and electron densities for metallic (body-centered cubic, face-centered cubic) and semiconducting (crystal diamond) phases of silicon as well as of III-V semiconductors. The MGP functional is found to be numerically stable, typically reaching self-consistency within 12 iterations of a truncated Newton minimization algorithm. MGP's computational cost and memory requirements are low and comparable to the Wang-Teter nonlocal functional or any generalized gradient approximation functional.

  11. The distribution function of a probability measure on a space with a fractal structure

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Granero, M.A.; Galvez-Rodriguez, J.F.

    2017-07-01

    In this work we show how to define a probability measure with the help of a fractal structure. One of the keys of this approach is to use the completion of the fractal structure. Then we use the theory of a cumulative distribution function on a Polish ultrametric space and describe it in this context. Finally, with the help of fractal structures, we prove that a function satisfying the properties of a cumulative distribution function on a Polish ultrametric space is a cumulative distribution function with respect to some probability measure on the space. (Author)

  12. Blue functions: probability and current density propagators in non-relativistic quantum mechanics

    International Nuclear Information System (INIS)

    Withers, L P Jr

    2011-01-01

    Like a Green function to propagate a particle's wavefunction in time, a Blue function is introduced to propagate the particle's probability and current density. Accordingly, the complete Blue function has four components. They are constructed from path integrals involving a quantity like the action that we call the motion. The Blue function acts on the displaced probability density as the kernel of an integral operator. As a result, we find that the Wigner density occurs as an expression for physical propagation. We also show that, in quantum mechanics, the displaced current density is conserved bilocally (in two places at one time), as expressed by a generalized continuity equation. (paper)

  13. Diagnostic probability function for acute coronary heart disease garnered from experts' tacit knowledge.

    Science.gov (United States)

    Steurer, Johann; Held, Ulrike; Miettinen, Olli S

    2013-11-01

    Knowing a diagnostic probability requires general knowledge about the way in which the probability depends on the diagnostic indicators involved in the specification of the case at issue. Diagnostic probability functions (DPFs) are generally unavailable at present. Our objective was to illustrate how diagnostic experts' case-specific tacit knowledge about diagnostic probabilities could be garnered in the form of DPFs. Focusing on the diagnosis of acute coronary heart disease (ACHD), we presented a set of hypothetical cases, specified in terms of an inclusive set of diagnostic indicators, to doctors with extensive experience in hospital emergency departments. We translated the medians of these experts' case-specific probabilities into a logistic DPF for ACHD. The principal result was the experts' typical diagnostic probability for ACHD as a joint function of the set of diagnostic indicators. A related result of note was the finding that the experts' probabilities for any given case had a surprising degree of variability. Garnering diagnostic experts' case-specific tacit knowledge about diagnostic probabilities in the form of DPFs is feasible to accomplish. Thus, once the methodology of this type of work has been "perfected," practice-guiding diagnostic expert systems can be developed. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Using the probability method for multigroup calculations of reactor cells in a thermal energy range

    International Nuclear Information System (INIS)

    Rubin, I.E.; Pustoshilova, V.S.

    1984-01-01

    The possibility of using the transmission probability method with performance interpolation for determining the spatial-energy neutron flux distribution in cells of thermal heterogeneous reactors is considered. The results of multigroup calculations of several uranium-water plane and cylindrical cells with different fuel enrichment in the thermal energy range are given. High accuracy is obtained with low computer time consumption. The use of the transmission probability method is particularly reasonable in algorithms of programs intended for computers with a significant reserve of internal memory

  15. EDF: Computing electron number probability distribution functions in real space from molecular wave functions

    Science.gov (United States)

    Francisco, E.; Pendás, A. Martín; Blanco, M. A.

    2008-04-01

    Given an N-electron molecule and an exhaustive partition of real space (R³) into m arbitrary regions Ω_1, Ω_2, …, Ω_m (⋃_{i=1..m} Ω_i = R³), the edf program computes all the probabilities P(n_1, n_2, …, n_m) of having exactly n_1 electrons in Ω_1, n_2 electrons in Ω_2, …, and n_m electrons (n_1 + n_2 + ⋯ + n_m = N) in Ω_m. Each Ω_i may correspond to a single basin (atomic domain) or several such basins (functional group). In the latter case, each atomic domain must belong to a single Ω_i. The program can manage both single- and multi-determinant wave functions which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinantal wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n_1, n_2, …, n_m) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n_1, n_2, …, n_m) probabilities into α and β spin components. Program summary: Program title: edf. Catalogue identifier: AEAJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5387. No. of bytes in distributed program, including test data, etc.: 52 381. Distribution format: tar.gz. Programming language: Fortran 77. Computer

  16. Meteorological evaluation of multiple reactor contamination probabilities for a Hanford Nuclear Energy Center

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Diebel, D.I.

    1978-03-01

    The conceptual Hanford energy center is composed of nuclear power plants, hence the name Hanford Nuclear Energy Center (HNEC). Previous topical reports have covered a variety of subjects related to the HNEC including: electric power transmission, fuel cycle, and heat disposal. This report discusses the probability that a radiation release from a single reactor in the HNEC would contaminate other facilities in the center. The risks, in terms of reliability of generation, of this potential contamination are examined by Clark and Dowis

  17. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable, error-prone communication channels and devices; error detection codes allow such errors to be detected. There are two classes of error detecting codes - classical codes and security-oriented codes. Classical codes detect a high percentage of errors, but they have a high probability of missing an error introduced by algebraic manipulation. Security-oriented codes, in turn, are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes, and a detailed study of this parameter allows the behavior of an error-correcting code to be analyzed in the case of error injection into the encoding device. The complexity of the encoding function, in turn, plays an important role in security-oriented codes: encoding functions with less computational complexity and a low masking probability provide the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution; in particular, increasing computational complexity decreases the difference between the maximum and average values of the error masking probability. Our results have shown that functions with greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, in the case of a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach to measuring the error masking
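    A toy sketch of the error-masking statistic (our construction, not the paper's codes): k-bit messages carry one check bit, and for every nonzero error e, Q(e) is the fraction of messages for which the corrupted word is still a valid codeword. A linear check masks each error with probability 0 or 1; a nonlinear (quadratic) check spreads Q(e) between them and lowers its maximum, illustrating the effect of encoding-function complexity.

```python
# Brute-force error masking probability for a linear vs. nonlinear check bit.
from itertools import product

k = 6  # message length in bits

def parity(x):                        # linear check bit
    return sum(x) % 2

def quadratic(x):                     # nonlinear check: x0*x1 + x2*x3 + ...
    return sum(x[i] * x[i + 1] for i in range(0, k - 1, 2)) % 2

msgs = list(product((0, 1), repeat=k))
for name, f in (("linear parity", parity), ("quadratic", quadratic)):
    q_max = 0.0
    for e in product((0, 1), repeat=k + 1):    # error on message + check bit
        if not any(e):
            continue                           # skip the zero (no-op) error
        masked = sum(f(tuple(m ^ b for m, b in zip(x, e[:k]))) == (f(x) ^ e[k])
                     for x in msgs)
        q_max = max(q_max, masked / len(msgs))
    print(f"{name:13s}: max Q(e) over nonzero errors = {q_max:.3f}")
```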

  18. Microscopically Based Nuclear Energy Functionals

    International Nuclear Information System (INIS)

    Bogner, S. K.

    2009-01-01

    A major goal of the SciDAC project 'Building a Universal Nuclear Energy Density Functional' is to develop next-generation nuclear energy density functionals that give controlled extrapolations away from stability with improved performance across the mass table. One strategy is to identify missing physics in phenomenological Skyrme functionals based on our understanding of the underlying internucleon interactions and microscopic many-body theory. In this contribution, I describe ongoing efforts to use the density matrix expansion of Negele and Vautherin to incorporate missing finite-range effects from the underlying two- and three-nucleon interactions into phenomenological Skyrme functionals.

  19. CGC/saturation approach for soft interactions at high energy: survival probability of central exclusive production

    Energy Technology Data Exchange (ETDEWEB)

    Gotsman, E.; Maor, U. [Tel Aviv University, Department of Particle Physics, Raymond and Beverly Sackler Faculty of Exact Science, School of Physics and Astronomy, Tel Aviv (Israel); Levin, E. [Tel Aviv University, Department of Particle Physics, Raymond and Beverly Sackler Faculty of Exact Science, School of Physics and Astronomy, Tel Aviv (Israel); Universidad Tecnica Federico Santa Maria, Departemento de Fisica, Centro Cientifico-Tecnologico de Valparaiso, Valparaiso (Chile)

    2016-04-15

    We estimate the value of the survival probability for central exclusive production in a model which is based on the CGC/saturation approach. Hard and soft processes are described in the same framework. At LHC energies, we obtain a small value for the survival probability. The source of the small value is the impact parameter dependence of the hard amplitude. Our model has successfully described a large body of soft data: elastic, inelastic and diffractive cross sections, inclusive production and rapidity correlations, as well as the t-dependence of deep inelastic diffractive production of vector mesons. (orig.)

  20. Critically Evaluated Energy Levels, Spectral Lines, Transition Probabilities, and Intensities of Neutral Vanadium (V i)

    Energy Technology Data Exchange (ETDEWEB)

    Saloman, Edward B. [Dakota Consulting, Inc., 1110 Bonifant Street, Suite 310, Silver Spring, MD 20910 (United States); Kramida, Alexander [National Institute of Standards and Technology, Gaithersburg, MD 20899 (United States)

    2017-08-01

    The energy levels, observed spectral lines, and transition probabilities of the neutral vanadium atom, V i, have been compiled. Also included are values for some forbidden lines that may be of interest to the astrophysical community. Experimental Landé g -factors and leading percentage compositions for the levels are included where available, as well as wavelengths calculated from the energy levels (Ritz wavelengths). Wavelengths are reported for 3985 transitions, and 549 energy levels are determined. The observed relative intensities normalized to a common scale are provided.

  1. Modelling the Probability Density Function of IPTV Traffic Packet Delay Variation

    Directory of Open Access Journals (Sweden)

    Michal Halas

    2012-01-01

    Full Text Available This article deals with modelling the probability density function of IPTV traffic packet delay variation, which is useful for efficient de-jitter buffer estimation. When an IP packet travels across a network, it experiences delay and delay variation. This variation is caused by routing, by queueing systems and by other influences such as the processing delay of the network nodes. To separate these (at least three) contributions to delay variation, each must be measured separately. This work focuses on the delay variation caused by queueing systems, which has the main influence on the form of the probability density function.
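
    A minimal sketch of the de-jitter buffer sizing application mentioned above, under the simplifying assumption that the queueing-induced delay variation is exponentially distributed; the scale value and the target loss rate are invented illustrative numbers.

```python
# Sizing a de-jitter buffer from the empirical delay-variation distribution
# (synthetic exponential data; scale and loss target are invented).
import numpy as np

rng = np.random.default_rng(1)
delay_variation = rng.exponential(scale=5.0, size=10_000)   # milliseconds

target_loss = 1e-3   # tolerated fraction of packets arriving too late
buffer_ms = np.quantile(delay_variation, 1.0 - target_loss)
print(f"a de-jitter buffer of about {buffer_ms:.1f} ms keeps late packets below {target_loss:.1%}")
```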

  2. Decision making with consonant belief functions: Discrepancy resulting with the probability transformation method used

    Directory of Open Access Journals (Sweden)

    Cinicioglu Esma Nur

    2014-01-01

    Full Text Available Dempster-Shafer belief function theory can address a wider class of uncertainty than standard probability theory does, a fact that appeals to researchers in the operations research community looking for potential application areas. However, the lack of a decision theory for belief functions gives rise to the need for probability transformation methods in decision making. For the representation of statistical evidence, the class of consonant belief functions is used, which is not closed under Dempster's rule of combination but is closed under Walley's rule of combination. In this research, it is shown that the outcomes obtained using Dempster's and Walley's rules result in different probability distributions when the pignistic transformation is used, but in the same probability distribution when the plausibility transformation is used. This result shows that the choice of combination rule and probability transformation method may have a significant effect on decision making, since it may change which decision alternative is selected. The result is illustrated via an example of missile type identification.
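
    The following sketch shows the two probability transformations being compared, applied to a hypothetical consonant mass assignment over a three-element frame (the masses are invented for illustration); the outputs differ, which is the discrepancy the paper discusses.

```python
# Pignistic vs. plausibility transformation of a consonant belief function
# over a three-element frame (hypothetical masses).
frame = {"a", "b", "c"}
m = {frozenset("a"): 0.5, frozenset("ab"): 0.3, frozenset("abc"): 0.2}  # nested focal sets

def pignistic(m, frame):
    """BetP(x) spreads each mass evenly over the elements of its focal set."""
    return {x: sum(mass / len(A) for A, mass in m.items() if x in A) for x in frame}

def plausibility_transform(m, frame):
    """Normalize the singleton plausibilities Pl({x})."""
    pl = {x: sum(mass for A, mass in m.items() if x in A) for x in frame}
    z = sum(pl.values())
    return {x: v / z for x, v in pl.items()}

print("BetP:", pignistic(m, frame))
print("Pl_P:", plausibility_transform(m, frame))
# The two transformations disagree in general, which is why the choice of
# transformation can change the selected decision alternative.
```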

  3. Probability tables and gauss quadrature: application to neutron cross-sections in the unresolved energy range

    International Nuclear Information System (INIS)

    Ribon, P.; Maillard, J.M.

    1986-09-01

    The idea of describing neutron cross-section fluctuations by sets of discrete values, called "probability tables", was formulated some 15 years ago. We propose to define the probability tables from moments, by equating the moments of the actual cross-section distribution in a given energy range to the moments of the table. This definition introduces Padé approximants, orthogonal polynomials and Gauss quadrature. This mathematical basis applies very well to the total cross-section. Some difficulties appear when partial cross-sections are taken into account, linked to the ambiguity of the definition of multivariate Padé approximants. Nevertheless we propose solutions and choices which appear to be satisfactory. Comparisons are made with other definitions of probability tables and an example of the calculation of a mixture of nuclei is given. 18 refs
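
    A sketch of the moments-to-quadrature construction described above, recovering the orthogonal-polynomial recurrence from a Cholesky factorization of the Hankel moment matrix and solving the Jacobi eigenproblem (Golub-Welsch). The uniform distribution stands in for an actual cross-section distribution, and all names are illustrative.

```python
# Building an n-point "probability table" (nodes and weights) that matches
# the first 2n+1 moments of a distribution.
import numpy as np
from scipy.linalg import cholesky, eigh_tridiagonal, hankel

def gauss_from_moments(mom, n):
    """Return n nodes and weights matching moments mom[0], ..., mom[2n]."""
    M = hankel(mom[: n + 1], mom[n : 2 * n + 1])   # moment matrix M[i,j] = mom[i+j]
    R = cholesky(M)                                 # upper-triangular factor
    alpha = np.empty(n)                             # recurrence coefficients
    alpha[0] = R[0, 1] / R[0, 0]
    for k in range(1, n):
        alpha[k] = R[k, k + 1] / R[k, k] - R[k - 1, k] / R[k - 1, k - 1]
    beta = np.array([R[k, k] / R[k - 1, k - 1] for k in range(1, n)])
    nodes, vecs = eigh_tridiagonal(alpha, beta)     # Jacobi matrix eigenproblem
    weights = mom[0] * vecs[0, :] ** 2
    return nodes, weights

mom = [1.0 / (k + 1) for k in range(7)]   # moments of U(0, 1)
x, w = gauss_from_moments(mom, 3)
print(x)   # the 3-point Gauss-Legendre nodes mapped to [0, 1]
print(w)   # corresponding weights, summing to mom[0] = 1
```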

  4. Probability tables and gauss quadrature: application to neutron cross-sections in the unresolved energy range

    International Nuclear Information System (INIS)

    Ribon, P.; Maillard, J.M.

    1986-01-01

    The idea of describing neutron cross-section fluctuations by sets of discrete values, called probability tables, was formulated some 15 years ago. The authors propose to define the probability tables from moments, by equating the moments of the actual cross-section distribution in a given energy range to the moments of the table. This definition introduces Padé approximants, orthogonal polynomials and Gauss quadrature. This mathematical basis applies very well to the total cross-section. Some difficulties appear when partial cross-sections are taken into account, linked to the ambiguity of the definition of multivariate Padé approximants. Nevertheless the authors propose solutions and choices which appear to be satisfactory. Comparisons are made with other definitions of probability tables and an example of the calculation of a mixture of nuclei is given

  5. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted by pre-sampling points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper, and the probability of functional failure is estimated by combining the response surface method with the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
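
    A minimal sketch of the two-stage idea, pre-sampling the failure region and then building an importance density from those samples, applied to a toy two-variable limit state rather than the AP1000 model; the limit state, sample sizes and densities are assumptions.

```python
# Two-stage adaptive importance sampling on a toy limit state:
# failure when x0 + x1 > 4 under a standard bivariate normal.
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(0)

def g(x):                     # limit state: failure when g(x) > 0
    return x[:, 0] + x[:, 1] - 4.0

nominal = mvn(mean=np.zeros(2), cov=np.eye(2))

# Stage 1: pre-sampling to locate the failure region.
pre = nominal.rvs(200_000, random_state=rng)
fail = pre[g(pre) > 0]

# Stage 2: importance density fitted to the failure-region samples.
q = mvn(mean=fail.mean(axis=0), cov=np.cov(fail.T) + 1e-6 * np.eye(2))

xs = q.rvs(20_000, random_state=rng)
w = np.exp(nominal.logpdf(xs) - q.logpdf(xs))      # likelihood ratios
p_fail = np.mean(w * (g(xs) > 0))
print(f"P_f estimate: {p_fail:.2e}")   # exact: 1 - Phi(4/sqrt(2)) = 2.34e-3
```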

  6. Implementation of the probability table method in a continuous-energy Monte Carlo code system

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.

    1998-10-01

    RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5

  7. Critically Evaluated Energy Levels, Spectral Lines, Transition Probabilities, and Intensities of Singly Ionized Vanadium (V ii)

    Energy Technology Data Exchange (ETDEWEB)

    Saloman, Edward B. [Dakota Consulting, Inc., 1110 Bonifant Street, Suite 310, Silver Spring, MD 20910 (United States); Kramida, Alexander [National Institute of Standards and Technology, Gaithersburg, MD 20899 (United States)

    2017-08-01

    The energy levels, observed spectral lines, and transition probabilities of singly ionized vanadium, V ii, have been compiled. The experimentally derived energy levels belong to the configurations 3d{sup 4}, 3d{sup 3}ns (n = 4, 5, 6), 3d{sup 3}np and 3d{sup 3}nd (n = 4, 5), 3d{sup 3}4f, 3d{sup 2}4s{sup 2}, and 3d{sup 2}4s4p. Also included are values for some forbidden lines that may be of interest to the astrophysical community. Experimental Landé g-factors and leading percentages for the levels are included when available, as well as Ritz wavelengths calculated from the energy levels. Wavelengths and transition probabilities are reported for 3568 and 1896 transitions, respectively. From the list of observed wavelengths, 407 energy levels are determined. The observed intensities, normalized to a common scale, are provided. From the newly optimized energy levels, a revised value for the ionization energy is derived, 118,030(60) cm{sup -1}, corresponding to 14.634(7) eV. This is 130 cm{sup -1} higher than the previously recommended value from Iglesias et al.
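
    As a quick arithmetic check of the quoted ionization energy, using the standard conversion 1 eV ≈ 8065.544 cm⁻¹:

```python
# 118,030(60) cm^-1 to eV, with 1 eV corresponding to about 8065.544 cm^-1:
print(118_030 / 8065.544)   # -> 14.634 eV, as quoted
```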

  8. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    Science.gov (United States)

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  9. Probability density of wave function of excited photoelectron: understanding XANES features

    Czech Academy of Sciences Publication Activity Database

    Šipr, Ondřej

    2001-01-01

    Roč. 8, - (2001), s. 232-234 ISSN 0909-0495 R&D Projects: GA ČR GA202/99/0404 Institutional research plan: CEZ:A02/98:Z1-010-914 Keywords : XANES * PED - probability density of wave function Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.519, year: 2001

  10. Bounds for the probability distribution function of the linear ACD process

    OpenAIRE

    Fernandes, Marcelo

    2003-01-01

    This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.

  11. A new formulation of the probability density function in random walk models for atmospheric dispersion

    DEFF Research Database (Denmark)

    Falk, Anne Katrine Vinther; Gryning, Sven-Erik

    1997-01-01

    In this model for atmospheric dispersion particles are simulated by the Langevin Equation, which is a stochastic differential equation. It uses the probability density function (PDF) of the vertical velocity fluctuations as input. The PDF is constructed as an expansion after Hermite polynomials...
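
    A sketch of the underlying approach, integrating the Langevin equation with an Euler-Maruyama step for an ensemble of particles. For simplicity the velocity PDF here is Gaussian and homogeneous rather than the Hermite-polynomial expansion of the paper, and the time scale and velocity variance are invented.

```python
# Euler-Maruyama integration of the Langevin equation for an ensemble of
# particles dispersing vertically (Gaussian, homogeneous stand-in PDF).
import numpy as np

rng = np.random.default_rng(2)
n, steps, dt = 5_000, 2_000, 0.05
T_L, sigma_w = 100.0, 0.5        # Lagrangian time scale (s), velocity std (m/s)

w = rng.normal(0.0, sigma_w, n)  # initial velocities drawn from the PDF
z = np.zeros(n)                  # particle heights
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), n)
    w += -w / T_L * dt + np.sqrt(2.0 * sigma_w**2 / T_L) * dW
    z += w * dt
print("plume spread sigma_z =", z.std())
```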

  12. I. Fission Probabilities, Fission Barriers, and Shell Effects. II. Particle Structure Functions

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Kexing [Univ. of California, Berkeley, CA (United States)

    1999-05-01

    In Part I, fission excitation functions of the osmium isotopes 185,186,187,189Os produced in 3He + 182,183,184,186W reactions, and of the polonium isotopes 209,210,211,212Po produced in 3He/4He + 206,207,208Pb reactions, were measured with high precision. These excitation functions have been analyzed in detail within the transition-state formalism, and the fission barriers and shell effects of the corresponding nuclei extracted from the analyses. A novel approach has been developed to determine upper limits on the transient time of the fission process, constrained by the fission probabilities of neighboring isotopes. The upper limits on the transient time set with this new method are 15×10⁻²¹ s and 25×10⁻²¹ s for the Os and Po compound nuclei, respectively. In Part II, we report on a search for evidence of optical modulations in the energy spectra of alpha particles emitted from hot compound nuclei. The optical modulations are expected to arise from the α-particle's interaction with the rest of the nucleus as the particle prepares to exit. Some evidence for the modulations has been observed in the alpha spectra measured in the 3He-induced reactions, 3He + natAg in particular. Identifying the modulations involves a technique that subtracts the bulk statistical background from the measured alpha spectra, so that the modulations become visible in the residuals. Due to insufficient knowledge of the background spectra, however, the presented evidence should only be regarded as preliminary and tentative.

  13. Probabilities and energies to obtain the counting efficiency of electron-capture nuclides, KLMN model

    International Nuclear Information System (INIS)

    Casas Galiano, G.; Grau Malonda, A.

    1994-01-01

    An intelligent computer program has been developed to obtain the mathematical formulae to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of the M-electron-capture in the counting efficiency when the atomic number of the nuclide is high

  14. Probability distributions in conservative energy exchange models of multiple interacting agents

    International Nuclear Information System (INIS)

    Scafetta, Nicola; West, Bruce J

    2007-01-01

    Herein we study energy exchange models of multiple interacting agents that conserve energy in each interaction. The models differ in the rules that regulate the energy exchange and in boundary effects. We find a variety of stochastic behaviours: depending on the interaction rules, the equilibrium energy probability distributions include not only exponential distributions, such as the familiar Maxwell-Boltzmann-Gibbs distribution of an elastically colliding ideal particle gas, but also uniform distributions, truncated exponential distributions, Gaussian distributions, Gamma distributions, inverse power-law distributions, mixed exponential and inverse power-law distributions, and evolving distributions. This wide variety of distributions should be of value in determining the underlying mechanisms generating the statistical properties of complex phenomena, including those to be found in complex chemical reactions
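
    A minimal simulation of one such model, the conservative random-split rule, under which the equilibrium energy distribution is known to approach the exponential (Boltzmann-Gibbs) form; the agent count and step count are arbitrary choices.

```python
# Conservative pairwise energy exchange: two random agents pool their
# energy and split it uniformly; energy is conserved in each interaction.
import numpy as np

rng = np.random.default_rng(3)
N, steps = 10_000, 500_000
E = np.ones(N)                   # every agent starts with unit energy

for _ in range(steps):
    i, j = rng.integers(0, N, 2)
    if i == j:
        continue
    total = E[i] + E[j]
    r = rng.random()
    E[i], E[j] = r * total, (1.0 - r) * total

hist, _ = np.histogram(E, bins=30, range=(0, 6), density=True)
print(np.round(hist[:5], 3))     # roughly exp(-E), since <E> = 1
```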

  15. Theoretical determination of gamma spectrometry systems efficiency based on probability functions. Application to self-attenuation correction factors

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, Manuel, E-mail: manuel.barrera@uca.es [Escuela Superior de Ingeniería, University of Cadiz, Avda, Universidad de Cadiz 10, 11519 Puerto Real, Cadiz (Spain); Suarez-Llorens, Alfonso [Facultad de Ciencias, University of Cadiz, Avda, Rep. Saharaui s/n, 11510 Puerto Real, Cadiz (Spain); Casas-Ruiz, Melquiades; Alonso, José J.; Vidal, Juan [CEIMAR, University of Cadiz, Avda, Rep. Saharaui s/n, 11510 Puerto Real, Cádiz (Spain)

    2017-05-11

    A generic theoretical methodology for the calculation of the efficiency of gamma spectrometry systems is introduced in this work. The procedure is valid for any type of source and detector and can be applied to determine the full-energy-peak and the total efficiency of any source-detector system. The methodology is based on the idea of an underlying probability of detection, which describes the physical model for the detection of the gamma radiation in the particular situation studied. This probability depends explicitly on the direction of the gamma radiation, and this dependence allows the development of more realistic and complex models than the traditional models based on point-source integration. The probability function employed in practice must reproduce the relevant characteristics of the detection process occurring in the particular situation studied. Once the probability is defined, the efficiency calculations can in general be performed using numerical methods; Monte Carlo integration is especially useful when complex probability functions are used. The methodology can be used for the direct determination of the efficiency, and also for the calculation of corrections that require this determination, as is the case for coincidence-summing, geometric or self-attenuation corrections. In particular, we have applied the procedure to obtain some of the classical self-attenuation correction factors usually employed to correct for the sample attenuation of cylindrical-geometry sources. The methodology clarifies the theoretical basis and the approximations associated with each factor, by making explicit the probability which is generally hidden and implicit in each model. It is shown that most of these self-attenuation correction factors can be derived from a common underlying probability, whose complexity grows as it reproduces the detection process more precisely
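
    A sketch of the direction-dependent probability idea with Monte Carlo integration, for the simplest possible case: a point source on the axis of a disk detector, where the geometric detection probability has a closed-form solid-angle check. The geometry values are invented.

```python
# Monte Carlo estimate of detection efficiency as the average of a
# direction-dependent detection probability over isotropic emission.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
cos_t = rng.uniform(-1.0, 1.0, n)          # isotropic emission directions
d, R = 5.0, 2.0                            # source-to-disk distance and disk radius

sin_t = np.sqrt(1.0 - cos_t**2)
hit = (cos_t > 0) & (d * sin_t < R * cos_t)   # ray meets the disk (tan(theta) < R/d)

efficiency = hit.mean()                    # MC estimate of (1/4pi) * integral of p dOmega
exact = 0.5 * (1.0 - d / np.hypot(d, R))   # on-axis solid-angle formula
print(efficiency, exact)
```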

  16. ERG review of containment failure probability and repository functional design criteria

    International Nuclear Information System (INIS)

    Gopal, S.

    1986-06-01

    The Engineering Review Group (ERG) was established by the Office of Nuclear Waste Isolation (ONWI) to help evaluate engineering-related issues in the US Department of Energy's nuclear waste repository program. The June 1984 meeting of the ERG considered two topics: (1) statistical probability for containment of nuclides within the waste package and (2) repository design criteria. This report documents the ERG's comments and recommendations on these two subjects and the ONWI response to the specific points raised by ERG

  17. Theoretical Study of Energy Levels and Transition Probabilities of Boron Atom

    Science.gov (United States)

    Tian Yi, Zhang; Neng Wu, Zheng

    2009-08-01

    Full Text PDF Although the electron configuration of the boron atom is simple and boron has long been of interest to many researchers, theoretical studies of the properties of B I are not systematic: only a few results have been reported for the energy levels of highly excited states, and transition measurements are generally restricted to transitions involving the ground state and low excited states, without considering fine-structure effects, providing only multiplet results; values for transitions between highly excited states are seldom given. In this article, using the weakest-bound-electron potential model theory, calculations of the energy levels of five series are performed, and with the same method we give the transition probabilities between excited states, taking fine-structure effects into account. The comprehensive set of calculations attempted in this paper could be of some value to workers in the field because of the lack of published calculations for the B I system. Perturbations from foreign perturbers are taken into account in studying the energy levels. Good agreement between our results and the accepted values taken from NIST has been obtained. We also report some values of energy levels and transition probabilities not present in the NIST databases.

  18. Outage Probability Analysis in Power-Beacon Assisted Energy Harvesting Cognitive Relay Wireless Networks

    Directory of Open Access Journals (Sweden)

    Ngoc Phuc Le

    2017-01-01

    Full Text Available We study the performance of the secondary relay system in a power-beacon (PB) assisted energy harvesting cognitive relay wireless network. In our system model, a secondary source node and a relay node first harvest energy from distributed PBs. Then, the source node transmits its data to the destination node with the help of the relay node. The fading coefficients of the links from the PBs to the source node and relay node are assumed to be independent but not necessarily identically distributed (i.n.i.d.) Nakagami-m random variables. We derive exact expressions for the power outage probability and the channel outage probability, and based on these we analyze the total outage probability of the secondary relay system. Asymptotic analysis is also performed, which provides insights into the system behavior. Moreover, we evaluate the impact of the primary network on the performance of the secondary network, with respect to the tolerable interference threshold at the primary receiver as well as the interference introduced by the primary transmitter at the secondary source and relay nodes. Simulation results are provided to validate the analysis.
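
    A simplified Monte Carlo sketch of the channel outage part of such an analysis, for a decode-and-forward dual hop with i.n.i.d. Nakagami-m fading; the power outage from energy harvesting and the interference constraints are omitted, and all parameter values are invented.

```python
# Channel outage of a decode-and-forward dual hop with i.n.i.d. Nakagami-m
# fading: squared Nakagami-m envelopes are Gamma(m, 1/m) power gains.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
m1, m2 = 2.0, 3.0        # Nakagami-m parameters of the two hops
snr_bar = 10.0           # average SNR per hop
gamma_th = 2.0           # SNR threshold for outage

g1 = rng.gamma(m1, 1.0 / m1, n) * snr_bar
g2 = rng.gamma(m2, 1.0 / m2, n) * snr_bar

p_out = np.mean(np.minimum(g1, g2) < gamma_th)   # either hop failing causes outage
print(f"channel outage probability = {p_out:.4f}")
```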

  19. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    International Nuclear Information System (INIS)

    Bakosi, Jozsef; Ristorcelli, Raymond J.

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.

  20. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    Science.gov (United States)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  1. Electron-trapping probability in natural dosemeters as a function of irradiation temperature

    DEFF Research Database (Denmark)

    Wallinga, J.; Murray, A.S.; Wintle, A.G.

    2002-01-01

    The electron-trapping probability in OSL traps as a function of irradiation temperature is investigated for sedimentary quartz and feldspar. A dependency was found for both minerals; this phenomenon could give rise to errors in dose estimation when the irradiation temperature used in laboratory procedures is different from that in the natural environment. No evidence was found for the existence of shallow-trap saturation effects that could give rise to a dose-rate dependency of electron trapping.

  2. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    OpenAIRE

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2010-01-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth & Pope with Durbin's method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous ...

  3. Dictionary-Based Stochastic Expectation–Maximization for SAR Amplitude Probability Density Function Estimation

    OpenAIRE

    Moser , Gabriele; Zerubia , Josiane; Serpico , Sebastiano B.

    2006-01-01

    In remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of the pixel intensities. This paper deals with the problem of probability density function (pdf) estimation in the context of synthetic aperture radar (SAR) amplitude data analysis. Several theoretical and heuristic models for the pdfs of SAR data have been proposed in the literature, which have been proved to be effective for different land-cov...

  4. Off-critical local height probabilities on a plane and critical partition functions on a cylinder

    Directory of Open Access Journals (Sweden)

    Omar Foda

    2018-03-01

    Full Text Available We compute off-critical local height probabilities in regime-III restricted solid-on-solid models in a 4N-quadrant spiral geometry, with periodic boundary conditions in the angular direction, and fixed boundary conditions in the radial direction, as a function of N, the winding number of the spiral, and τ, the departure from criticality of the model, and observe that the result depends only on the product Nτ. In the limit N→1, τ→τ0, such that τ0 is finite, we recover the off-critical local height probability on a plane, τ0-away from criticality. In the limit N→∞, τ→0, such that Nτ=τ0 is finite, and following a conformal transformation, we obtain a critical partition function on a cylinder of aspect-ratio τ0. We conclude that the off-critical local height probability on a plane, τ0-away from criticality, is equal to a critical partition function on a cylinder of aspect-ratio τ0, in agreement with a result of Saleur and Bauer.

  5. The H-Function and Probability Density Functions of Certain Algebraic Combinations of Independent Random Variables with H-Function Probability Distribution

    Science.gov (United States)

    1981-05-01


  6. Energies, wavelengths, and transition probabilities for Ge-like Kr, Mo, Sn, and Xe ions

    International Nuclear Information System (INIS)

    Nagy, O.; El Sayed, Fatma

    2012-01-01

    Energy levels, wavelengths, transition probabilities, and oscillator strengths have been calculated for Ge-like Kr, Mo, Sn, and Xe ions among the fine-structure levels of terms belonging to the ([Ar]3d¹⁰)4s²4p², ([Ar]3d¹⁰)4s4p³, ([Ar]3d¹⁰)4s²4p4d, and ([Ar]3d¹⁰)4p⁴ configurations. The fully relativistic multiconfiguration Dirac-Fock method, taking both correlations within the n = 4 complex and quantum electrodynamic effects into account, has been used in the calculations. The results are compared with the available experimental and other theoretical results.

  7. Probabilities and energies to obtain the counting efficiency of electron-capture nuclides. KLMN model

    International Nuclear Information System (INIS)

    Galiano, G.; Grau, A.

    1994-01-01

    An intelligent computer program has been developed to obtain the mathematical formulae to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of M-electron capture on the counting efficiency when the atomic number of the nuclide is high. (Author)

  8. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, for example in surface reconstruction, edge detection, or optical flow estimation. The energy functions used in regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations and invariance under reparameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and reparameterization.

  9. Fine-structure energy levels, oscillator strengths and transition probabilities in Ni XVI

    International Nuclear Information System (INIS)

    Deb, N.C.; Msezane, A.Z.

    2001-01-01

    Fine-structure energy levels relative to the ground state, oscillator strengths and transition probabilities for transitions among the lowest 40 fine-structure levels belonging to the configurations 3s²3p, 3s3p², 3s²3d, 3p³ and 3s3p3d of Ni XVI are calculated using a large-scale CI expansion in the program CIV3 of Hibbert. Relativistic effects are included through the Breit-Pauli approximation via spin-orbit, spin-other-orbit, spin-spin, Darwin and mass-correction terms. The existing discrepancies between calculated and measured values for many of the relative energy positions are resolved in the present calculation, which yields excellent agreement with measurement. Also, many of our oscillator strengths for allowed and intercombination transitions are in very good agreement with the data recommended by the National Institute of Standards and Technology (NIST). (orig.)

  10. Effect of energy level sequences and neutron–proton interaction on α-particle preformation probability

    International Nuclear Information System (INIS)

    Ismail, M.; Adel, A.

    2013-01-01

    A realistic density-dependent nucleon-nucleon (NN) interaction with a finite-range exchange part, which reproduces the nuclear matter saturation curve and the energy dependence of the nucleon-nucleus optical model potential, is used to calculate the preformation probability, Sα, of the α-particle in α-decay from different isotones with neutron numbers N = 124, 126, 128, 130 and 132. We studied the variation of Sα with the proton number, Z, for each isotone chain and found the effect of the neutron and proton energy levels of the parent nucleus on the behavior of the α-particle preformation probability. We found that Sα increases regularly with the proton number when the proton pair in the α-particle is emitted from the same level and the neutron level sequence is not changed during the Z-variation. In this case the neutron-proton (n-p) interaction between the two levels contributing to the emission process is very small. On the contrary, if the proton or neutron level sequence changes during the emission process, Sα behaves irregularly; the irregular behavior increases if both the proton and neutron levels change. This behavior is accompanied by a change in, or a rapid increase of, the strength of the n-p interaction

  11. Wave functions and two-electron probability distributions of the Hooke's-law atom and helium

    International Nuclear Information System (INIS)

    O'Neill, Darragh P.; Gill, Peter M. W.

    2003-01-01

    The Hooke's-law atom (hookium) provides an exactly soluble model for a two-electron atom in which the nuclear-electron Coulombic attraction has been replaced by a harmonic one. Starting from the known exact position-space wave function for the ground state of hookium, we present the momentum-space wave function. We also look at the intracules, two-electron probability distributions, for hookium in position, momentum, and phase space. These are compared with the Hartree-Fock results and the Coulomb holes (the difference between the exact and Hartree-Fock intracules) in position, momentum, and phase space are examined. We then compare these results with analogous results for the ground state of helium using a simple, explicitly correlated wave function

  12. Updated greenhouse gas and criteria air pollutant emission factors and their probability distribution functions for electricity generating units

    International Nuclear Information System (INIS)

    Cai, H.; Wang, M.; Elgowainy, A.; Han, J.

    2012-01-01

    Greenhouse gas (CO2, CH4 and N2O, hereinafter GHG) and criteria air pollutant (CO, NOx, VOC, PM10, PM2.5 and SOx, hereinafter CAP) emission factors for various types of power plants burning various fuels with different technologies are important upstream parameters for estimating life-cycle emissions associated with alternative vehicle/fuel systems in the transportation sector, especially electric vehicles. The emission factors are typically expressed in grams of GHG or CAP per kWh of electricity generated by a specific power generation technology. This document describes our approach for updating and expanding GHG and CAP emission factors in the GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model developed at Argonne National Laboratory (see Wang 1999 and the GREET website at http://greet.es.anl.gov/main) for various power generation technologies. These GHG and CAP emissions are used to estimate the impact of electricity use by stationary and transportation applications on their fuel-cycle emissions. The electricity generation mixes and the fuel shares attributable to various combustion technologies at the national, regional and state levels are also updated in this document. The energy conversion efficiencies of electric generating units (EGUs) by fuel type and combustion technology are calculated on the basis of the lower heating values of each fuel, to be consistent with the basis used in GREET for transportation fuels. On the basis of the updated GHG and CAP emission factors and energy efficiencies of EGUs, the probability distribution functions (PDFs), which are functions that describe the relative likelihood for the emission factors and energy efficiencies as random variables to take on a given value by the integral of their own probability distributions, are updated using best-fit statistical curves to characterize the uncertainties associated with GHG and CAP emissions in life-cycle modeling with GREET.

  13. Updated greenhouse gas and criteria air pollutant emission factors and their probability distribution functions for electricity generating units

    Energy Technology Data Exchange (ETDEWEB)

    Cai, H.; Wang, M.; Elgowainy, A.; Han, J. (Energy Systems)

    2012-07-06

    Greenhouse gas (CO{sub 2}, CH{sub 4} and N{sub 2}O, hereinafter GHG) and criteria air pollutant (CO, NO{sub x}, VOC, PM{sub 10}, PM{sub 2.5} and SO{sub x}, hereinafter CAP) emission factors for various types of power plants burning various fuels with different technologies are important upstream parameters for estimating life-cycle emissions associated with alternative vehicle/fuel systems in the transportation sector, especially electric vehicles. The emission factors are typically expressed in grams of GHG or CAP per kWh of electricity generated by a specific power generation technology. This document describes our approach for updating and expanding GHG and CAP emission factors in the GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model developed at Argonne National Laboratory (see Wang 1999 and the GREET website at http://greet.es.anl.gov/main) for various power generation technologies. These GHG and CAP emissions are used to estimate the impact of electricity use by stationary and transportation applications on their fuel-cycle emissions. The electricity generation mixes and the fuel shares attributable to various combustion technologies at the national, regional and state levels are also updated in this document. The energy conversion efficiencies of electric generating units (EGUs) by fuel type and combustion technology are calculated on the basis of the lower heating values of each fuel, to be consistent with the basis used in GREET for transportation fuels. On the basis of the updated GHG and CAP emission factors and energy efficiencies of EGUs, the probability distribution functions (PDFs), which are functions that describe the relative likelihood for the emission factors and energy efficiencies as random variables to take on a given value by the integral of their own probability distributions, are updated using best-fit statistical curves to characterize the uncertainties associated with GHG and CAP emissions in life-cycle modeling with GREET.

  14. Calculation of probability density functions for temperature and precipitation change under global warming

    International Nuclear Information System (INIS)

    Watterson, Ian G.

    2007-01-01

    Full text: The IPCC Fourth Assessment Report (Meehl et al. 2007) presents multi-model means of the CMIP3 simulations as projections of the global climate change over the 21st century under several SRES emission scenarios. To assess the possible range of change for Australia based on the CMIP3 ensemble, we can follow Whetton et al. (2005) and use the 'pattern scaling' approach, which separates the uncertainty in the global mean warming from that in the local change per degree of warming. This study presents several ways of representing these two factors as probability density functions (PDFs). The beta distribution, a smooth, bounded function allowing skewness, is found to provide a useful representation of the range of CMIP3 results. A weighting of models based on their skill in simulating seasonal means in the present climate over Australia is included. Dessai et al. (2005) and others have used Monte Carlo sampling to recombine such global warming and scaled change factors into values of net change. Here, we use a direct integration of the product across the joint probability space defined by the two PDFs. The result is a cumulative distribution function (CDF) for change, for each variable, location, and season. The median of this distribution provides a best estimate of change, while the 10th and 90th percentiles represent a likely range. The probability of exceeding a specified threshold can also be extracted from the CDF. The presentation focuses on changes in Australian temperature and precipitation at 2070 under the A1B scenario. However, the assumption of linearity behind pattern scaling allows results for different scenarios and times to be obtained simply. In the case of precipitation, which must remain non-negative, a simple modification of the calculations (based on decreases being exponential with warming) is used to avoid unrealistic results. These approaches are currently being used for the new CSIRO/Bureau of Meteorology climate projections
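
    A sketch of the direct-integration step described above: two beta PDFs, one for global warming and one for local change per degree, are combined into a CDF of net change. The beta parameters and ranges are invented for illustration, not the fitted CSIRO values.

```python
# Direct integration of two PDFs into a CDF of net change: warming G (deg C)
# times local scaling S (% change per deg C), both beta-distributed.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

G = stats.beta(a=3, b=4, loc=1.0, scale=3.0)     # warming in [1, 4] deg C
S = stats.beta(a=4, b=4, loc=-2.0, scale=8.0)    # scaling in [-2, 6] % per deg C

def change_cdf(c, ng=400):
    """P(G*S <= c); G > 0 on its support, so {G*S <= c} = {S <= c/G}."""
    g = np.linspace(G.ppf(1e-6), G.ppf(1.0 - 1e-6), ng)
    return trapezoid(S.cdf(c / g) * G.pdf(g), g)

grid = np.linspace(-10.0, 25.0, 300)
cdf = np.array([change_cdf(c) for c in grid])
p10, p50, p90 = grid[np.searchsorted(cdf, [0.1, 0.5, 0.9])]
print(f"best estimate {p50:.1f}%, likely range [{p10:.1f}%, {p90:.1f}%]")
```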

  15. Minimal nuclear energy density functional

    Science.gov (United States)

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas

    2018-04-01

    We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV , two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ɛr=0.022 fm and a standard deviation σr=0.025 fm . SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN ) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body (NNN ) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of the homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  16. Exact joint density-current probability function for the asymmetric exclusion process.

    Science.gov (United States)

    Depken, Martin; Stinchcombe, Robin

    2004-07-23

    We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra.

  17. Charged-particle thermonuclear reaction rates: II. Tables and graphs of reaction rates and probability density functions

    International Nuclear Information System (INIS)

    Iliadis, C.; Longland, R.; Champagne, A.E.; Coc, A.; Fitzgerald, R.

    2010-01-01

    Numerical values of charged-particle thermonuclear reaction rates for nuclei in the A=14 to 40 region are tabulated. The results are obtained using a method, based on Monte Carlo techniques, that has been described in the preceding paper of this issue (Paper I). We present a low rate, median rate and high rate which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. The meaning of these quantities is in general different from the commonly reported, but statistically meaningless expressions, 'lower limit', 'nominal value' and 'upper limit' of the total reaction rate. In addition, we approximate the Monte Carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the lognormal parameters μ and σ at each temperature. We also provide a quantitative measure (Anderson-Darling test statistic) for the reliability of the lognormal approximation. The user can implement the approximate lognormal reaction rate probability density functions directly in a stellar model code for studies of stellar energy generation and nucleosynthesis. For each reaction, the Monte Carlo reaction rate probability density functions, together with their lognormal approximations, are displayed graphically for selected temperatures in order to provide a visual impression. Our new reaction rates are appropriate for bare nuclei in the laboratory. The nuclear physics input used to derive our reaction rates is presented in the subsequent paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
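
    The lognormal parameters tabulated this way follow directly from the quantiles, since the 0.16, 0.50 and 0.84 quantiles of a lognormal equal exp(μ−σ), exp(μ) and exp(μ+σ); a minimal sketch with made-up rate values:

```python
# Lognormal parameters from tabulated 0.16/0.50/0.84 quantiles of a
# reaction rate at one temperature (invented numbers).
import numpy as np

low, median, high = 1.2e-9, 2.0e-9, 3.4e-9

mu = np.log(median)                 # lognormal median = exp(mu)
sigma = 0.5 * np.log(high / low)    # since high/low = exp(2*sigma)
print("mu =", mu, " sigma =", sigma)
print("rate factor uncertainty f.u. =", np.exp(sigma))
```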

  18. Noise-level determination for discrete spectra with Gaussian or Lorentzian probability density functions

    International Nuclear Information System (INIS)

    Moriya, Netzer

    2010-01-01

    A method, based on binomial filtering, to estimate the noise level of an arbitrary, smoothed pure signal, contaminated with an additive, uncorrelated noise component is presented. If the noise characteristics of the experimental spectrum are known, as for instance the type of the corresponding probability density function (e.g., Gaussian), the noise properties can be extracted. In such cases, both the noise level, as may arbitrarily be defined, and a simulated white noise component can be generated, such that the simulated noise component is statistically indistinguishable from the true noise component present in the original signal. In this paper we present a detailed analysis of the noise level extraction when the additive noise is Gaussian or Lorentzian. We show that the statistical parameters in these cases (mainly the variance and the half width at half maximum, respectively) can directly be obtained from the experimental spectrum even when the pure signal is erratic. Further discussion is given for cases where the noise probability density function is initially unknown.

  19. Audio Query by Example Using Similarity Measures between Probability Density Functions of Features

    Directory of Open Access Journals (Sweden)

    Marko Helén

    2010-01-01

    Full Text Available This paper proposes a query by example system for generic audio. We estimate the similarity of the example signal and the samples in the queried database by calculating the distance between the probability density functions (pdfs of their frame-wise acoustic features. Since the features are continuous valued, we propose to model them using Gaussian mixture models (GMMs or hidden Markov models (HMMs. The models parametrize each sample efficiently and retain sufficient information for similarity measurement. To measure the distance between the models, we apply a novel Euclidean distance, approximations of Kullback-Leibler divergence, and a cross-likelihood ratio test. The performance of the measures was tested in simulations where audio samples are automatically retrieved from a general audio database, based on the estimated similarity to a user-provided example. The simulations show that the distance between probability density functions is an accurate measure for similarity. Measures based on GMMs or HMMs are shown to produce better results than that of the existing methods based on simpler statistics or histograms of the features. A good performance with low computational cost is obtained with the proposed Euclidean distance.
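
    When each sample is modelled by a single Gaussian rather than a full GMM or HMM, the Kullback-Leibler divergence has a closed form, which gives a flavour of the similarity measures used; the feature matrices below are synthetic stand-ins for frame-wise audio features.

```python
# Closed-form KL divergence between single-Gaussian models of two audio
# samples' frame-wise features (synthetic 12-dimensional features).
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) in closed form."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

rng = np.random.default_rng(6)
feats_a = rng.normal(0.0, 1.0, (500, 12))    # e.g., 12 MFCCs per frame
feats_b = rng.normal(0.3, 1.1, (500, 12))
m_a, S_a = feats_a.mean(axis=0), np.cov(feats_a.T)
m_b, S_b = feats_b.mean(axis=0), np.cov(feats_b.T)

# Symmetrized divergence as the similarity distance between the samples:
d = kl_gaussian(m_a, S_a, m_b, S_b) + kl_gaussian(m_b, S_b, m_a, S_a)
print("symmetrized KL distance:", d)
```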

  20. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    Science.gov (United States)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based

  1. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    Science.gov (United States)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.

  2. Skin damage probabilities using fixation materials in high-energy photon beams

    International Nuclear Information System (INIS)

    Carl, J.; Vestergaard, A.

    2000-01-01

    Patient fixation devices, such as thermoplastic masks, carbon-fibre support plates and polystyrene bead vacuum cradles, are used to reproduce patient positioning in radiotherapy. Consequently, low-density materials may be introduced into high-energy photon beams. The aim of this study was to measure the increase in skin dose when low-density materials are present and to calculate the radiobiological consequences in terms of probabilities of early and late skin damage. An experimental thin-windowed plane-parallel ion chamber was used. Skin doses were measured using various overlying low-density fixation materials. A fixed geometry of a 10 x 10 cm field, an SSD of 100 cm and photon energies of 4, 6 and 10 MV on Varian Clinac 2100C accelerators was used for all measurements. The radiobiological consequences of introducing these materials into high-energy photon beams were evaluated in terms of early and late damage of the skin, based on the measured surface doses and the LQ model. The experimental ion chamber gave results consistent with other studies. A relationship between skin dose and material thickness in mg/cm² was established and used to calculate skin doses in scenarios assuming radiotherapy treatment with opposed fields. Conventional radiotherapy may apply mid-point doses of up to 60-66 Gy in daily 2-Gy fractions with opposed fields. Using thermoplastic fixation and high-energy photons as low as 4 MV does increase the dose to the skin considerably. However, for thermoplastic materials with thickness less than 100 mg/cm², skin doses are comparable with those produced by variations in source-to-skin distance, field size or blocking trays within clinical treatment set-ups. The use of polystyrene cradles and carbon-fibre materials with thickness less than 100 mg/cm² should be avoided at 4 MV at doses above 54-60 Gy. (author)

  3. Development and evaluation of probability density functions for a set of human exposure factors

    Energy Technology Data Exchange (ETDEWEB)

    Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.

    1999-06-01

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999 will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  5. Technical report. The application of probability-generating functions to linear-quadratic radiation survival curves.

    Science.gov (United States)

    Kendal, W S

    2000-04-01

    The aim was to illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D is the dose and α and β are radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
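
    The composite construction is easy to verify numerically: Bernoulli thinning of a Poisson number of repairable lesions is again Poisson, so the zero-lesion (survival) probability reproduces exp(-(αD + βD²)). The rate constants below are arbitrary illustrative values, chosen so that β equals the product of the repairable-lesion rate and the conversion rate.

      import numpy as np

      rng = np.random.default_rng(0)
      alpha, beta, D, n = 0.15, 0.05, 2.0, 200_000

      # Irreparable damage: Poisson with mean proportional to dose (alpha * D).
      irreparable = rng.poisson(alpha * D, size=n)

      # Repairable damage: Poisson lesions, each lethally converted with
      # probability proportional to dose (illustrative constants c and k).
      c, k = 0.5, 0.1
      repairable = rng.poisson(c * D, size=n)
      converted = rng.binomial(repairable, k * D)   # Bernoulli thinning
      # note beta * D^2 = (c * D) * (k * D), i.e. beta = c * k

      lethal = irreparable + converted
      print("simulated survival:", np.mean(lethal == 0))
      print("LQ prediction:     ", np.exp(-(alpha * D + beta * D**2)))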

  6. Examining the association between male circumcision and sexual function: evidence from a British probability survey.

    Science.gov (United States)

    Homfray, Virginia; Tanton, Clare; Mitchell, Kirstin R; Miller, Robert F; Field, Nigel; Macdowall, Wendy; Wellings, Kaye; Sonnenberg, Pam; Johnson, Anne M; Mercer, Catherine H

    2015-07-17

    Despite biological advantages of male circumcision in reducing HIV/sexually transmitted infection acquisition, concern is often expressed that it may reduce sexual enjoyment and function. We examine the association between circumcision and sexual function among sexually active men in Britain using data from Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3). Natsal-3 asked about circumcision and included a validated measure of sexual function, the Natsal-SF, which takes into account not only sexual difficulties but also the relationship context and the overall level of satisfaction. Natsal-3 is a stratified probability survey of 6293 men and 8869 women aged 16-74 years, resident in Britain, undertaken in 2010-2012 using computer-assisted face-to-face interviewing, with computer-assisted self-interview for the more sensitive questions. Logistic regression was used to calculate odds ratios (ORs) to examine the association between reporting male circumcision and aspects of sexual function among sexually active men (n = 4816). The prevalence of male circumcision in Britain was 20.7% [95% confidence interval (CI): 19.3-21.8]. There was no association between male circumcision and being in the lowest quintile of scores for the Natsal-SF, an indicator of poorer sexual function (adjusted OR: 0.95, 95% CI: 0.76-1.18). Circumcised men were as likely as uncircumcised men to report the specific sexual difficulties asked about in Natsal-3, except that a larger proportion of circumcised men reported erectile difficulties. This association was of borderline statistical significance after adjusting for age and relationship status (adjusted OR: 1.27, 95% CI: 0.99-1.63). Data from a large, nationally representative British survey suggest that circumcision is not associated with men's overall sexual function at a population level.

  7. Probability density functions of photochemicals over a coastal area of Northern Italy

    International Nuclear Information System (INIS)

    Georgiadis, T.; Fortezza, F.; Alberti, L.; Strocchi, V.; Marani, A.; Dal Bo', G.

    1998-01-01

    The present paper surveys the findings of experimental studies and analyses of statistical probability density functions (PDFs) applied to air pollutant concentrations to provide an interpretation of the ground-level distributions of photochemical oxidants in the coastal area of Ravenna (Italy). The atmospheric-pollution data set was collected from the local environmental monitoring network for the period 1978-1989. Results suggest that the statistical distribution of surface ozone, once normalised over the solar radiation PDF for the whole measurement period, follows a log-normal law, as found for other pollutants. Although the Weibull distribution also offers a good fit to the experimental data, the area's meteorological features seem to favour the former distribution once the statistical index estimates have been analysed. Local transport phenomena are discussed to explain the data tail trends.

  8. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions.

    Science.gov (United States)

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  9. Power probability density function control and performance assessment of a nuclear research reactor

    International Nuclear Information System (INIS)

    Abharian, Amir Esmaeili; Fadaei, Amir Hosein

    2014-01-01

    Highlights: • In this paper, the performance assessment of a static PDF control system is discussed. • The reactor PDF model is set up based on B-spline functions. • The neutronic and thermal-hydraulic equations are solved concurrently by a reformed Hansen's method. • A principle of performance assessment is put forward for the PDF control of the nuclear reactor. - Abstract: One of the main issues in controlling a system is to keep track of the conditions of the system function. The performance condition of the system should be inspected continuously, to keep the system in reliable working condition. In this study, the nuclear reactor is considered as a complicated system and a principle of performance assessment is used for analyzing the performance of the power probability density function (PDF) control of a nuclear research reactor. First, the model of the power PDF is set up; then the controller is designed to make the power PDF track the given shape, which makes the reactor a closed-loop system. The operating data of the closed-loop reactor are used to assess the control performance with the performance assessment criteria. The modeling, controller design and performance assessment of the power PDF are all applied to the control of Tehran Research Reactor (TRR) power in a nuclear process. In this paper, the performance assessment of the static PDF control system is discussed, the efficacy and efficiency of the proposed method are investigated, and finally its reliability is proven.

  10. On the evolution of the density probability density function in strongly self-gravitating systems

    International Nuclear Information System (INIS)

    Girichidis, Philipp; Konstandin, Lukas; Klessen, Ralf S.; Whitworth, Anthony P.

    2014-01-01

    The time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere. We demonstrate that a pressure-free collapse results in a power-law tail on the high-density side of the PDF. The slope quickly asymptotes to the functional form P_V(ρ) ∝ ρ^(-1.54) for the (volume-weighted) PDF and P_M(ρ) ∝ ρ^(-0.54) for the corresponding mass-weighted distribution. From the simple approximation of the PDF we derive analytic descriptions for mass accretion, finding that dynamically quiet systems with narrow density PDFs lead to retarded star formation and low star formation rates (SFRs). Conversely, strong turbulent motions that broaden the PDF accelerate the collapse causing a bursting mode of star formation. Finally, we compare our theoretical work with observations. The measured SFRs are consistent with our model during the early phases of the collapse. Comparison of observed column density PDFs with those derived from our model suggests that observed star-forming cores are roughly in free-fall.

  11. Know the risk, take the win: how executive functions and probability processing influence advantageous decision making under risk conditions.

    Science.gov (United States)

    Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete

    2014-01-01

    Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.

  12. Probability of detection - Comparative study of computed and film radiography for high-energy applications

    International Nuclear Information System (INIS)

    Venkatachalam, R.; Venugopal, M.; Prasad, T.

    2007-01-01

    Suitability of computed radiography (CR) with Ir-192, Co-60 and up to 9 MeV x-rays for weld inspections is of importance to many heavy engineering and aerospace industries. CR is preferred because of shorter exposure and processing times compared with film-based radiography, and digital images offer other advantages such as image enhancement, quantitative measurements and easier archival. This paper describes systematic experimental approaches and image quality metrics used to compare the imaging performance of CR with film-based radiography. Experiments were designed using six-sigma methodology to validate the performance of CR for steel thicknesses up to 160 mm with Ir-192, Co-60 and x-ray energies varying from 100 kV up to 9 MeV. Weld specimens with defects such as lack of fusion, penetration, cracks, concavity and porosities were studied for evaluating the radiographic sensitivity and imaging performance of the system. Attempts were also made to quantify the probability of detection using specimens with artificial and natural defects for various experimental conditions, and the results were compared with film-based systems. (authors)

  13. Probability of burn-through of defective 13 kA splices at increased energy levels

    CERN Document Server

    Verweij, A

    2011-01-01

    In many 13 kA splices in the machine there is a lack of bonding between the superconducting cable and the stabilising copper, along with a bad contact between the bus stabiliser and the splice stabiliser. In case of a quench of such a defective splice, the current cannot bypass the cable through the copper, leading to excessive local heating of the cable. This may result in a thermal runaway and burn-through of the cable in a time shorter than the time constant of the circuit. Since it is not possible to protect against this fast thermal runaway, one has to limit the current to a level small enough that a burn-through cannot occur. Prompt quenching of the joint, and quenching due to heat propagation through the bus and through the helium, are considered. Probabilities of joint burn-through are given for the RB circuit for beam energies of 3.5, 4 and 4.5 TeV, and decay time constants of the RB circuit of 50 and 68 s.

  14. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Cieplak, Agnieszka M.; Slosar, Anže, E-mail: acieplak@bnl.gov, E-mail: anze@bnl.gov [Brookhaven National Laboratory, Bldg 510, Upton, NY, 11973 (United States)

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal-to-noise.
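
    The expansion itself is simple to reproduce: map the flux F ∈ [0, 1] to x ∈ [-1, 1] and project the PDF onto Legendre polynomials, each coefficient being an expectation of P_n and hence a combination of moments. The mock flux sample below is an invented stand-in for real forest spectra.

      import numpy as np
      from scipy.special import eval_legendre

      rng = np.random.default_rng(2)
      flux = np.exp(-rng.lognormal(-1.0, 1.0, 100_000))   # mock F, always in (0, 1)

      x = 2.0 * flux - 1.0   # map [0, 1] -> [-1, 1]

      # n-th Legendre coefficient of the PDF: c_n = (2n+1)/2 * E[P_n(x)],
      # so each c_n is a linear combination of the first n moments of x.
      coeffs = [(2 * n + 1) / 2.0 * eval_legendre(n, x).mean() for n in range(8)]
      print(np.round(coeffs, 4))

      # Reconstruct the PDF from the truncated expansion; this can be
      # compared against np.histogram(x, density=True) on the same grid.
      grid = np.linspace(-1, 1, 200)
      pdf_rec = sum(c * eval_legendre(n, grid) for n, c in enumerate(coeffs))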

  16. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    Science.gov (United States)

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2007-11-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Re_τ = 1080, based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.
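
    Of the two micromixing closures compared, the interaction by exchange with the mean (IEM) is simple enough to sketch: each particle's scalar relaxes toward the ensemble mean at a rate set by the turbulent frequency. The model constant C_φ = 2.0 and the fixed frequency are conventional illustrative choices, not values taken from the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      phi = rng.normal(0.0, 1.0, 10_000)   # particle scalar values (arbitrary initial PDF)
      c_phi, omega, dt = 2.0, 1.0, 1e-3    # model constant, turbulent frequency, time step

      for _ in range(2000):
          # IEM: d(phi_i)/dt = -0.5 * C_phi * omega * (phi_i - <phi>)
          phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt

      # The scalar variance decays as exp(-C_phi * omega * t); the mean is conserved.
      print(phi.mean(), phi.var(), np.exp(-c_phi * omega * 2000 * dt))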

  17. Exact probability function for bulk density and current in the asymmetric exclusion process

    Science.gov (United States)

    Depken, Martin; Stinchcombe, Robin

    2005-03-01

    We examine the asymmetric simple exclusion process with open boundaries, a paradigm of driven diffusive systems, having a nonequilibrium steady-state transition. We provide a full derivation and expanded discussion of results previously reported briefly in M. Depken and R. Stinchcombe, Phys. Rev. Lett. 93, 040602 (2004). In particular we derive an exact form for the joint probability function for the bulk density and current, both for finite systems and in the thermodynamic limit. The resulting distribution is non-Gaussian, and while the fluctuations in the current are continuous at the continuous phase transitions, the density fluctuations are discontinuous. The derivations are done by using the standard operator algebraic techniques and by introducing a modified version of the original operator algebra. As a by-product of these considerations we also arrive at a very simple way of calculating the normalization constant appearing in the standard treatment with the operator algebra. Like the partition function in equilibrium systems, this normalization constant is shown to completely characterize the fluctuations, albeit in a very different manner.

  18. Faster exact Markovian probability functions for motif occurrences: a DFA-only approach.

    Science.gov (United States)

    Ribeca, Paolo; Raineri, Emanuele

    2008-12-15

    The computation of the statistical properties of motif occurrences has an obviously relevant application: patterns that are significantly over- or under-represented in genomes or proteins are interesting candidates for biological roles. However, the problem is computationally hard; as a result, virtually all the existing motif finders use fast but approximate scoring functions, in spite of the fact that they have been shown to produce systematically incorrect results. A few interesting exact approaches are known, but they are very slow and hence not practical in the case of realistic sequences. We give an exact solution, solely based on deterministic finite-state automata (DFAs), to the problem of finding the whole relevant part of the probability distribution function of a simple-word motif in a homogeneous (biological) sequence. Out of that, the z-value can always be computed, while the P-value can be obtained either when it is not too extreme with respect to the number of floating-point digits available in the implementation, or when the number of pattern occurrences is moderately low. In particular, the time complexity of the algorithms remains moderate for Markov models of moderate order, and we manage to obtain an algorithm which is both easily interpretable and efficient. This approach can be used for exact statistical studies of very long genomes and protein sequences, as we illustrate with some examples on the scale of the human genome.
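
    A minimal version of the DFA-based computation can be sketched for an order-0 (i.i.d.) sequence model: build the KMP automaton of the motif and propagate exact occurrence-count probabilities by dynamic programming. The motif, base frequencies and sequence length are arbitrary examples; real motif software would additionally need the higher-order Markov and numerical-range machinery the paper describes.

      import numpy as np

      def kmp_automaton(pattern, alphabet):
          # delta[s][a]: number of matched characters after reading `a`
          # when `s` characters of `pattern` are currently matched
          m = len(pattern)
          delta = [dict() for _ in range(m + 1)]
          for s in range(m + 1):
              for a in alphabet:
                  if s < m and a == pattern[s]:
                      delta[s][a] = s + 1
                  else:
                      # longest suffix of (matched prefix + a) that is a prefix
                      t = (pattern[:s] + a)[-m:]
                      while t and not pattern.startswith(t):
                          t = t[1:]
                      delta[s][a] = len(t)
          return delta

      def occurrence_distribution(pattern, probs, n, kmax):
          # Exact distribution of the number of (possibly overlapping) motif
          # occurrences in an i.i.d. sequence of length n; the kmax bin means ">= kmax".
          alphabet = list(probs)
          delta = kmp_automaton(pattern, alphabet)
          m = len(pattern)
          P = np.zeros((m + 1, kmax + 1))   # P[s, k]: state s with k occurrences
          P[0, 0] = 1.0
          for _ in range(n):
              Q = np.zeros_like(P)
              for s in range(m + 1):
                  for a in alphabet:
                      s2 = delta[s][a]
                      hit = 1 if s2 == m else 0
                      for k in range(kmax + 1):
                          Q[s2, min(k + hit, kmax)] += P[s, k] * probs[a]
              P = Q
          return P.sum(axis=0)

      probs = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}   # illustrative base frequencies
      print(np.round(occurrence_distribution("TATA", probs, n=500, kmax=8), 5))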

  19. Identifying functional reorganization of spelling networks: an individual peak probability comparison approach

    Science.gov (United States)

    Purcell, Jeremy J.; Rapp, Brenda

    2013-01-01

    Previous research has shown that damage to the neural substrates of orthographic processing can lead to functional reorganization during reading (Tsapkini et al., 2011); in this research we ask if the same is true for spelling. To examine the functional reorganization of spelling networks we present a novel three-stage Individual Peak Probability Comparison (IPPC) analysis approach for comparing the activation patterns obtained during fMRI of spelling in a single brain-damaged individual with dysgraphia to those obtained in a set of non-impaired control participants. The first analysis stage characterizes the convergence in activations across non-impaired control participants by applying a technique typically used for characterizing activations across studies: Activation Likelihood Estimate (ALE) (Turkeltaub et al., 2002). This method was used to identify locations that have a high likelihood of yielding activation peaks in the non-impaired participants. The second stage provides a characterization of the degree to which the brain-damaged individual's activations correspond to the group pattern identified in Stage 1. This involves performing a Mahalanobis distance statistics analysis (Tsapkini et al., 2011) that compares each of a control group's peak activation locations to the nearest peak generated by the brain-damaged individual. The third stage evaluates the extent to which the brain-damaged individual's peaks are atypical relative to the range of individual variation among the control participants. This IPPC analysis allows for a quantifiable, statistically sound method for comparing an individual's activation pattern to the patterns observed in a control group and, thus, provides a valuable tool for identifying functional reorganization in a brain-damaged individual with impaired spelling. Furthermore, this approach can be applied more generally to compare any individual's activation pattern with that of a set of other individuals. PMID:24399981

  20. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Levy processes, Gerber–Shiu functions and dependence.
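
    For the classical compound Poisson model with exponential claims, the infinite-horizon ruin probability has the textbook closed form ψ(u) = exp(-θu/(μ(1+θ)))/(1+θ), which makes a convenient check for a simulation sketch; the parameter values below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(4)
      lam, mu, theta = 1.0, 1.0, 0.2        # claim rate, mean claim size, safety loading
      c = (1 + theta) * lam * mu            # premium rate
      u, horizon, n_paths = 5.0, 500.0, 10_000

      ruined = 0
      for _ in range(n_paths):
          t, claims = 0.0, 0.0
          while True:
              t += rng.exponential(1.0 / lam)      # next claim arrival time
              if t > horizon:
                  break
              claims += rng.exponential(mu)        # exponential claim size
              if u + c * t - claims < 0.0:         # ruin can only occur at claim instants
                  ruined += 1
                  break

      print("simulated psi(u):", ruined / n_paths)
      # Exact for exponential claims: psi(u) = exp(-theta*u/(mu*(1+theta))) / (1+theta)
      print("exact psi(u):    ", np.exp(-theta * u / (mu * (1 + theta))) / (1 + theta))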

  1. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    Science.gov (United States)

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal-mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: an exponential decline phase, a linear decline phase, an equilibrium phase, and a postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  2. Using probability density function in the procedure for recognition of the type of physical exercise

    Directory of Open Access Journals (Sweden)

    Cakić Nikola

    2017-01-01

    This paper presents a method for recognition of physical exercises, using only the triaxial accelerometer of a smartphone. The smartphone itself is free to move inside the subject's pocket. Exercises for leg muscle strengthening performed from the subject's standing position (squat, right knee rise and lunge with the right leg) were analyzed. All exercises were performed with the accelerometric sensor of a smartphone placed in the pocket next to the leg used for the exercises. In order to test the proposed recognition method, the knee rise exercise of the opposite leg, with the same position of the sensor, was randomly selected. Filtering of the raw accelerometric signals was carried out using a Butterworth tenth-order low-pass filter. The filtered signals from each of the three axes were described using three signal descriptors. After the descriptors were calculated, a probability density function was constructed for each of the descriptors. The program that implemented the proposed recognition method was executed online within an Android application on the smartphone. Signals from two male and two female subjects were taken as a reference for exercise recognition. The exercise recognition accuracy was 94.22% for the three performed exercises, and 85.33% for all four considered exercises.
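
    The pre-processing step is standard and easy to reproduce with SciPy; the 50 Hz sampling rate and 5 Hz cut-off below are assumed values for illustration, since the abstract does not state them.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      fs = 50.0        # assumed smartphone accelerometer sampling rate, Hz
      cutoff = 5.0     # assumed cut-off; slow limb motion lies well below this

      # Tenth-order low-pass Butterworth filter, as in the abstract.
      sos = butter(10, cutoff, btype="low", fs=fs, output="sos")

      t = np.arange(0.0, 10.0, 1.0 / fs)
      rng = np.random.default_rng(5)
      raw = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.normal(size=t.size)  # one axis
      smooth = sosfiltfilt(sos, raw)   # zero-phase filtering keeps repetition timing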

  3. Effects of translation-rotation coupling on the displacement probability distribution functions of boomerang colloidal particles

    Science.gov (United States)

    Chakrabarty, Ayan; Wang, Feng; Sun, Kai; Wei, Qi-Huo

    Prior studies have shown that low-symmetry particles such as micro-boomerangs exhibit Brownian motion rather different from that of high-symmetry particles, because convenient tracking points (TPs) usually do not coincide with the center of hydrodynamic stress (CoH), where the translational and rotational motions are decoupled. In this paper we study the effects of the translation-rotation coupling on the displacement probability distribution functions (PDFs) of boomerang colloidal particles with symmetric arms. By tracking the motions of different points on the particle symmetry axis, we show that as the distance between the TP and the CoH is increased, the effects of translation-rotation coupling become pronounced, making the short-time 2D PDF for fixed initial orientation change from an elliptical to a crescent shape, and the angle-averaged PDFs change from an ellipsoidal-particle-like PDF to a shape with a Gaussian top and long displacement tails. We also observed that at long times the PDFs revert to Gaussian. The crescent shape of the 2D PDF provides a clear physical picture of the non-zero mean displacements observed in boomerang particles.

  4. Characteristics of the probability function for three random-walk models of reaction--diffusion processes

    International Nuclear Information System (INIS)

    Musho, M.K.; Kozak, J.J.

    1984-01-01

    A method is presented for calculating exactly the relative width (σ²)^(1/2)/⟨n⟩, the skewness γ₁, and the kurtosis γ₂ characterizing the probability distribution function for three random-walk models of diffusion-controlled processes. For processes in which a diffusing coreactant A reacts irreversibly with a target molecule B situated at a reaction center, three models are considered. The first is the traditional one of an unbiased, nearest-neighbor random walk on a d-dimensional periodic/confining lattice with traps; the second involves the consideration of unbiased, non-nearest-neighbor (i.e., variable-step-length) walks on the same d-dimensional lattice; and the third deals with the case of a biased, nearest-neighbor walk on a d-dimensional lattice (wherein a walker experiences a potential centered at the deep trap site of the lattice). Our method, which has been described in detail elsewhere [P.A. Politowicz and J. J. Kozak, Phys. Rev. B 28, 5549 (1983)], is based on the use of group-theoretic arguments within the framework of the theory of finite Markov processes.

  5. On the method of logarithmic cumulants for parametric probability density function estimation.

    Science.gov (United States)

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
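
    As a concrete instance of MoLC: for the plain gamma distribution (not the generalized families treated in the paper) the first two log-cumulants are c₁ = ψ(κ) + ln θ and c₂ = ψ′(κ), so the shape follows by inverting the trigamma function and the scale from c₁. A minimal sketch:

      import numpy as np
      from scipy.special import digamma, polygamma
      from scipy.optimize import brentq

      def molc_gamma(x):
          # Sample log-cumulants: c1 = mean(ln x), c2 = var(ln x)
          lx = np.log(x)
          c1, c2 = lx.mean(), lx.var()
          # Invert trigamma(kappa) = c2 for the shape parameter
          kappa = brentq(lambda k: polygamma(1, k) - c2, 1e-3, 1e3)
          theta = np.exp(c1 - digamma(kappa))   # scale from c1 = psi(kappa) + ln(theta)
          return kappa, theta

      rng = np.random.default_rng(6)
      x = rng.gamma(shape=3.0, scale=2.0, size=50_000)
      print(molc_gamma(x))    # close to (3.0, 2.0)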

  6. Probability density function of a puff dispersing from the wall of a turbulent channel

    Science.gov (United States)

    Nguyen, Quoc; Papavassiliou, Dimitrios

    2015-11-01

    Study of the dispersion of passive contaminants in turbulence has proved helpful in understanding fundamental heat and mass transfer phenomena. Many simulation and experimental works have been carried out to locate and track the motion of scalar markers in a flow. One method is to combine Direct Numerical Simulation (DNS) and Lagrangian Scalar Tracking (LST) to record the locations of markers. While this has proved useful, the high computational cost remains a concern. In this study, we develop a model that reproduces results obtained by DNS and LST for turbulent flow. Puffs of markers with different Schmidt numbers were released into a flow field at a friction Reynolds number of 150. The point of release was at the channel wall, so that both diffusion and convection contribute to the puff dispersion pattern, defining different stages of dispersion. Based on outputs from DNS and LST, we seek the most suitable and feasible probability density function (PDF) to represent the distribution of markers in the flow field. Such a PDF would play a significant role in predicting heat and mass transfer in wall turbulence, and would prove helpful where DNS and LST results are not available.

  7. Probability distribution functions for ELM bursts in a series of JET tokamak discharges

    International Nuclear Information System (INIS)

    Greenhough, J; Chapman, S C; Dendy, R O; Ward, D J

    2003-01-01

    A novel statistical treatment of the full raw edge localized mode (ELM) signal from a series of previously studied JET plasmas is tested. The approach involves constructing probability distribution functions (PDFs) for ELM amplitudes and time separations, and quantifying the fit between the measured PDFs and model distributions (Gaussian, inverse exponential) and Poisson processes. Uncertainties inherent in the discreteness of the raw signal require the application of statistically rigorous techniques to distinguish ELM data points from background, and to extrapolate peak amplitudes. The accuracy of PDF construction is further constrained by the relatively small number of ELM bursts (several hundred) in each sample. In consequence, the statistical technique is found to be difficult to apply to low frequency (typically Type I) ELMs, so the focus is narrowed to four JET plasmas with high frequency (typically Type III) ELMs. The results suggest that there may be several fundamentally different kinds of Type III ELMing process at work. It is concluded that this novel statistical treatment can be made to work, may have wider applications to ELM data, and has immediate practical value as an additional quantitative discriminant between classes of ELMing behaviour.

  8. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    Science.gov (United States)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher-order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensionality reduction method that is capable of maintaining the higher-order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second-order statistics and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.

  9. Taking potential probability function maps to the local scale and matching them with land use maps

    Science.gov (United States)

    Garg, Saryu; Sinha, Vinayak; Sinha, Baerbel

    2013-04-01

    Source-receptor models have been developed using different methods. Residence-time-weighted concentration back-trajectory analysis and the Potential Source Contribution Function (PSCF) are the two most popular techniques for identification of potential sources of a substance in a defined geographical area. Both techniques use back trajectories calculated using global models and assign values of probability/concentration to various locations in an area. These values represent the probability of threshold exceedances / the average concentration measured at the receptor in air masses with a certain residence time over a source area. Both techniques, however, have only been applied to regional and long-range transport phenomena, due to inherent limitations with respect to both spatial accuracy and temporal resolution of the back-trajectory calculations. Employing the above-mentioned concepts of residence-time-weighted concentration back-trajectory analysis and PSCF, we developed a source-receptor model capable of identifying local and regional sources of air pollutants like particulate matter (PM), NOx, SO2 and VOCs. We use 1- to 30-minute averages of concentration values and wind direction and speed from a single receptor site or from multiple receptor sites to trace the air mass back in time. The model code assumes all the atmospheric transport to be Lagrangian and linearly extrapolates air masses reaching the receptor location backwards in time for a fixed number of steps. We restrict the model run to the lifetime of the chemical species under consideration. For long-lived species the model run is limited to 180 trees/gridsquare); moderate concentrations for agricultural lands with low tree density (1.5-2.5 ppbv for 250 μg/m3 for traffic hotspots in Chandigarh City are observed. Based on the validation against the land use maps, the model appears to do an excellent job in source apportionment and identifying emission hotspots. Acknowledgement: We thank the IISER

  10. Probability distribution functions of turbulence in seepage-affected alluvial channel

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Anurag; Kumar, Bimlesh, E-mail: anurag.sharma@iitg.ac.in, E-mail: bimk@iitg.ac.in [Department of Civil Engineering, Indian Institute of Technology Guwahati, 781039 (India)

    2017-02-15

    The present experimental study is carried out on the probability distribution functions (PDFs) of turbulent flow characteristics within near-bed and away-from-bed surfaces for both no-seepage and seepage flows. Laboratory experiments were conducted in a plane sand bed for no seepage (NS), 10% seepage (10%S) and 15% seepage (15%S) cases. The experimental calculation of the PDFs of turbulent parameters such as Reynolds shear stress, velocity fluctuations and bursting events is compared with the theoretical expression obtained by a Gram–Charlier (GC)-based exponential distribution. Experimental observations follow the computed PDF distributions for both the no-seepage and seepage cases. The Jensen-Shannon divergence (JSD) is used to measure the similarity between theoretical and experimental PDFs. The value of the JSD for the PDFs of velocity fluctuations lies between 0.0005 and 0.003, while the JSD value for the PDFs of Reynolds shear stress varies between 0.001 and 0.006. Even with the application of seepage, the PDF distributions of bursting events, sweeps and ejections are well characterized by the exponential distribution of the GC series, except for a slight deflection of inward and outward interactions, which may be due to weaker events. The value of the JSD for outward and inward interactions ranges from 0.0013 to 0.032, while the JSD value for sweep and ejection events varies between 0.0001 and 0.0025. A theoretical expression for the PDF of turbulence intensity is developed in the present study, which agrees well with the experimental observations, with the JSD lying between 0.007 and 0.015. The work presented is potentially applicable to the probability distribution of mobile-bed sediments in seepage-affected alluvial channels typically characterized by the various turbulent parameters. The purpose of PDF estimation from experimental data is that it provides a complete numerical description in the areas of turbulent flow at either a single point or a finite number of points.
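
    The similarity measure used above is straightforward to reproduce; the sketch below computes the JSD between a histogram of samples and a reference PDF evaluated on the same bins, with a standard normal as an invented stand-in for the Gram-Charlier forms.

      import numpy as np
      from scipy.spatial.distance import jensenshannon
      from scipy.stats import norm

      rng = np.random.default_rng(7)
      samples = rng.normal(0.0, 1.0, 100_000)   # stand-in for velocity fluctuations

      edges = np.linspace(-5.0, 5.0, 51)
      centers = 0.5 * (edges[:-1] + edges[1:])
      p, _ = np.histogram(samples, bins=edges)   # empirical bin counts (normalized below)
      q = norm.pdf(centers) * np.diff(edges)     # model probabilities on the same bins

      # SciPy returns the JS *distance*, the square root of the divergence.
      print(jensenshannon(p, q) ** 2)            # small value for a good fit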

  11. Three-dimensional analytic probabilities of coupled vibrational-rotational-translational energy transfer for DSMC modeling of nonequilibrium flows

    International Nuclear Information System (INIS)

    Adamovich, Igor V.

    2014-01-01

    A three-dimensional, nonperturbative, semiclassical analytic model of vibrational energy transfer in collisions between a rotating diatomic molecule and an atom, and between two rotating diatomic molecules (the Forced Harmonic Oscillator–Free Rotation model), has been extended to incorporate rotational relaxation and coupling between vibrational, translational, and rotational energy transfer. The model is based on analysis of semiclassical trajectories of rotating molecules interacting by a repulsive exponential atom-to-atom potential. The model predictions are compared with the results of three-dimensional close-coupled semiclassical trajectory calculations using the same potential energy surface. The comparison demonstrates good agreement between analytic and numerical probabilities of rotational and vibrational energy transfer processes over a wide range of total collision energies, rotational energies, and impact parameters. The model predicts probabilities of single-quantum and multi-quantum vibrational-rotational transitions and is applicable up to very high collision energies and quantum numbers. Closed-form analytic expressions for these transition probabilities lend themselves to straightforward incorporation into DSMC nonequilibrium flow codes.

  12. Electroweak splitting functions and high energy showering

    Science.gov (United States)

    Chen, Junmou; Han, Tao; Tweedie, Brock

    2017-11-01

    We derive the electroweak (EW) collinear splitting functions for the Standard Model, including the massive fermions, gauge bosons and the Higgs boson. We first present the splitting functions in the limit of unbroken SU(2)_L × U(1)_Y and discuss their general features in the collinear and soft-collinear regimes. These are the leading contributions at a splitting scale (k_T) far above the EW scale (v). We then systematically incorporate EW symmetry breaking (EWSB), which leads to the emergence of additional "ultra-collinear" splitting phenomena and naive violations of the Goldstone-boson Equivalence Theorem. We suggest a particularly convenient choice of non-covariant gauge (dubbed "Goldstone Equivalence Gauge") that disentangles the effects of Goldstone bosons and gauge fields in the presence of EWSB, and allows trivial book-keeping of leading power corrections in v/k_T. We implement a comprehensive, practical EW showering scheme based on these splitting functions using a Sudakov evolution formalism. Novel features in the implementation include a complete accounting of ultra-collinear effects, matching between shower and decay, kinematic back-reaction corrections in multi-stage showers, and mixed-state evolution of neutral bosons (γ/Z/h) using density-matrices. We employ the EW showering formalism to study a number of important physical processes at O(1-10 TeV) energies. They include (a) electroweak partons in the initial state as the basis for vector-boson-fusion; (b) the emergence of "weak jets" such as those initiated by transverse gauge bosons, with individual splitting probabilities as large as O(35%); (c) EW showers initiated by top quarks, including Higgs bosons in the final state; (d) the occurrence of O(1) interference effects within EW showers involving the neutral bosons; and (e) EW corrections to new physics processes, as illustrated by production of a heavy vector boson (W′) and the subsequent showering of its decay products.

  13. Multi-functional energy plantation; Multifunktionella bioenergiodlingar

    Energy Technology Data Exchange (ETDEWEB)

    Boerjesson, Paal [Lund Univ. (Sweden). Environmental and Energy Systems Studies; Berndes, Goeran; Fredriksson, Fredrik [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept. of Physical Resource Theory; Kaaberger, Tomas [Ecotraffic, Goeteborg (Sweden)

    2002-02-01

    future if this problem will be valued differently. The value of increased carbon accumulation in mineral soils and reduced carbon dioxide emissions from organic soils is estimated to be equivalent to a few percent and half of the production cost in conventional Salix plantations, respectively. These values may also change in the future if carbon sinks in agriculture are included as an approved mitigation option within the Kyoto agreement. Based on an analysis of possible combinations of environmental services achieved in specific plantations, it is estimated that biomass can be produced at a negative cost in around 100,000 hectares of multi-functional energy plantations, when the value of the environmental services is included. The production cost in another 250,000 hectares of plantations is estimated to be halved. This is equivalent to around 6 and 11 TWh of biomass per year, respectively. Economic incentives also exist for municipal wastewater plants to utilise vegetation filters for wastewater and sewage-sludge treatment. Cadmium removal and increased soil fertility will give a minor increase in the income for the farmer. However, cadmium removal will result in increased costs later in the Salix fuel chain, due to increased costs of flue-gas cleaning during combustion. Thus, to overcome this economic barrier, subsidies will probably be needed for heating plants utilising cadmium-contaminated biomass. The possibility of achieving an income from increased soil carbon accumulation will depend on whether this option becomes an approved mechanism. Today, the Swedish greenhouse gas mitigation policy does not include this option. Some of the potential multi-functional energy plantations (e.g. buffer strips for reducing nutrient leaching and vegetation filters for treatment of polluted drainage water) result in increased cultivation costs for the farmer, and thus increased economic barriers. Examples of measures to overcome such barriers are dedicated subsidies for multi-functional

  14. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    Science.gov (United States)

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
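
    The two-step recipe (spectral shaping of white Gaussian noise, then a pointwise transform of the marginal) can be sketched as follows; the power-law spectrum and the exponential target marginal are arbitrary example choices.

      import numpy as np
      from scipy.stats import norm, expon

      rng = np.random.default_rng(8)
      n = 256

      # Step 1: spectral shaping - color white Gaussian noise so that the
      # power spectral density follows |A(k)|^2 ~ k^-3 (an example choice).
      white = rng.normal(size=(n, n))
      kx = np.fft.fftfreq(n)
      k = np.hypot(kx[None, :], kx[:, None])
      k[0, 0] = 1.0                                 # avoid division by zero at DC
      field = np.fft.ifft2(np.fft.fft2(white) * k**-1.5).real
      field = (field - field.mean()) / field.std()  # zero mean, unit variance

      # Step 2: pointwise transform of the colored Gaussian marginal into the
      # desired one via u = Phi(g), x = F^{-1}(u); here an exponential marginal.
      u = norm.cdf(field)
      x = expon.ppf(u)

    As the abstract itself concedes, the nonlinear marginal transform slightly perturbs the imposed spectrum, which is why the method is characterized as an engineering approach.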

  15. Relativistic calculation of Kβ hypersatellite energies and transition probabilities for selected atoms with 13 ≤ Z ≤ 80

    International Nuclear Information System (INIS)

    Costa, A M; Martins, M C; Santos, J P; Indelicato, P; Parente, F

    2006-01-01

    Energies and transition probabilities of Kβ hypersatellite lines are computed using the Dirac-Fock model for several values of Z throughout the periodic table. The influence of the Breit interaction on the energy shifts from the corresponding diagram lines and on the Kβ_1^h/Kβ_3^h intensity ratio is evaluated. The widths of the double-K-hole levels are calculated for Al and Sc. The results are compared to experiment and to other theoretical calculations.

  16. Probability density functions for radial anisotropy: implications for the upper 1200 km of the mantle

    Science.gov (United States)

    Beghein, Caroline; Trampert, Jeannot

    2004-01-01

    The presence of radial anisotropy in the upper mantle, transition zone and top of the lower mantle is investigated by applying a model space search technique to Rayleigh and Love wave phase velocity models. Probability density functions are obtained independently for S-wave anisotropy, P-wave anisotropy, the intermediate parameter η, V_P, V_S and density anomalies. The likelihoods for P-wave and S-wave anisotropy beneath continents cannot be explained by a dry olivine-rich upper mantle at depths larger than 220 km. Indeed, while shear-wave anisotropy tends to disappear below 220 km depth in continental areas, P-wave anisotropy is still present but its sign changes compared to the uppermost mantle. This could be due to an increase with depth of the amount of pyroxene relative to olivine in these regions, although the presence of water, partial melt or a change in the deformation mechanism cannot be ruled out as yet. A similar observation is made for old oceans, but not for young ones, where V_SH > V_SV appears likely down to 670 km depth and V_PH > V_PV down to 400 km depth. The change of sign in P-wave anisotropy seems to be qualitatively correlated with the presence of the Lehmann discontinuity, generally observed beneath continents and some oceans but not beneath ridges. Parameter η shows a similar age-related depth pattern as shear-wave anisotropy in the uppermost mantle and it undergoes the same change of sign as P-wave anisotropy at 220 km depth. The ratio between dln V_S and dln V_P suggests that a chemical component is needed to explain the anomalies in most places at depths greater than 220 km. More tests are needed to infer the robustness of the results for density, but they do not affect the results for anisotropy.

  17. Joint Bayesian Estimation of Quasar Continua and the Lyα Forest Flux Probability Distribution Function

    Science.gov (United States)

    Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan

    2017-08-01

    We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ - 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ - 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.

  18. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  19. ON PROBABILITY FUNCTION OF TRIP ROUTE CHOICE IN PASSENGER TRANSPORT SYSTEM OF CITIES

    Directory of Open Access Journals (Sweden)

    N. Nefedof

    2014-02-01

    The results of statistical processing of experimental research data gathered in Kharkiv, aimed at determining the relation between the probability of a passenger's trip route choice and the actual vehicle waiting times at bus terminals, are presented.

  20. A research on the importance function used in the calculation of the fracture probability through the optimum method

    International Nuclear Information System (INIS)

    Zegong, Zhou; Changhong, Liu

    1995-01-01

    Taking the original distribution function, shifted by an appropriate distance, as the importance function, this paper uses the variation of the similarity ratio between the original function and the importance function as the objective function; the optimum shifting distance is then obtained by an optimization method. The optimum importance function resulting from the optimization ensures that the number of Monte Carlo simulations is decreased while good estimates of the yearly failure probabilities are still obtained.
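
    The idea of using a shifted copy of the original distribution as the importance function is easily demonstrated on a toy tail-probability problem; the standard normal target, the shift equal to the threshold, and the sample size are illustrative choices, not the failure model of the paper.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(9)
      a, n = 4.0, 100_000                   # estimate P(X > a) for X ~ N(0, 1)

      # Importance function: the original density shifted by distance `a`.
      y = rng.normal(loc=a, size=n)         # samples from the shifted density
      w = norm.pdf(y) / norm.pdf(y, loc=a)  # likelihood ratio weights
      est = np.mean((y > a) * w)

      print("importance sampling:", est)
      print("exact:              ", norm.sf(a))    # ~3.17e-5
      # A plain Monte Carlo estimate with n samples would see only ~3 hits on average.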

  1. A tool for simulating collision probabilities of animals with marine renewable energy devices.

    Directory of Open Access Journals (Sweden)

    Pál Schmitt

    The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal, as well as that of the device, must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details as well as potential issues and limitations in the application of the resulting probability distributions are highlighted.

  2. A tool for simulating collision probabilities of animals with marine renewable energy devices.

    Science.gov (United States)

    Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise

    2017-01-01

    The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details as well as potential issues and limitations in the application of the resulting probability distributions are highlighted.
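
    The record above describes the collision test only conceptually; the toy Monte Carlo below reduces it to spheres to show the 3D-plus-time bookkeeping. The circular device path, the straight animal transits and all dimensions are invented placeholders, not the published tidal-kite model.

```python
# Toy 4D (3D space + time) collision check: a device moving on a known path
# and an animal transiting the site; both are reduced to spheres here.
import numpy as np

rng = np.random.default_rng(1)
R_DEV, R_ANI, SPEED = 2.0, 0.3, 1.5          # metres, metres, m/s (assumed)
TS = np.arange(0.0, 60.0, 0.05)              # simulated window, 0.05 s steps

def device_pos(t):
    # Placeholder kite-like path: a 10 m radius circle traversed in 20 s.
    w = 2.0 * np.pi / 20.0
    return np.stack([10.0 * np.cos(w * t), 10.0 * np.sin(w * t),
                     np.full_like(t, -5.0)], axis=-1)

def transit_collides():
    # Animal enters at a random point of an upstream plane and swims
    # straight through the site at constant speed.
    start = np.array([-20.0, rng.uniform(-20.0, 20.0), rng.uniform(-10.0, 0.0)])
    animal = start + SPEED * TS[:, None] * np.array([1.0, 0.0, 0.0])
    dist = np.linalg.norm(animal - device_pos(TS), axis=1)
    return bool((dist < R_DEV + R_ANI).any())

n = 20_000
p = sum(transit_collides() for _ in range(n)) / n
print(f"collision probability per transit ~ {p:.4f}")
```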

  3. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    International Nuclear Information System (INIS)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P; Gorbatenko, B B

    2015-01-01

    Investigated are the statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, for different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of the numerical experiments. (laser applications and other topics in quantum electronics)

  4. Probability of Interference-Optimal and Energy-Efficient Analysis for Topology Control in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ning Li

    2016-11-01

    Full Text Available Because wireless sensor networks (WSNs) have been widely used in recent years, how to reduce their energy consumption and interference has become a major issue. Topology control is a common and effective approach to improving network performance, for example by reducing energy consumption and network interference or improving network connectivity. Many topology control algorithms reduce network interference by dynamically adjusting the node transmission range; however, whether such an adjustment actually reduces interference is probabilistic. Therefore, in this paper, we analyze the probability of interference-optimality for WSNs and prove that this probability increases as the original transmission range increases. For a given original transmission range r, the probability reaches its maximum when the adjusted transmission range is 0.85r in homogeneous networks and 0.84r in heterogeneous networks. In addition, we also prove that when the network is energy-efficient, the network is interference-optimal with probability 1 in both homogeneous and heterogeneous networks.

  5. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    Science.gov (United States)

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

    A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model, with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbyte of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132 column printer. A graphics adapter and color display are optional. © 1991.
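
    A Python sketch of the arithmetic described above (TRIAGG itself is Turbo Pascal): triangular moments per component, two of the fractiles, and aggregation of means and standard deviations under the two polar dependence situations. The province estimates are invented for illustration.

```python
# Triangular-distribution assessment and aggregation, following the scheme
# described in the record. Component estimates below are made-up numbers.
import numpy as np
from scipy import stats

components = {                    # (minimum, mode, maximum), illustrative
    "province A": (0.0, 2.0, 10.0),
    "province B": (1.0, 3.0, 6.0),
    "province C": (0.5, 1.0, 4.0),
}

means, sds = [], []
for name, (a, m, b) in components.items():
    mean = (a + m + b) / 3.0
    var = (a*a + m*m + b*b - a*m - a*b - m*b) / 18.0
    means.append(mean)
    sds.append(np.sqrt(var))
    # scipy parametrization: c = (mode - min)/(max - min), loc = min
    dist = stats.triang((m - a) / (b - a), loc=a, scale=b - a)
    # F95 is exceeded with probability 0.95, hence the 5th percentile
    print(f"{name}: mean={mean:.2f} sd={np.sqrt(var):.2f} "
          f"F95={dist.ppf(0.05):.2f} F5={dist.ppf(0.95):.2f}")

agg_mean = sum(means)
sd_indep = float(np.sqrt(sum(s * s for s in sds)))  # complete independence
sd_corr = sum(sds)                                  # perfect positive correlation
print(f"aggregate: mean={agg_mean:.2f}, "
      f"sd={sd_indep:.2f} (independent) to {sd_corr:.2f} (correlated)")
```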

  6. Probability Density Functions for the CALIPSO Lidar Version 4 Cloud-Aerosol Discrimination (CAD) Algorithm

    Science.gov (United States)

    Liu, Z.; Kar, J.; Zeng, S.; Tackett, J. L.; Vaughan, M.; Trepte, C. R.; Omar, A. H.; Hu, Y.; Winker, D. M.

    2017-12-01

    In the CALIPSO retrieval algorithm, detection of layers in the lidar measurements is followed by their classification as "cloud" or "aerosol" using 5-dimensional probability density functions (PDFs). The five dimensions are the mean attenuated backscatter at 532 nm, the layer-integrated total attenuated color ratio, the mid-layer altitude, the integrated volume depolarization ratio, and latitude. The new version 4 (V4) level 2 (L2) data products, released in November 2016, are the first major revision to the L2 product suite since May 2010. Significant calibration changes in the V4 level 1 data necessitated substantial revisions to the V4 L2 CAD algorithm. Accordingly, a new set of PDFs was generated to derive the V4 L2 data products. The V4 CAD algorithm is now applied to layers detected in the stratosphere, where volcanic layers and occasional cloud and smoke layers are observed. Previously, these layers were designated as 'stratospheric' and not further classified. The V4 CAD algorithm is also applied to all layers detected at single-shot (333 m) resolution. In prior data releases, single-shot detections were uniformly classified as clouds. The CAD PDFs used in the earlier releases were generated using a full year (2008) of CALIPSO measurements. Because the CAD algorithm was not applied to stratospheric features, the properties of these layers were not incorporated into the PDFs. When building the V4 PDFs, the 2008 data were augmented with additional data from June 2011, and all stratospheric features were included. The Nabro and Puyehue-Cordon volcanos erupted in June 2011, and volcanic aerosol layers were observed in the upper troposphere and lower stratosphere in both the northern and southern hemispheres. The June 2011 data thus provide the stratospheric aerosol properties needed for comprehensive PDF generation. In contrast to earlier versions of the PDFs, which were generated based solely on observed distributions, construction of the V4 PDFs considered the …
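
    A one-dimensional toy version of the PDF-based discrimination step can make the idea concrete: a feature value is classified by comparing class-conditional densities and reporting a signed confidence. The Gaussian densities below are invented; the real CAD algorithm uses empirically built 5-dimensional PDFs.

```python
# 1D toy of PDF-based cloud-aerosol discrimination. The score mimics a
# signed confidence in [-100, 100]; positive values favor "cloud".
import numpy as np
from scipy import stats

cloud_pdf = stats.norm(loc=-1.5, scale=0.6).pdf     # e.g. log10 backscatter
aerosol_pdf = stats.norm(loc=-2.5, scale=0.5).pdf   # both densities assumed

def cad_score(x):
    fc, fa = cloud_pdf(x), aerosol_pdf(x)
    return 100.0 * (fc - fa) / (fc + fa)

for x in (-2.8, -2.0, -1.2):
    s = cad_score(x)
    print(f"feature {x:+.1f}: score {s:+6.1f} -> {'cloud' if s > 0 else 'aerosol'}")
```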

  7. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows

    Science.gov (United States)

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

    This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up of particle position and velocity, Z_p = (x_p, U_p), and is represented by its PDF p(t; y_p, V_p), which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables in the particle state vector, for example the fluid velocity seen by particles, Z_p = (x_p, U_p, U_s), and consequently handles an extended PDF p(t; y_p, V_p, V_s), which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions for the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian character of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, these developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to describe physical systems.
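
    A one-dimensional Euler-Maruyama sketch of the extended state vector Z_p = (x_p, U_p, U_s) that the dynamic PDF formulation evolves: an Ornstein-Uhlenbeck Langevin model for the fluid velocity seen, plus linear drag on the particle. Timescales and coefficients are illustrative assumptions, not a calibrated turbulence model.

```python
# Langevin model for the fluid velocity seen (OU process) driving a
# dispersed particle through Stokes-like drag; 1D for clarity.
import numpy as np

rng = np.random.default_rng(6)
T_L, sigma, tau_p = 0.5, 1.0, 0.2    # fluid timescale, rms velocity, drag time
dt, n_steps, n_particles = 1e-3, 5000, 2000

x = np.zeros(n_particles)                   # particle position x_p
up = np.zeros(n_particles)                  # particle velocity U_p
us = rng.normal(0.0, sigma, n_particles)    # fluid velocity seen U_s

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_particles)
    us += -us / T_L * dt + np.sqrt(2.0 * sigma**2 / T_L) * dW   # OU update
    up += (us - up) / tau_p * dt                                # drag
    x += up * dt

print(f"dispersion <x_p^2> = {np.mean(x**2):.3f} after t = {n_steps * dt:.1f} s")
```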

  8. Potential energy function of CN-

    Czech Academy of Sciences Publication Activity Database

    Špirko, Vladimír; Polák, Rudolf

    2008-01-01

    Roč. 248, č. 1 (2008), s. 77-80 ISSN 0022-2852 R&D Projects: GA MŠk LC512; GA AV ČR IAA400550511; GA AV ČR IAA400400504 Institutional research plan: CEZ:AV0Z40550506; CEZ:AV0Z40400503 Keywords: potential energy curve * fundamental transition * spectroscopic constants Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.636, year: 2008

  9. Non-monotonic probability of thermal reversal in thin-film biaxial nanomagnets with small energy barriers

    Directory of Open Access Journals (Sweden)

    N. Kani

    2017-05-01

    Full Text Available The goal of this paper is to investigate the short-time-scale, thermally-induced probability of magnetization reversal for a thin-film nanomagnet characterized by a biaxial magnetic anisotropy. For the first time, we clearly show that for a given energy barrier of the nanomagnet, the magnetization reversal probability of a biaxial nanomagnet exhibits a non-monotonic dependence on its saturation magnetization. Specifically, there are two reasons for this non-monotonic behavior in rectangular thin-film nanomagnets that have a large perpendicular magnetic anisotropy. First, a large perpendicular anisotropy lowers the precessional period of the magnetization, making it more likely to precess across the x = 0 plane if the magnetization energy exceeds the energy barrier. Second, the thermal-field torque at a particular energy increases as the magnitude of the perpendicular anisotropy increases during the magnetization precession. This non-monotonic behavior is most noticeable when analyzing magnetization reversals on time-scales of up to several tens of ns. In light of the several proposals for spintronic devices that require data retention on time-scales of up to tens of ns, understanding the probability of magnetization reversal on short time-scales is important. As such, the results presented in this paper will be helpful in quantifying the reliability and noise sensitivity of spintronic devices, in which thermal noise is inevitably present.

  10. Risk Probabilities

    DEFF Research Database (Denmark)

    Rojas-Nandayapa, Leonardo

    Tail probabilities of sums of heavy-tailed random variables are of major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory and Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think … analytic expression for the distribution function of a sum of random variables. The presence of heavy-tailed random variables complicates the problem even more. The objective of this dissertation is to provide better approximations by means of sharp asymptotic expressions and Monte Carlo estimators …

  11. A Dual Function Energy Store

    Directory of Open Access Journals (Sweden)

    Ron Tolmie

    2014-11-01

    Full Text Available Heat can be collected from local energy sources and concentrated into a relatively small volume, and at a useful working temperature, by using a heat pump as the concentrator. That heat can be stored and utilized at a later date for applications like space heating. The process is doing two things at the same time: storing heat and shifting the power demand. The concentration step can be done at night when there is normally a surplus of power and its timing can be directly controlled by the power grid operator to ensure that the power consumption occurs only when adequate power is available. The sources of heat can be the summer air, the heat extracted from buildings by their cooling systems, natural heat from the ground or solar heat, all of which are free, abundant and readily accessible. Such systems can meet the thermal needs of buildings while at the same time stabilizing the grid power demand, thus reducing the need for using fossil-fuelled peaking power generators. The heat pump maintains the temperature of the periphery at the ambient ground temperature so very little energy is lost during storage.

  12. Quantum Probabilities as Behavioral Probabilities

    Directory of Open Access Journals (Sweden)

    Vyacheslav I. Yukalov

    2017-03-01

    Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.

  13. Influence of the level of fit of a density probability function to wind-speed data on the WECS mean power output estimation

    International Nuclear Information System (INIS)

    Carta, Jose A.; Ramirez, Penelope; Velazquez, Sergio

    2008-01-01

    Static methods which are based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature related to wind energy. In the static method which we use in this paper, for a given wind regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral, usually using numerical evaluation techniques, of the product of these two functions. In this paper an analysis is made of the influence of the level of fit between an empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error ε made in the estimation of the mean annual power output of a WECS. The mean power output calculated through the use of a quasi-dynamic or chronological method, that is to say using time-series of wind speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R² statistic (Rₐ²). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) that the Rₐ² statistic might be useful as an initial gross indicator of the relative error made in the mean annual power output estimation of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as Rₐ² increases.

  14. Influence of the level of fit of a density probability function to wind-speed data on the WECS mean power output estimation

    Energy Technology Data Exchange (ETDEWEB)

    Carta, Jose A. [Department of Mechanical Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Ramirez, Penelope; Velazquez, Sergio [Department of Renewable Energies, Technological Institute of the Canary Islands, Pozo Izquierdo Beach s/n, 35119 Santa Lucia, Gran Canaria, Canary Islands (Spain)

    2008-10-15

    Static methods which are based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature related to wind energy. In the static method which we use in this paper, for a given wind regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral, usually using numerical evaluation techniques, of the product of these two functions. In this paper an analysis is made of the influence of the level of fit between an empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error ε made in the estimation of the mean annual power output of a WECS. The mean power output calculated through the use of a quasi-dynamic or chronological method, that is to say using time-series of wind speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R² statistic (Rₐ²). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) that the Rₐ² statistic might be useful as an initial gross indicator of the relative error made in the mean annual power output estimation of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as Rₐ² increases. (author)
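
    A minimal sketch of the static method both records describe: the mean power output as the integral of a fitted wind-speed density times the turbine power curve. The Weibull fit and the generic 330 kW class power curve are assumptions for illustration, not the Canarian data or the turbines used in the paper.

```python
# Mean WECS power output: E[P] = integral of P(v) * f(v) dv.
import numpy as np
from scipy import stats, integrate

# Hypothetical Weibull fit to hourly mean wind speeds (shape k, scale c m/s)
k, c = 2.0, 8.0
wind_pdf = stats.weibull_min(k, scale=c).pdf

def power_curve(v, v_in=3.5, v_rated=13.0, v_out=25.0, p_rated=330.0):
    # Generic cut-in / cubic ramp / rated / cut-out curve, in kW (assumed)
    v = np.asarray(v, dtype=float)
    ramp = p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)
    p = np.where((v >= v_in) & (v < v_rated), ramp, 0.0)
    return np.where((v >= v_rated) & (v <= v_out), p_rated, p)

mean_power, _ = integrate.quad(lambda v: power_curve(v) * wind_pdf(v),
                               0.0, 30.0, points=[3.5, 13.0, 25.0])
print(f"mean power ~ {mean_power:.1f} kW "
      f"(capacity factor {mean_power / 330.0:.2f})")
```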

  15. Universal Nuclear Energy Density Functional

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, Joseph; Furnstahl, Richard; Horoi, Mihai; Lusk, Rusty; Nazarewicz, Witold; Ng, Esmond; Thompson, Ian; Vary, James

    2012-12-01

    An understanding of the properties of atomic nuclei is crucial for a complete nuclear theory, for element formation, for properties of stars, and for present and future energy and defense applications. During the period of Dec. 1 2006 – Jun. 30, 2012, the UNEDF collaboration carried out a comprehensive study of all nuclei, based on the most accurate knowledge of the strong nuclear interaction, the most reliable theoretical approaches, the most advanced algorithms, and extensive computational resources, with a view towards scaling to the petaflop platforms and beyond. Until recently such an undertaking was hard to imagine, and even at the present time such an ambitious endeavor would be far beyond what a single researcher or a traditional research group could carry out.

  16. Probability of spin flipping of proton with energy 6.9 MeV at inelastic scattering with ⁵⁴,⁵⁶Fe nuclei

    International Nuclear Information System (INIS)

    Prokopenko, V.S.; Sklyarenko, V.; Chernievskij, V.K.; Shustov, A.V.

    1980-01-01

    Spin-orbital effects in the inelastic scattering of protons by medium-mass nuclei are investigated, along with the reaction mechanisms, by measuring proton spin flip. The experiment consists in measuring proton-gamma coincidences in mutually perpendicular planes with the fast-slow coincidence technique. The excitation function of the ⁵⁶Fe(p,p′) reaction is measured in the 3.5-6.2 MeV energy range. Angular dependences of the probability of proton spin flip (2⁺ level, 0.847 MeV) are measured at incident proton energies of 4.96, 5.58 and 5.88 MeV. Measurements of the probability of proton spin flip in inelastic scattering by ⁵⁴,⁵⁶Fe nuclei are performed in the course of studying spin-orbital effects and reaction mechanisms. It is concluded that the inelastic scattering process in the energy range under investigation proceeds mainly through two equivalent mechanisms: direct interaction and formation of a compound nucleus. The angular dependences for ⁵⁴Fe and ⁵⁶Fe differ noticeably in the values of the spin-flip probability in the angular range of 50-150 deg.

  17. Building a universal nuclear energy density functional

    International Nuclear Information System (INIS)

    Bertsch, G F

    2007-01-01

    This talk describes a new project in SciDAC II in the area of low-energy nuclear physics. The motivation and goals of the SciDAC are presented, as well as an outline of the theoretical and computational methodology that will be employed. An important motivation is to have more accurate and reliable predictions of nuclear properties, including their binding energies and low-energy reaction rates. The theoretical basis is provided by density functional theory, which is the only available theory that can be systematically applied to all nuclei. However, other methodologies based on wave-function methods are needed to refine the functionals and to make applications to dynamic processes.

  18. An investigation of student understanding of classical ideas related to quantum mechanics: Potential energy diagrams and spatial probability density

    Science.gov (United States)

    Stephanik, Brian Michael

    This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.

  19. Probable Effects Of Exposure To Electromagnetic Waves Emitted From Video Display Terminals On Ocular Functions

    International Nuclear Information System (INIS)

    Ahmed, M.A.

    2013-01-01

    There is a growing body of evidence that usage of computers can adversely affect visual health. Considering the rising number of computer users in Egypt, computer-related visual symptoms might take an epidemic form. In view of that, this study was undertaken to find out the magnitude of the visual problems in computer operators and their relationship with various personal and workplace factors. Aim: To evaluate the probable effects of exposure to electromagnetic waves radiated from visual display terminals on some visual functions. Subjects and Methods: One hundred fifty computer operators working in different institutes were randomly selected. They were asked to fill in a pre-tested questionnaire (written in Arabic), after obtaining their verbal consent. The selected exposed subjects underwent the following clinical assessment: 1- Visual acuity measurement. 2- Refraction (using an autorefractometer). 3- Measurement of ocular dryness defects using the following diagnostic tests: Schirmer test, fluorescein staining, Rose Bengal staining, Tear Break Up Time (TBUT) and the LIPCOF test (lid-parallel conjunctival folds). A control group included one hundred fifty participants working in fields that do not necessitate exposure to video display terminals. Inclusion criteria for the subjects were as follows: a minimum of three symptoms of computer vision syndrome (CVS), a minimum of one year of exposure to VDTs, and a minimum of 6 h/day on 5 working days/week. Exclusion criteria included candidates with ocular pathology such as glaucoma, optic atrophy, diabetic retinopathy or papilledema. The following complaints were studied: 1- Tired eyes. 2- Burning eyes with excessive tear production. 3- Dry sore eyes. 4- Blurred near vision (letters on the screen run together). 5- Asthenopia. 6- Neck, shoulder and back aches, overall bodily fatigue or tiredness. An interventional protective measure for the selected subjects from the exposed group was administered; it included the following (1 …

  20. Joint Probability Distribution Function for the Electric Microfield and its Ion-Octupole Inhomogeneity Tensor

    International Nuclear Information System (INIS)

    Halenka, J.; Olchawa, W.

    2005-01-01

    From experiments (see e.g. [W. Wiese, D. Kelleher, and D. Paquette, Phys. Rev. A 6, 1132 (1972); V. Helbig and K. Nich, J. Phys. B 14, 3573 (1981); J. Halenka, Z. Phys. D 16, 1 (1990); Djurovic, D. Nikolic, I. Savic, S. Sorge, and A.V. Demura, Phys. Rev. E 71, 036407 (2005)]), it follows that hydrogen lines formed in plasmas with Nₑ ≥ 10¹⁶ cm⁻³ are asymmetrical. The inhomogeneity of the ionic microfield and the higher-order corrections (quadratic and beyond) in perturbation theory are the reasons for this asymmetry. So far, the ion-emitter quadrupole interaction and the quadratic Stark effect have been included in calculations. Recent work shows that a significant discrepancy between calculations and measurements occurs in the wings of the H-beta line in plasmas with … cm⁻³. It should be stressed here that, e.g., for the energy operator the correction raised by the quadratic Stark effect is proportional to … (where … is the emitter-perturber distance), similarly to the correction caused by the emitter-perturber octupole interaction and the quadratic correction from the emitter-perturber quadrupole interaction. Thus, a model of the profile calculation is consistent only if all the aforementioned corrections are included simultaneously. Such calculations are planned in a future paper. The statistics of the octupole inhomogeneity tensor in a plasma are needed as the first step of such calculations. In this paper, the distribution functions of the octupole inhomogeneity have been calculated for the first time, using the Mayer-Mayer cluster expansion method, similarly to the quadrupole function in the paper [J. Halenka, Z. Phys. D 16, 1 (1990)]. The quantity … is the reduced scale of the microfield strength, where … is the Holtsmark normal field and … is the mean distance defined by the relationship …, which is approximately equal to the mean ion-ion distance; … is the screening parameter, where … is the electronic Debye radius. (author)

  1. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  2. Failures probability calculation of the energy supply of the Angra-1 reactor rods assembly

    International Nuclear Information System (INIS)

    Borba, P.R.

    1978-01-01

    This work analyses the electric power system of the Angra I PWR plant. It is demonstrated that this system is closely coupled with the safety engineering features, that is, the equipment provided to prevent, limit, or mitigate the release of radioactive material and to permit safe reactor shutdown. Event trees are used to analyse the operation of those systems whose failure can lead to the release of radioactivity following a specified initiating event. The fault tree technique is used to calculate the failure probability of the on-site electric power system.
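
    The record does not reproduce its trees, so the snippet below only illustrates the kind of gate arithmetic a fault-tree evaluation of an on-site power system involves; the events and all probabilities are invented for illustration.

```python
# Toy fault-tree arithmetic: loss of on-site AC power modeled as loss of
# offsite power AND failure of both redundant diesel generators.
p_offsite = 1e-2   # loss of offsite power on demand (assumed)
p_dg = 5e-2        # one diesel generator fails to start/run (assumed)
p_ccf = 5e-3       # common-cause failure of both generators (assumed)

# OR gate for independent events: P(A or B) = 1 - (1 - P(A)) * (1 - P(B))
p_both_dg = 1.0 - (1.0 - p_dg * p_dg) * (1.0 - p_ccf)  # independent pair OR CCF
p_blackout = p_offsite * p_both_dg                      # AND gate
print(f"P(loss of all AC power on demand) ~ {p_blackout:.2e}")
```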

  3. Sensitivity analysis of limit state functions for probability-based plastic design

    Science.gov (United States)

    Frangopol, D. M.

    1984-01-01

    The evaluation of the total probability P_f of plastic collapse failure for a highly redundant structure of random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds on this probability requires the use of second-moment algebra, which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between the upper and lower bounds of P_f is now in its final stage of development. The relative importance of the various uncertainties involved in the computational process on the resulting bounds of P_f is analyzed through sensitivity analysis. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.

  4. Protein distance constraints predicted by neural networks and probability density functions

    DEFF Research Database (Denmark)

    Lund, Ole; Frimand, Kenneth; Gorodkin, Jan

    1997-01-01

    We predict interatomic C-α distances by two independent data-driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the latter method consists of a neural network prediction approach equipped with windows taking … method based on the predicted distances is presented. A homepage with software, predictions and data related to this paper is available at http://www.cbs.dtu.dk/services/CPHmodels/

  5. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    Science.gov (United States)

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  6. Fermi-Dirac function and energy gap

    OpenAIRE

    Bondarev, Boris

    2013-01-01

    The mean-field method is applied to study valence-electron behavior in metals. When electrons with different wave vectors attract one another at low temperatures, the distribution function becomes discontinuous. As a result, a specific energy gap occurs.

  7. Economic modelling of energy services: Rectifying misspecified energy demand functions

    International Nuclear Information System (INIS)

    Hunt, Lester C.; Ryan, David L.

    2015-01-01

    … estimation of an aggregate energy demand function for the UK with data over the period 1960–2011. - Highlights: • Introduces explicit modelling of demands for energy services • Derives estimable energy demand equations from energy service demands • Demonstrates the implicit misspecification with typical energy demand equations • Empirical implementation using aggregate and individual energy source data • Illustrative empirical example using UK data and energy efficiency modelling

  8. Problems in probability theory, mathematical statistics and theory of random functions

    CERN Document Server

    Sveshnikov, A A

    1979-01-01

    Problem solving is the main thrust of this excellent, well-organized workbook. Suitable for students at all levels in probability theory and statistics, the book presents over 1,000 problems and their solutions, illustrating fundamental theory and representative applications in the following fields: Random Events; Distribution Laws; Correlation Theory; Random Variables; Entropy & Information; Markov Processes; Systems of Random Variables; Limit Theorems; Data Processing; and more. The coverage of topics is both broad and deep, ranging from the most elementary combinatorial problems through limit theorems.

  9. Protein single-model quality assessment by feature-based probability density functions.

    Science.gov (United States)

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses these errors to estimate the probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. This good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability-density-distribution-based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The web server of Qprob is available at http://calla.rnet.missouri.edu/qprob/, where the software is also freely available.

  10. Maximizing the spectral and energy efficiency of ARQ with a fixed outage probability

    KAUST Repository

    Hadjtaieb, Amir; Chelli, Ali; Alouini, Mohamed-Slim

    2015-01-01

    This paper studies the spectral and energy efficiency of automatic repeat request (ARQ) in Nakagami-m block-fading channels. The source encodes each packet into L similar sequences and transmits them to the destination in the L subsequent time slots …

  11. 76 FR 15268 - Guidelines for Determining Probability of Causation Under the Energy Employees Occupational...

    Science.gov (United States)

    2011-03-21

    … radiation exposure and CLL mortality. … Another limitation stems from the low incidence of CLL … treat chronic lymphocytic leukemia (CLL) as a radiogenic cancer under the Energy Employees Occupational … regulations in 2002, all types of cancers except for CLL are treated as being potentially caused by radiation …

  12. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies of the probability density evolution analysis of the wind-induced structure have been conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
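
    A sketch of the original spectral representation (OSR) that the paper starts from: a stationary Gaussian process synthesized from a one-sided power spectrum with independent random phases. The paper's contribution, constraining all phases through random functions of just two elementary random variables, is only indicated in a comment; the spectrum and all constants are assumed.

```python
# Classical spectral representation of a stationary Gaussian process:
# x(t) = sum_k sqrt(2 * S(w_k) * dw) * cos(w_k * t + phi_k).
import numpy as np

rng = np.random.default_rng(2)

def psd(w):
    # Hypothetical one-sided power spectral density (low-pass shape)
    return 1.0 / (1.0 + (w / 2.0) ** 4)

N, dw = 512, 0.05
wk = (np.arange(N) + 0.5) * dw
phi = rng.uniform(0.0, 2.0 * np.pi, size=N)   # OSR: N independent phases
# The random-function approach would instead set phi_k = f_k(Theta1, Theta2),
# reducing the N phase variables to two elementary random variables.

t = np.linspace(0.0, 200.0, 4000)
amp = np.sqrt(2.0 * psd(wk) * dw)
x = (amp[None, :] * np.cos(np.outer(t, wk) + phi)).sum(axis=1)
print(f"sample variance {x.var():.3f} vs target {psd(wk).sum() * dw:.3f}")
```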

  13. Nonlocal kinetic-energy-density functionals

    International Nuclear Information System (INIS)

    Garcia-Gonzalez, P.; Alvarellos, J.E.; Chacon, E.

    1996-01-01

    In this paper we present nonlocal kinetic-energy functionals T[n] within the average density approximation (ADA) framework, which do not require any extra input when applied to any electron system and recover the exact kinetic energy and the linear response function of a homogeneous system. In contrast with previous ADA functionals, these present good behavior of the long-range tail of the exact weight function. The averaging procedure for the kinetic functional (averaging the Fermi momentum of the electron gas, instead of averaging the electron density) leads to a functional without numerical difficulties in the calculation of extended systems, and it gives excellent results when applied to atoms and jellium surfaces. copyright 1996 The American Physical Society

  14. Do key dimensions of seed and seedling functional trait variation capture variation in recruitment probability?

    Science.gov (United States)

    1. Plant functional traits provide a mechanistic basis for understanding ecological variation among plant species and the implications of this variation for species distribution, community assembly and restoration. 2. The bulk of our functional trait understanding, however, is centered on traits rel...

  15. Maximizing the spectral and energy efficiency of ARQ with a fixed outage probability

    KAUST Repository

    Hadjtaieb, Amir

    2015-10-05

    This paper studies the spectral and energy efficiency of automatic repeat request (ARQ) in Nakagami-m block-fading channels. The source encodes each packet into L similar sequences and transmits them to the destination in the L subsequent time slots. The destination combines the L sequences using maximal ratio combining and tries to decode the information. In case of decoding failure, the destination feeds back a negative acknowledgment and then the source sends the same L sequences to the destination. This process continues until successful decoding occurs at the destination with no limit on the number of retransmissions. We consider two optimization problems. In the first problem, we maximize the spectral efficiency of the system with respect to the rate for a fixed power. In the second problem, we maximize the energy efficiency with respect to the transmitted power for a fixed rate. © 2015 IEEE.
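
    A Monte Carlo sketch of the protocol the abstract describes, under the assumption (made explicit in the comments) that the destination keeps combining all received copies across retransmission rounds; per-slot SNRs in Nakagami-m fading are Gamma distributed. The parameter values are arbitrary.

```python
# Throughput of L-fold repetition ARQ with MRC in Nakagami-m fading.
# Per-slot instantaneous SNR ~ Gamma(shape=m, scale=gamma_bar/m).
import numpy as np

rng = np.random.default_rng(3)
m, gamma_bar, L, R = 2.0, 10.0, 2, 3.0  # fading shape, mean SNR, slots, b/s/Hz

def slots_until_success(max_rounds=50):
    snr = 0.0
    for k in range(1, max_rounds + 1):
        # One round adds L i.i.d. Gamma SNRs (MRC over the L slots); we
        # assume combining also accumulates across retransmission rounds.
        snr += rng.gamma(shape=m, scale=gamma_bar / m, size=L).sum()
        if np.log2(1.0 + snr) >= R:      # decoding succeeds
            return k * L
    return max_rounds * L

n = 20_000
slots = np.array([slots_until_success() for _ in range(n)])
print(f"mean slots per packet {slots.mean():.2f}; "
      f"spectral efficiency ~ {R / slots.mean():.3f} b/s/Hz")
```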

  16. Social representation for future teachers on the nuclear energy: probable implications of the public opinion

    International Nuclear Information System (INIS)

    Ayllon, Rafaella Menezes; Farias, Luciana Aparecida; Favaro, Deborah I.T.

    2013-01-01

    This study aimed to investigate the social representation (SR) of 'Nuclear Energy' (NE) and 'Nuclear Chemistry' (NC) held by science undergraduates of the Federal University of Sao Paulo - UNIFESP. Individual questionnaires on the topic were applied, followed by the presentation of seminars focused on the research theme. The methodology used was the free-word evocation technique (Abric, 1994), which gives the frequency of each mentioned element and its average order of evocation, together with a semi-structured questionnaire. Among the first results, it was found that the words 'Bomb' and 'Reactor' were the most mentioned by the group when asked for evocations related to 'NE', while the terms 'Health' and 'Safety' were among the least mentioned. For 'NC' the most frequent terms were 'Chemistry' and 'Atoms/Elements', while 'Reactor' and 'Development' were less frequent. However, even though the probable central-core elements correspond to a negative SR of the theme, these students indicated nuclear energy as a strong option for diversifying the Brazilian energy matrix.

  17. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for the prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and training of navigators, the presence of a look-out, etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds … probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of the energy released for crushing of structures giving …

  18. Exponential functionals of Brownian motion, I: Probability laws at fixed time

    OpenAIRE

    Matsumoto, Hiroyuki; Yor, Marc

    2005-01-01

    This paper is the first part of our survey on various results about the distribution of exponential type Brownian functionals defined as an integral over time of geometric Brownian motion. Several related topics are also mentioned.

  19. Functional materials for energy-efficient buildings

    Science.gov (United States)

    Ebert, H.-P.

    2015-08-01

    A substantial improvement in energy efficiency is essential to meeting the ambitious energy goals of the EU. About 40% of European energy consumption is attributable to the building sector. The reduction of the energy demand of the existing building stock is therefore one of the key measures for delivering a substantial reduction in our society's CO2 emissions. Buildings of the future have to be efficient with respect to the energy consumed in both construction and operation. Current research activities are focused on the development of functional materials with outstanding thermal and optical properties that provide, for example, slim thermally superinsulated facades, highly integrated heat storage systems or adaptive building components. In this context it is important to consider buildings as entities which fulfill energy and comfort demands as well as the aesthetic aspects of a sustainable architecture.

  20. Functional materials for energy-efficient buildings

    Directory of Open Access Journals (Sweden)

    Ebert H.-P

    2015-01-01

    Full Text Available A substantial improvement in energy efficiency is essential to meeting the ambitious energy goals of the EU. About 40% of European energy consumption is attributable to the building sector. The reduction of the energy demand of the existing building stock is therefore one of the key measures for delivering a substantial reduction in our society's CO2 emissions. Buildings of the future have to be efficient with respect to the energy consumed in both construction and operation. Current research activities are focused on the development of functional materials with outstanding thermal and optical properties that provide, for example, slim thermally superinsulated facades, highly integrated heat storage systems or adaptive building components. In this context it is important to consider buildings as entities which fulfill energy and comfort demands as well as the aesthetic aspects of a sustainable architecture.

  1. SURFACE SYMMETRY ENERGY OF NUCLEAR ENERGY DENSITY FUNCTIONALS

    Energy Technology Data Exchange (ETDEWEB)

    Nikolov, N; Schunck, N; Nazarewicz, W; Bender, M; Pei, J

    2010-12-20

    We study the bulk deformation properties of the Skyrme nuclear energy density functionals. Following simple arguments based on the leptodermous expansion and liquid drop model, we apply the nuclear density functional theory to assess the role of the surface symmetry energy in nuclei. To this end, we validate the commonly used functional parametrizations against the data on excitation energies of superdeformed band-heads in Hg and Pb isotopes, and fission isomers in actinide nuclei. After subtracting shell effects, the results of our self-consistent calculations are consistent with macroscopic arguments and indicate that experimental data on strongly deformed configurations in neutron-rich nuclei are essential for optimizing future nuclear energy density functionals. The resulting survey provides a useful benchmark for further theoretical improvements. Unlike in nuclei close to the stability valley, whose macroscopic deformability hangs on the balance of surface and Coulomb terms, the deformability of neutron-rich nuclei strongly depends on the surface-symmetry energy; hence, its proper determination is crucial for the stability of deformed phases of the neutron-rich matter and description of fission rates for r-process nucleosynthesis.

  2. Determination of probability density functions for parameters in the Munson-Dawson model for creep behavior of salt

    International Nuclear Information System (INIS)

    Pfeifle, T.W.; Mellegard, K.D.; Munson, D.E.

    1992-10-01

    The modified Munson-Dawson (M-D) constitutive model that describes the creep behavior of salt will be used in performance assessment calculations to assess compliance of the Waste Isolation Pilot Plant (WIPP) facility with requirements governing the disposal of nuclear waste. One of these standards requires that the uncertainty of future states of the system, material model parameters, and data be addressed in the performance assessment models. This paper presents a method in which measurement uncertainty and the inherent variability of the material are characterized by treating the M-D model parameters as random variables. The random variables can be described by appropriate probability distribution functions which then can be used in Monte Carlo or structural reliability analyses. Estimates of three random variables in the M-D model were obtained by fitting a scalar form of the model to triaxial compression creep data generated from tests of WIPP salt. Candidate probability distribution functions for each of the variables were then fitted to the estimates and their relative goodness-of-fit tested using the Kolmogorov-Smirnov statistic. A sophisticated statistical software package obtained from BMDP Statistical Software, Inc. was used in the M-D model fitting. A separate software package, STATGRAPHICS, was used in fitting the candidate probability distribution functions to estimates of the variables. Skewed distributions, i.e., lognormal and Weibull, were found to be appropriate for the random variables analyzed
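
    A sketch of the statistical step the abstract describes: treat a creep-model parameter as a random variable, fit candidate skewed distributions to its estimates, and compare them with the Kolmogorov-Smirnov statistic. The "estimates" below are synthetic draws, not the WIPP triaxial-test fits.

```python
# Fit candidate distributions to parameter estimates and rank them by the
# Kolmogorov-Smirnov statistic (smaller D = better fit). Synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
estimates = rng.lognormal(mean=1.0, sigma=0.4, size=40)   # stand-in M-D fits

for name, dist in [("lognormal", stats.lognorm), ("Weibull", stats.weibull_min)]:
    params = dist.fit(estimates, floc=0.0)   # fix the location at zero
    d, p = stats.kstest(estimates, dist.cdf, args=params)
    # Note: the KS p-value is optimistic when parameters are fitted to the
    # same sample; it is used here only to compare relative goodness of fit.
    print(f"{name:>9}: D = {d:.3f}, p = {p:.3f}")
```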

  3. The Probability of Neonatal Respiratory Distress Syndrome as a Function of Gestational Age and Lecithin/Sphingomyelin Ratio

    Science.gov (United States)

    St. Clair, Caryn; Norwitz, Errol R.; Woensdregt, Karlijn; Cackovic, Michael; Shaw, Julia A.; Malkus, Herbert; Ehrenkranz, Richard A.; Illuzzi, Jessica L.

    2011-01-01

    We sought to define the risk of neonatal respiratory distress syndrome (RDS) as a function of both lecithin/sphingomyelin (L/S) ratio and gestational age. Amniotic fluid L/S ratio data were collected from consecutive women undergoing amniocentesis for fetal lung maturity at Yale-New Haven Hospital from January 1998 to December 2004. Women were included in the study if they delivered a live-born, singleton, nonanomalous infant within 72 hours of amniocentesis. The probability of RDS was modeled using multivariate logistic regression with L/S ratio and gestational age as predictors. A total of 210 mother-neonate pairs (8 RDS, 202 non-RDS) met criteria for analysis. Both gestational age and L/S ratio were independent predictors of RDS. A probability of RDS of 3% or less was noted at an L/S ratio cutoff of ≥3.4 at 34 weeks, ≥2.6 at 36 weeks, ≥1.6 at 38 weeks, and ≥1.2 at term. Under 34 weeks of gestation, the prevalence of RDS was so high that a probability of 3% or less was not observed by this model. These data describe a means of stratifying the probability of neonatal RDS using both gestational age and the L/S ratio and may aid in clinical decision making concerning the timing of delivery. PMID:18773379
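
    A sketch of the two-predictor logistic model described above. The coefficients below are not the paper's (it reports the resulting cutoffs, not the fitted coefficients); they are placeholder values chosen so that the quoted cutoff pairs land near the 3% probability contour.

```python
# Probability of RDS from gestational age (weeks) and L/S ratio via a
# two-predictor logistic model. Coefficients are illustrative placeholders.
import numpy as np

def p_rds(ga_weeks, ls_ratio, b0=38.0, b_ga=-1.0, b_ls=-2.2):
    # logit(p) = b0 + b_ga * GA + b_ls * (L/S)
    logit = b0 + b_ga * ga_weeks + b_ls * ls_ratio
    return 1.0 / (1.0 + np.exp(-logit))

for ga, ls in [(34, 3.4), (36, 2.6), (38, 1.6), (40, 1.2)]:
    print(f"GA={ga} wk, L/S={ls}: P(RDS) ~ {p_rds(ga, ls):.3f}")
```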

  4. An Improvement to DCPT: The Particle Transfer Probability as a Function of Particle's Age

    International Nuclear Information System (INIS)

    L. Pan; G. S. Bodvarsson

    2001-01-01

    Multi-scale features of transport processes in fractured porous media make numerical modeling a difficult task, in terms of both conceptualization and computation. The dual-continuum particle tracker (DCPT) is an attractive method for modeling the large-scale problems typically encountered in the field, such as those in the unsaturated zone (UZ) of Yucca Mountain, Nevada. Its major advantage is its capability to capture the major features of flow and transport in fractured porous rock (i.e., a fast fracture sub-system combined with a slow matrix sub-system) with reasonable computational resources. However, like other numerical methods based on the conventional dual-continuum approach, DCPT (v1.0) is often criticized for failing to capture the transient features of the diffusion depth into the matrix. It may overestimate the transport of tracers through the fractures, especially in cases with large fracture spacing, and predict artificially early breakthroughs. The objective of this study is to develop a new theory for calculating the particle transfer probability that captures the transient features of the diffusion depth into the matrix within the framework of the dual-continuum random walk particle method (RWPM).

  5. Characterizing the Lyman-alpha forest flux probability distribution function using Legendre polynomials

    Science.gov (United States)

    Cieplak, Agnieszka; Slosar, Anze

    2018-01-01

    The Lyman-alpha forest has become a powerful cosmological probe at intermediate redshift. It is a highly non-linear field with much information present beyond the power spectrum. The flux probability distribution function (PDF) in particular has been a successful probe of small-scale physics. However, it is also sensitive to pixel noise, spectral resolution, and continuum fitting, all of which can bias the estimators. Here we argue that measuring the coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values, as is commonly done. Since the n-th Legendre coefficient can be expressed as a linear combination of the first n moments of the field, the coefficients can be measured in the presence of noise, and this provides a clear route towards marginalization over the mean flux. Additionally, in the presence of noise, a finite number of these coefficients are well measured, with a very sharp transition into noise dominance. This compresses the information into a small number of well-measured quantities. Finally, we find that measuring fewer quasars with high signal-to-noise yields a greater amount of recoverable information.
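
    A sketch of the estimator the abstract suggests: because the n-th Legendre coefficient is the expectation of P_n over the field, it can be read off directly from flux samples as a combination of the first n moments. The beta-distributed synthetic "flux" is a stand-in for real pixels.

```python
# Estimate Legendre coefficients of the flux PDF from samples and
# reconstruct the PDF as p(x) = sum_n (2n+1)/2 * c_n * P_n(x) on [-1, 1].
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(5)
flux = rng.beta(4.0, 1.5, size=200_000)   # synthetic 0 <= F <= 1 pixel fluxes
x = 2.0 * flux - 1.0                      # map the flux range onto [-1, 1]

n_max = 8
c = [Legendre.basis(n)(x).mean() for n in range(n_max)]   # c_n = E[P_n(X)]
pdf_series = Legendre([(2 * n + 1) / 2.0 * cn for n, cn in enumerate(c)])

grid = np.linspace(-0.9, 0.9, 7)
print("estimated coefficients:", np.round(c, 3))
print("reconstructed p(x):   ", np.round(pdf_series(grid), 3))
```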

  6. Energy harvesting with functional materials and microsystems

    CERN Document Server

    Bhaskaran, Madhu; Iniewski, Krzysztof

    2013-01-01

    For decades, people have searched for ways to harvest energy from natural sources. Lately, a desire to address the issue of global warming and climate change has popularized solar or photovoltaic technology, while piezoelectric technology is being developed to power handheld devices without batteries, and thermoelectric technology is being explored to convert wasted heat, such as in automobile engine combustion, into electricity. Featuring contributions from international researchers in both academia and industry, Energy Harvesting with Functional Materials and Microsystems explains the growing …

  7. Damage energy functions for compounds and alloys

    International Nuclear Information System (INIS)

    Parkin, D.M.; Coulter, C.A.

    1977-01-01

    The concept of the damage energy of an energetic primary knock-on atom in a material is a central component of the procedure used to calculate dpa for metals exposed to neutron and charged-particle radiation. Coefficients for analytic fits to the calculated damage energy functions are given for Al₂O₃, Si₃N₄, Y₂O₃, and NbTi. Damage efficiencies are given for Al₂O₃.

  8. Cognitive Functioning and the Probability of Falls among Seniors in Havana, Cuba

    Science.gov (United States)

    Trujillo, Antonio J.; Hyder, Adnan A.; Steinhardt, Laura C.

    2011-01-01

    This article explores the connection between cognitive functioning and falls among seniors (greater than or equal to 60 years of age) in Havana, Cuba, after controlling for observable characteristics. Using the SABE (Salud, Bienestar, and Envejecimiento) cross-sectional database, we used an econometric strategy that takes advantage of available…

  9. Fusing probability density function into Dempster-Shafer theory of evidence for the evaluation of water treatment plant.

    Science.gov (United States)

    Chowdhury, Shakhawat

    2013-05-01

    The evaluation of the status of a municipal drinking water treatment plant (WTP) is important. The evaluation depends on several factors, including, human health risks from disinfection by-products (R), disinfection performance (D), and cost (C) of water production and distribution. The Dempster-Shafer theory (DST) of evidence can combine the individual status with respect to R, D, and C to generate a new indicator, from which the overall status of a WTP can be evaluated. In the DST, the ranges of different factors affecting the overall status are divided into several segments. The basic probability assignments (BPA) for each segment of these factors are provided by multiple experts, which are then combined to obtain the overall status. In assigning the BPA, the experts use their individual judgments, which can impart subjective biases in the overall evaluation. In this research, an approach has been introduced to avoid the assignment of subjective BPA. The factors contributing to the overall status were characterized using the probability density functions (PDF). The cumulative probabilities for different segments of these factors were determined from the cumulative density function, which were then assigned as the BPA for these factors. A case study is presented to demonstrate the application of PDF in DST to evaluate a WTP, leading to the selection of the required level of upgradation for the WTP.
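
    A minimal sketch of the evidence-combination step described above: Dempster's rule over a frame of WTP status labels. The basic probability assignments below are invented numbers standing in for the paper's PDF-derived cumulative probabilities.

```python
# Dempster's rule of combination over a small frame of discernment.
from itertools import product

FRAME = frozenset({"good", "fair", "poor"})

def combine(m1, m2):
    # Intersect focal elements and renormalize by 1 - conflict.
    raw, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

# BPAs from three criteria: risk (R), disinfection performance (D), cost (C)
m_R = {frozenset({"good"}): 0.6, frozenset({"good", "fair"}): 0.3, FRAME: 0.1}
m_D = {frozenset({"fair"}): 0.5, frozenset({"good", "fair"}): 0.4, FRAME: 0.1}
m_C = {frozenset({"poor"}): 0.2, frozenset({"fair", "poor"}): 0.5, FRAME: 0.3}

m = combine(combine(m_R, m_D), m_C)
for s, v in sorted(m.items(), key=lambda kv: -kv[1]):
    print(set(s), round(v, 3))
```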

  10. Neuroenergetics: How energy constraints shape brain function

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The nervous system consumes a disproportionate fraction of the resting body's energy production. In humans, the brain represents 2% of the body's mass, yet it accounts for ~20% of the total oxygen consumption. Expansion in the size of the brain relative to the body and an increase in the number of connections between neurons during evolution underpin our cognitive powers and are responsible for our brains' high metabolic rate. The molecules at the center of cellular energy metabolism also act as intercellular signals and constitute an important communication pathway, coordinating for instance the immune surveillance of the brain. Despite the significance of energy consumption in the nervous system, how energy constrains and shapes brain function is often underappreciated. I will illustrate the importance of brain energetics and metabolism with two examples from my recent work. First, I will show how the brain trades information for energy savings in the visual pathway. Indeed, a significant fraction ...

  11. The Lateral Trigger Probability function for UHE Cosmic Rays Showers detected by the Pierre Auger Observatory

    Czech Academy of Sciences Publication Activity Database

    Abreu, P.; Aglietta, M.; Ahn, E.J.; Boháčová, Martina; Chudoba, Jiří; Ebr, Jan; Mandát, Dušan; Nečesal, Petr; Nožka, Libor; Nyklíček, Michal; Palatka, Miroslav; Pech, Miroslav; Prouza, Michael; Řídký, Jan; Schovancová, Jaroslava; Schovánek, Petr; Šmída, Radomír; Trávníček, Petr; Vícha, Jakub

    2011-01-01

    Roč. 35, č. 5 (2011), 266-276 ISSN 0927-6505 R&D Projects: GA MŠk LC527; GA MŠk(CZ) 1M06002; GA AV ČR KJB100100904; GA MŠk(CZ) LA08016 Institutional research plan: CEZ:AV0Z10100502; CEZ:AV0Z10100522 Keywords: trigger * cosmic ray showers Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 3.216, year: 2011 http://www.auger.org/technical_info/pdfs/PerroneLTP_Published.pdf

  12. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD).

    Science.gov (United States)

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-07

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.

  13. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD)

    International Nuclear Information System (INIS)

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-01

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within ∼0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
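
    A worked sketch of the quantities involved may help fix notation. The code below implements the standard Lyman NTCP formula (a standard normal CDF of t = (EUD - TD50)/(m·TD50)) evaluated at a Niemierko EUD, i.e. the model that the paper's exponential function approximates; it does not reproduce the paper's new formula itself, and the organ parameters (n, m, TD50) and dose-volume data are illustrative assumptions, not the tabulated fits.

```python
import numpy as np
from scipy.stats import norm

def eud(doses, volumes, n):
    """Niemierko generalized EUD with Lyman volume parameter n (a = 1/n)."""
    a = 1.0 / n
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                          # fractional volumes
    return float(np.sum(v * np.asarray(doses) ** a) ** (1.0 / a))

def ntcp_lyman(dose, td50, m):
    """Lyman NTCP = standard normal CDF of t = (D - TD50) / (m * TD50)."""
    return float(norm.cdf((dose - td50) / (m * td50)))

# Illustrative non-homogeneous dose-volume data for one organ at risk
d_eq = eud(doses=[20.0, 45.0, 60.0], volumes=[0.5, 0.3, 0.2], n=0.25)
print(f"EUD = {d_eq:.1f} Gy, NTCP = {ntcp_lyman(d_eq, td50=65.0, m=0.14):.3f}")
```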

  14. Uncertainty of Hydrological Drought Characteristics with Copula Functions and Probability Distributions: A Case Study of Weihe River, China

    Directory of Open Access Journals (Sweden)

    Panpan Zhao

    2017-05-01

    Full Text Available This study investigates the sensitivity and uncertainty of hydrological drought frequency and severity in the Weihe Basin, China during 1960–2012, by using six commonly used univariate probability distributions and three Archimedean copulas to fit the marginal and joint distributions of drought characteristics. The Anderson-Darling method is used for testing the goodness-of-fit of the univariate model, and the Akaike information criterion (AIC) is applied to select the best distribution and copula functions. The results demonstrate that there is a very strong correlation between drought duration and drought severity at three stations. The drought return period varies depending on the selected marginal distributions and copula functions and, with an increase of the return period, the differences become larger. In addition, the estimated return periods (both co-occurrence and joint) from the best-fitted copulas are the closest to those from the empirical distribution. Therefore, it is critical to select the appropriate marginal distribution and copula function to model the hydrological drought frequency and severity. The results of this study can not only help drought investigations select a suitable probability distribution and copula function, but are also useful for regional water resource management. However, a few limitations remain in this study, such as the assumption of stationarity of the runoff series.
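
    As an illustration of the copula-based return-period calculation discussed above, the sketch below evaluates the "or" and "and" (co-occurrence) return periods from a Gumbel copula. The fitted marginals, copula parameter theta, and mean drought interarrival time mu are illustrative stand-ins, not values fitted to the Weihe data.

```python
import numpy as np
from scipy import stats

def gumbel_copula(u, v, theta):
    """Gumbel (Archimedean) copula C(u, v), theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Hypothetical best-fit marginals for drought duration and severity
F_dur = stats.gamma(a=2.0, scale=3.0)                  # duration (months)
F_sev = stats.genextreme(c=-0.1, loc=5.0, scale=2.0)   # severity

d, s = 6.0, 8.0          # event of interest
theta, mu = 2.5, 1.2     # copula parameter; mean drought interarrival time (years)

u, v = F_dur.cdf(d), F_sev.cdf(s)
C = gumbel_copula(u, v, theta)
T_or = mu / (1.0 - C)                     # duration > d OR severity > s
T_and = mu / (1.0 - u - v + C)            # duration > d AND severity > s
print(f"T_or = {T_or:.1f} yr, T_and = {T_and:.1f} yr")
```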

  15. Model-based prognostics for batteries which estimates useful life and uses a probability density function

    Science.gov (United States)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)

    2012-01-01

    This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
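
    The particle-filtering idea behind such model-based prognostics can be sketched in a few lines. The toy example below is an assumption-laden illustration, not the patented method: it tracks the rate constant of an exponential capacity-fade model with a bootstrap particle filter and reports the resulting remaining-useful-life distribution; the fade model, noise levels, and 0.5 failure threshold are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_part, n_cyc = 1000, 40
true_k, meas_sd = 0.010, 0.01

# Synthetic capacity data: C(t) = exp(-k t) plus measurement noise
y = np.exp(-true_k * np.arange(n_cyc)) + rng.normal(0.0, meas_sd, n_cyc)

k = rng.uniform(0.001, 0.05, n_part)            # particles over the fade rate
for t, yt in enumerate(y):
    w = np.exp(-0.5 * ((yt - np.exp(-k * t)) / meas_sd) ** 2)
    w /= w.sum()                                # measurement-likelihood weights
    k = k[rng.choice(n_part, n_part, p=w)]      # bootstrap resampling
    k += rng.normal(0.0, 1e-4, n_part)          # jitter keeps particle diversity

# RUL = cycles until capacity crosses the (assumed) 0.5 failure threshold
rul = np.log(1.0 / 0.5) / k - (n_cyc - 1)
print(f"RUL: median {np.median(rul):.0f} cycles, "
      f"90% band [{np.percentile(rul, 5):.0f}, {np.percentile(rul, 95):.0f}]")
```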

  16. Functional data analysis of sleeping energy expenditure

    Science.gov (United States)

    Adequate sleep is crucial during childhood for metabolic health, and physical and cognitive development. Inadequate sleep can disrupt metabolic homeostasis and alter sleeping energy expenditure (SEE). Functional data analysis methods were applied to SEE data to elucidate the population structure of ...

  17. Analysis of the Bogoliubov free energy functional

    DEFF Research Database (Denmark)

    Reuvers, Robin

    In this thesis, we analyse a variational reformulation of the Bogoliubov approximation that is used to describe weakly-interacting translationally-invariant Bose gases. For the resulting model, the `Bogoliubov free energy functional', we demonstrate existence of minimizers as well as the presence...

  18. Electron energy-distribution functions in gases

    International Nuclear Information System (INIS)

    Pitchford, L.C.

    1981-01-01

    Numerical calculation of the electron energy distribution functions in the regime of drift tube experiments is discussed. The discussion is limited to constant applied fields and values of E/N (ratio of electric field strength to neutral density) low enough that electron growth due to ionization can be neglected

  19. Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation

    Energy Technology Data Exchange (ETDEWEB)

    Barajas-Solano, David A.; Tartakovsky, Alexandre M.

    2018-01-01

    We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d + 1)$-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.

  20. Continuous time random walk model with asymptotical probability density of waiting times via inverse Mittag-Leffler function

    Science.gov (United States)

    Liang, Yingjie; Chen, Wen

    2018-04-01

    The mean squared displacement (MSD) of the traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model has been employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of very long waiting times in the case of the inverse Mittag-Leffler function has the largest probability compared with the power law model and the logarithmic function model. Monte Carlo simulations of the one-dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting general ultraslow random motion.
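
    A minimal Monte Carlo sketch of a CTRW sample path, in the spirit of the simulations mentioned above, is shown below. Sampling the inverse Mittag-Leffler waiting-time law directly is nontrivial, so the sketch substitutes a Pareto (power-law) waiting-time density with tail index 0 < alpha < 1 as an illustrative stand-in; the resulting MSD is subdiffusive rather than ultraslow, but the mechanism (jumps separated by heavy-tailed waits) is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

def position_at(t_obs, alpha=0.7):
    """Walker position at time t_obs; Pareto waiting times, unit +/-1 jumps."""
    t, x = 0.0, 0
    while True:
        t += 1.0 + rng.pareto(alpha)      # waiting time >= 1 with tail ~ t**(-1-alpha)
        if t > t_obs:
            return x                      # next jump has not happened yet
        x += rng.choice((-1, 1))

t_obs = 1e4
msd = np.mean([position_at(t_obs) ** 2 for _ in range(2000)])
print(f"MSD({t_obs:.0f}) ~ {msd:.1f}  (grows far slower than t for 0 < alpha < 1)")
```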

  1. Estimation of probability density functions of damage parameter for valve leakage detection in reciprocating pump used in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo [Div. of Mechanical Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.

  2. Estimation of probability density functions of damage parameter for valve leakage detection in reciprocating pump used in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo

    2016-01-01

    This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage

  3. Estimation of Probability Density Functions of Damage Parameter for Valve Leakage Detection in Reciprocating Pump Used in Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Jong Kyeom Lee

    2016-10-01

    Full Text Available This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.

  4. A method for ion distribution function evaluation using escaping neutral atom kinetic energy samples

    International Nuclear Information System (INIS)

    Goncharov, P.R.; Ozaki, T.; Veshchev, E.A.; Sudo, S.

    2008-01-01

    A reliable method to evaluate the probability density function for escaping atom kinetic energies is required for the analysis of neutral particle diagnostic data used to study the fast ion distribution function in fusion plasmas. Digital processing of solid state detector signals is proposed in this paper as an improvement of the simple histogram approach. The probability density function for kinetic energies of neutral particles escaping from the plasma has been derived in a general form taking into account the plasma ion energy distribution, electron capture and loss rates, superposition along the diagnostic sight line and the magnetic surface geometry. A pseudorandom number generator has been realized that enables a sample of escaping neutral particle energies to be simulated for given plasma parameters and experimental conditions. An empirical probability density estimation code has been developed and tested to reconstruct the probability density function from simulated samples, assuming Maxwellian and classical slowing-down plasma ion energy distribution shapes for different temperatures and different slowing-down times. The application of the developed probability density estimation code to the analysis of experimental data obtained by the novel Angular-Resolved Multi-Sightline Neutral Particle Analyzer has been studied to obtain the suprathermal particle distributions. The optimum bandwidth parameter selection algorithm has also been realized. (author)
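
    The empirical density-estimation step can be illustrated with a standard kernel density estimator and an automatic bandwidth rule. The sketch below is not the authors' code: it applies scipy's gaussian_kde with Silverman's rule to a hypothetical gamma-distributed sample standing in for simulated escaping-particle energies.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Hypothetical simulated sample standing in for escaping-particle energies (keV)
sample = rng.gamma(shape=3.0, scale=0.5, size=5000)

# Silverman's rule is one simple automatic bandwidth choice; scipy also
# accepts an explicit scalar factor in its place.
kde = gaussian_kde(sample, bw_method="silverman")

grid = np.linspace(0.0, sample.max(), 400)
pdf = kde(grid)                                  # estimated probability density
mode = grid[np.argmax(pdf)]
print(f"mode ≈ {mode:.2f}, P(E > 3) ≈ {kde.integrate_box_1d(3.0, np.inf):.4f}")
```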

  5. Quantum probability measures and tomographic probability densities

    NARCIS (Netherlands)

    Amosov, GG; Man'ko, [No Value

    2004-01-01

    Using a simple relation of the Dirac delta-function to the generalized theta-function, the relationship between the tomographic probability approach and the quantum probability measure approach to the description of quantum states is discussed. The quantum state tomogram expressed in terms of the

  6. Functional derivative of noninteracting kinetic energy density functional

    International Nuclear Information System (INIS)

    Liu Shubin; Ayers, Paul W.

    2004-01-01

    Proofs from different theoretical frameworks, namely, the Hohenberg-Kohn theorems, the Kohn-Sham scheme, and the first-order density matrix representation, have been presented in this paper to show that the functional derivative of the noninteracting kinetic energy density functional can uniquely be expressed as the negative of the Kohn-Sham effective potential, arbitrary only to an additive orbital-independent constant. Key points leading to the current result as well as confusion about the quantity in the literature are briefly discussed

  7. Functional Carbon Materials for Electrochemical Energy Storage

    Science.gov (United States)

    Zhou, Huihui

    The ability to harvest and convert solar energy has been associated with the evolution of human civilization. The increasing consumption of fossil fuels since the industrial revolution, however, has raised concerns about ecological deterioration and depletion of the fossil fuels. Facing these challenges, humankind is forced to seek clean, sustainable and renewable energy resources, such as biofuels, hydraulic power, wind power, geothermal energy and other kinds of alternative energies. However, most alternative energy sources, generally in the form of electrical energy, cannot be made available on a continuous basis. It is, therefore, essential to store such energy as chemical energy, which is portable and suits various applications. In this context, electrochemical energy-storage devices hold great promise towards this goal. The most common electrochemical energy-storage devices are electrochemical capacitors (ECs, also called supercapacitors) and batteries. In comparison to batteries, ECs possess high power density, high efficiency, long cycling life and low cost. ECs commonly utilize carbon as both (symmetric) or one of the electrodes (asymmetric), and their performance is generally limited by the capacitance of the carbon electrodes. Therefore, developing better carbon materials with high energy density has emerged as one of the most essential challenges in the field. The primary objective of this dissertation is to design and synthesize functional carbon materials with high energy density in both aqueous and organic electrolyte systems. The energy density (E) of ECs is governed by E = CV2/2, where C is the total capacitance and V is the voltage of the device. Carbon electrodes with high capacitance and high working voltage should lead to high energy density. In the first part of this thesis, a new class of nanoporous carbons was synthesized for symmetric supercapacitors using aqueous Li2SO4 as the electrolyte. A unique precursor was adopted to

  8. Simple relations for the excitation energies E2 and the transition probabilities B (E2) of neighboring doubly even nuclides

    International Nuclear Information System (INIS)

    Patnaik, R.; Patra, R.; Satpathy, L.

    1975-01-01

    For even-even nuclei, the excitation energy E2 and the reduced transition probability B(E2) between the ground state and the first excited 2+ state have been considered. On the basis of different models, it is shown that for a nucleus (N, Z) the relations E2(N,Z) + E2(N+2,Z+2) - E2(N+2,Z) - E2(N,Z+2) ≈ 0 and B(E2)(N,Z) + B(E2)(N+2,Z+2) - B(E2)(N+2,Z) - B(E2)(N,Z+2) ≈ 0 hold good, except in certain specified regions. The goodness of these difference equations is tested with the available experimental data. The difference equation of Ross and Bhaduri is shown to follow from our approach. Some predictions of unmeasured E2 and B(E2) values have been made

  9. Evaluation of Presumed Probability-Density-Function Models in Non-Premixed Flames by using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Cao Hong-Jun; Zhang Hui-Qiang; Lin Wen-Yi

    2012-01-01

    Four kinds of presumed probability-density-function (PDF) models for non-premixed turbulent combustion are evaluated in flames with various stoichiometric mixture fractions by using large eddy simulation (LES). The LES code is validated against the experimental data of a classical turbulent jet flame (Sandia flame D). The mean and rms temperatures obtained by the presumed PDF models are compared with the LES results. The β-function model achieves a good prediction for different flames. The rms temperature predicted by the double-δ function model is very small and unphysical in the vicinity of the maximum mean temperature. The clip-Gaussian model and the multi-δ function model make worse predictions on the extremely fuel-rich or fuel-lean sides due to the clip at the boundary of the mixture fraction space. The results also show that the overall prediction performance of presumed PDF models is better at intermediate stoichiometric mixture fractions than at very small or very large ones.
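
    The β-function presumed-PDF closure evaluated above can be sketched as follows: match a beta distribution to the resolved mean and variance of the mixture fraction, then average a flamelet profile over it. In the snippet below, the piecewise-linear flamelet temperature profile and the moment values are illustrative placeholders, not LES output.

```python
import numpy as np
from scipy import integrate, stats

def beta_pdf_mean(phi, z_mean, z_var):
    """Mean of phi(Z) under a beta PDF with moments (z_mean, z_var)."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0   # needs z_var < z_mean*(1 - z_mean)
    a, b = z_mean * gamma, (1.0 - z_mean) * gamma
    z = np.linspace(1e-6, 1.0 - 1e-6, 2001)
    return integrate.trapezoid(phi(z) * stats.beta(a, b).pdf(z), z)

# Placeholder flamelet temperature profile peaking at Z_st = 0.35
T_flamelet = lambda z: 300.0 + 1700.0 * np.minimum(z / 0.35, (1.0 - z) / 0.65)

print(f"presumed-PDF mean T = {beta_pdf_mean(T_flamelet, 0.3, 0.02):.0f} K")
```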

  10. Functional materials discovery using energy-structure-function maps.

    Science.gov (United States)

    Pulido, Angeles; Chen, Linjiang; Kaczorowski, Tomasz; Holden, Daniel; Little, Marc A; Chong, Samantha Y; Slater, Benjamin J; McMahon, David P; Bonillo, Baltasar; Stackhouse, Chloe J; Stephenson, Andrew; Kane, Christopher M; Clowes, Rob; Hasell, Tom; Cooper, Andrew I; Day, Graeme M

    2017-03-30

    Molecular crystals cannot be designed in the same manner as macroscopic objects, because they do not assemble according to simple, intuitive rules. Their structures result from the balance of many weak interactions, rather than from the strong and predictable bonding patterns found in metal-organic frameworks and covalent organic frameworks. Hence, design strategies that assume a topology or other structural blueprint will often fail. Here we combine computational crystal structure prediction and property prediction to build energy-structure-function maps that describe the possible structures and properties that are available to a candidate molecule. Using these maps, we identify a highly porous solid, which has the lowest density reported for a molecular crystal so far. Both the structure of the crystal and its physical properties, such as methane storage capacity and guest-molecule selectivity, are predicted using the molecular structure as the only input. More generally, energy-structure-function maps could be used to guide the experimental discovery of materials with any target function that can be calculated from predicted crystal structures, such as electronic structure or mechanical properties.

  11. Conserving relativistic many-body approach: Equation of state, spectral function, and occupation probabilities of nuclear matter

    International Nuclear Information System (INIS)

    de Jong, F.; Malfliet, R.

    1991-01-01

    Starting from a relativistic Lagrangian we derive a ''conserving'' approximation for the description of nuclear matter. We show this to be a nontrivial extension of the relativistic Dirac-Brueckner scheme. The calculated saturation point of the equation of state agrees very well with the empirical saturation point. The conserving character of the approach is tested by means of the Hugenholtz--van Hove theorem. We find the theorem fulfilled very well around saturation. A new value for the compression modulus is derived, K = 310 MeV. We also calculate the occupation probabilities at normal nuclear matter densities by means of the spectral function. The average depletion κ of the Fermi sea is found to be κ ∼ 0.11

  12. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    Science.gov (United States)

    Porto, Markus; Roman, H Eduardo

    2002-04-01

    We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, i.e. σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + by², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially as P(y) ~ exp(-α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretching exponent β = 2/3, in much better agreement with the empirical data.
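
    The exponential tail is easy to verify numerically. The sketch below simulates the linear-variance ARCH process y_t = σ(y_{t-1})·ε_t with σ²(y) = a + b|y| and compares a mean-excess estimate of the tail decay rate with the predicted α = 2/b; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, n = 0.1, 0.5, 200_000

y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    sigma = np.sqrt(a + b * abs(y[t - 1]))     # linear-variance ARCH step
    y[t] = sigma * rng.standard_normal()

# For an exponential tail exp(-alpha*|y|), the mean excess above a threshold
# u equals 1/alpha; compare with the predicted alpha = 2/b.
u = 1.0
excess = np.abs(y)[np.abs(y) > u] - u
print(f"fitted alpha ≈ {1.0 / excess.mean():.2f}, predicted 2/b = {2.0 / b:.2f}")
```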

  13. Expressions for neutrino wave functions and transition probabilities at three-neutrino oscillations in vacuum and some of their applications

    International Nuclear Information System (INIS)

    Beshtoev, Kh.M.

    2006-01-01

    I have considered three-neutrino vacuum transitions and oscillations in the general case and obtained expressions for neutrino wave functions in three cases: with CP violation, without CP violation, and in the case when direct ν_e - ν_τ transitions are absent, β(θ_13) = 0 (some works indicate this possibility). Then, using the existing experimental data, some analysis has been performed. This analysis has definitely shown that direct ν_e - ν_τ transitions cannot be closed for the solar neutrinos, i.e., β(θ_13) ≠ 0. It is also shown that the possibility that β(θ_13) = 0 cannot be realized by using the mechanism of resonance enhancement of neutrino oscillations in matter (the Sun). It was found that the probability of ν_e - ν_e neutrino transitions is a positive-definite value, if in reality neutrino oscillations take place, only if the angle of ν_e, ν_τ mixing satisfies β ≤ 15-17 deg

  14. Application of probability generating function to the essentials of nondestructive nuclear materials assay system using neutron correlation

    International Nuclear Information System (INIS)

    Hosoma, Takashi

    2017-01-01

    In the previous research (JAEA-Research 2015-009), essentials of neutron multiplicity counting mathematics were reconsidered, where experiences obtained at the Plutonium Conversion Development Facility were taken into account, and formulae of the multiplicity distribution were algebraically derived up to septuplet using a probability generating function to make a strategic move in the future. Its principle was reported by K. Böhnel in 1985, but such a high-order expansion was the first case due to its increasing complexity. In this research, characteristics of the high-order correlation were investigated. It was found that higher-order correlation increases rapidly in response to the increase of leakage multiplication, and crosses and leaves lower-order correlations behind when leakage multiplication is > 1.3, which depends on detector efficiency and counter setting. In addition, fission rates and doubles count rates by fast neutron and by thermal neutron in their coexisting system were algebraically derived using a probability generating function again. Its principle was reported by I. Pázsit and L. Pál in 2012, but such a physical interpretation, i.e. associating their stochastic variables with fission rate, doubles count rate and leakage multiplication, is the first case. From the Rossi-alpha combined distribution and the measured ratio of each area obtained by Differential Die-Away Self-Interrogation (DDSI) and conventional assay data, it is possible to estimate: the number of induced fissions per unit time by fast neutron and by thermal neutron; the number of induced fissions (< 1) by one source neutron; and individual doubles count rates. During the research, a hypothesis introduced in their report was proved to be true. Provisional calculations were done for UO_2 of 1-10 kgU containing ∼0.009 wt% 244Cm. (author)
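
    The probability-generating-function machinery referred to above can be illustrated compactly: for a discrete multiplicity distribution p_k, the PGF is G(z) = Σ p_k z^k, and the r-th derivative of G at z = 1 yields the r-th factorial moment, the building block of singles/doubles/triples rates in multiplicity counting. The four-point distribution below is a made-up example, not Pu assay data.

```python
import sympy as sp

z = sp.symbols("z")
# Made-up multiplicity distribution p_0..p_3 (sums to 1)
p = [sp.Rational(4, 10), sp.Rational(3, 10), sp.Rational(2, 10), sp.Rational(1, 10)]
G = sum(pk * z**k for k, pk in enumerate(p))        # PGF G(z) = sum_k p_k z^k

assert G.subs(z, 1) == 1                            # normalization check
for r in (1, 2, 3):
    nu_r = sp.diff(G, z, r).subs(z, 1)              # r-th factorial moment
    print(f"nu_{r} = {nu_r}")
```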

  15. The Bogoliubov free energy functional II

    DEFF Research Database (Denmark)

    Napiórkowski, Marcin; Reuvers, Robin; Solovej, Jan Philip

    2018-01-01

    We analyse the canonical Bogoliubov free energy functional at low temperatures in the dilute limit. We prove existence of a first order phase transition and, in the limit $a_0 \to a$, we determine the critical temperature to be $T_{\rm c} = T_{\rm fc}(1+1.49(\rho^{1/3}a))$ to leading order. Here, $T_{\rm fc}$ is the critical temperature of the free Bose gas, $\rho$ is the density of the gas, $a$ is the scattering length of the pair-interaction potential $V$, and $a_0=(8\pi)^{-1}\widehat{V}(0)$ its first order approximation. We also prove asymptotic expansions for the free energy. In particular, we recover the Lee

  16. Functional data analysis of sleeping energy expenditure.

    Science.gov (United States)

    Lee, Jong Soo; Zakeri, Issa F; Butte, Nancy F

    2017-01-01

    Adequate sleep is crucial during childhood for metabolic health, and physical and cognitive development. Inadequate sleep can disrupt metabolic homeostasis and alter sleeping energy expenditure (SEE). Functional data analysis methods were applied to SEE data to elucidate the population structure of SEE and to discriminate SEE between obese and non-obese children. Minute-by-minute SEE in 109 children, ages 5-18, was measured in room respiration calorimeters. A smoothing spline method was applied to the calorimetric data to extract the true smoothing function for each subject. Functional principal component analysis was used to capture the important modes of variation of the functional data and to identify differences in SEE patterns. Combinations of functional principal component analysis and classifier algorithms were used to classify SEE. Smoothing effectively removed instrumentation noise inherent in the room calorimeter data, providing more accurate data for analysis of the dynamics of SEE. SEE exhibited declining but subtly undulating patterns throughout the night. Mean SEE was markedly higher in obese than non-obese children, as expected due to their greater body mass. SEE was higher among the obese than non-obese children (p < 0.1, after post hoc testing). Functional principal component scores for the first two components explained 77.8% of the variance in SEE and also differed between groups (p = 0.037). Logistic regression, support vector machine or random forest classification methods were able to distinguish weight-adjusted SEE between obese and non-obese participants with good classification rates (62-64%). Our results implicate other factors, yet to be uncovered, that affect the weight-adjusted SEE of obese and non-obese children. Functional data analysis revealed differences in the structure of SEE between obese and non-obese children that may contribute to disruption of metabolic homeostasis.
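
    The smoothing / functional-PCA / classifier pipeline described above can be sketched on synthetic curves. Everything in the snippet below is an assumption for illustration: the declining-plus-undulating SEE-like generator, the group effect, and the use of plain SVD scores (the spline-smoothing step is omitted) fed to a logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
t = np.linspace(0.0, 8.0, 480)                  # 8 h of minute-by-minute samples
labels = rng.integers(0, 2, 109)                # 0 = non-obese, 1 = obese

# Synthetic SEE-like curves: decline + subtle undulation + noise, with a
# level shift for the "obese" group (all invented for illustration)
curves = np.array([(1.1 + 0.2 * g) * np.exp(-0.05 * t)
                   + 0.05 * np.sin(2.0 * np.pi * t / 1.5)
                   + rng.normal(0.0, 0.03, t.size)
                   for g in labels])

# Functional PCA via SVD of the centered curves (smoothing step omitted)
centered = curves - curves.mean(axis=0)
U, S, _ = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :2] * S[:2]                       # first two FPC scores
print(f"variance explained: {100.0 * (S[:2] ** 2).sum() / (S ** 2).sum():.1f}%")

clf = LogisticRegression().fit(scores, labels)
print(f"training classification rate: {clf.score(scores, labels):.2f}")
```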

  17. Time-averaged probability density functions of soot nanoparticles along the centerline of a piloted turbulent diffusion flame using a scanning mobility particle sizer

    KAUST Repository

    Chowdhury, Snehaunshu; Boyette, Wesley; Roberts, William L.

    2017-01-01

    In this study, we demonstrate the use of a scanning mobility particle sizer (SMPS) as an effective tool to measure the probability density functions (PDFs) of soot nanoparticles in turbulent flames. Time-averaged soot PDFs necessary for validating

  18. High-altitude cosmic ray neutrons: probable source for the high-energy protons of the earth's radiation belts

    International Nuclear Information System (INIS)

    Hajnal, F.; Wilson, J.

    1992-01-01

    'Full Text:' Several high-altitude cosmic-ray neutron measurements were performed by the NASA Ames Laboratory in the mid- to late-1970s using airplanes flying at about 13 km altitude along constant geomagnetic latitudes of 20, 44 and 51 degrees north. Bonner spheres and manganese, gold and aluminium foils were used in the measurements. In addition, large moderated BF-3 counters served as normalizing instruments. Data analyses performed at that time did not provide complete and unambiguous spectral information and field intensities. Recently, using our new unfolding methods and codes, and Bonner-sphere response function extensions for higher energies, 'new' neutron spectral intensities were obtained, which show progressive hardening of neutron spectra as a function of increasing geomagnetic latitude, with substantial increases in the energy region from 10 MeV to 10 GeV. For example, we found that the total neutron fluences at 20 and 51 degrees magnetic north are in the ratio of 1 to 5.2 and the 10 MeV to 10 GeV fluence ratio is 1 to 18. The magnitude of these ratios is quite remarkable. From the new results, the derived absolute neutron energy distribution is of the correct strength and shape for the albedo neutrons to be the main source of the high-energy protons trapped in the Earth's inner radiation belt. In addition, the results, depending on the extrapolation scheme used, indicate that the neutron dose equivalent rate may be as high as 0.1 mSv/h near the geomagnetic north pole and thus a significant contributor to the radiation exposures of pilots, flight attendants and the general public. (author)

  19. Balance Function in High-Energy Collisions

    International Nuclear Information System (INIS)

    Tawfik, A.; Shalaby, Asmaa G.

    2015-01-01

    Aspects and implications of the balance functions (BF) in high-energy physics are reviewed. The various calculations and measurements depending on different quantities, for example, system size, collision centrality, and beam energy, are discussed. First, the different definitions, including their advantages and even shortcomings, are highlighted. It is found that BF, which are mainly presented in terms of relative rapidity and relative azimuthal and invariant relative momentum, are sensitive to the interaction centrality but not to the beam energy, and can be used in estimating the hadronization time and the hadron-quark phase transition. Furthermore, the quark chemistry can be determined. The chemical evolution of the new state of matter, the quark-gluon plasma, and its temporal-spatial evolution, through femtoscopy of two-particle correlations, are accessible. The production time of a positive-negative pair of charges can be determined from the widths of BF. Due to the reduction in the diffusion time, narrowed widths refer to delayed hadronization. It is concluded that BF are powerful tools for characterizing the hadron-quark phase transition and estimating some essential properties

  20. Energy functionals for Calabi-Yau metrics

    International Nuclear Information System (INIS)

    Headrick, M; Nassar, A

    2013-01-01

    We identify a set of ''energy'' functionals on the space of metrics in a given Kähler class on a Calabi-Yau manifold, which are bounded below and minimized uniquely on the Ricci-flat metric in that class. Using these functionals, we recast the problem of numerically solving the Einstein equation as an optimization problem. We apply this strategy, using the ''algebraic'' metrics (metrics for which the Kähler potential is given in terms of a polynomial in the projective coordinates), to the Fermat quartic and to a one-parameter family of quintics that includes the Fermat and conifold quintics. We show that this method yields approximations to the Ricci-flat metric that are exponentially accurate in the degree of the polynomial (except at the conifold point, where the convergence is polynomial), and therefore orders of magnitude more accurate than the balanced metrics, previously studied as approximations to the Ricci-flat metric. The method is relatively fast and easy to implement. On the theoretical side, we also show that the functionals can be used to give a heuristic proof of Yau's theorem

  1. A bioinformatic survey of distribution, conservation, and probable functions of LuxR solo regulators in bacteria

    Science.gov (United States)

    Subramoni, Sujatha; Florez Salcedo, Diana Vanessa; Suarez-Moreno, Zulma R.

    2015-01-01

    LuxR solo transcriptional regulators contain both an autoinducer binding domain (ABD; N-terminal) and a DNA binding Helix-Turn-Helix domain (HTH; C-terminal), but are not associated with a cognate N-acyl homoserine lactone (AHL) synthase coding gene in the same genome. Although a few LuxR solos have been characterized, their distributions as well as their role in bacterial signal perception and other processes are poorly understood. In this study we have carried out a systematic survey of distribution of all ABD containing LuxR transcriptional regulators (QS domain LuxRs) available in the InterPro database (IPR005143), and identified those lacking a cognate AHL synthase. These LuxR solos were then analyzed regarding their taxonomical distribution, predicted functions of neighboring genes and the presence of complete AHL-QS systems in the genomes that carry them. Our analyses reveal the presence of one or multiple predicted LuxR solos in many proteobacterial genomes carrying QS domain LuxRs, some of them harboring genes for one or more AHL-QS circuits. The presence of LuxR solos in bacteria occupying diverse environments suggests potential ecological functions for these proteins beyond AHL and interkingdom signaling. Based on gene context and the conservation levels of invariant amino acids of ABD, we have classified LuxR solos into functionally meaningful groups or putative orthologs. Surprisingly, putative LuxR solos were also found in a few non-proteobacterial genomes which are not known to carry AHL-QS systems. Multiple predicted LuxR solos in the same genome appeared to have different levels of conservation of invariant amino acid residues of ABD questioning their binding to AHLs. In summary, this study provides a detailed overview of distribution of LuxR solos and their probable roles in bacteria with genome sequence information. PMID:25759807

  2. A bioinformatic survey of distribution, conservation, and probable functions of LuxR solo regulators in bacteria.

    Science.gov (United States)

    Subramoni, Sujatha; Florez Salcedo, Diana Vanessa; Suarez-Moreno, Zulma R

    2015-01-01

    LuxR solo transcriptional regulators contain both an autoinducer binding domain (ABD; N-terminal) and a DNA binding Helix-Turn-Helix domain (HTH; C-terminal), but are not associated with a cognate N-acyl homoserine lactone (AHL) synthase coding gene in the same genome. Although a few LuxR solos have been characterized, their distributions as well as their role in bacterial signal perception and other processes are poorly understood. In this study we have carried out a systematic survey of distribution of all ABD containing LuxR transcriptional regulators (QS domain LuxRs) available in the InterPro database (IPR005143), and identified those lacking a cognate AHL synthase. These LuxR solos were then analyzed regarding their taxonomical distribution, predicted functions of neighboring genes and the presence of complete AHL-QS systems in the genomes that carry them. Our analyses reveal the presence of one or multiple predicted LuxR solos in many proteobacterial genomes carrying QS domain LuxRs, some of them harboring genes for one or more AHL-QS circuits. The presence of LuxR solos in bacteria occupying diverse environments suggests potential ecological functions for these proteins beyond AHL and interkingdom signaling. Based on gene context and the conservation levels of invariant amino acids of ABD, we have classified LuxR solos into functionally meaningful groups or putative orthologs. Surprisingly, putative LuxR solos were also found in a few non-proteobacterial genomes which are not known to carry AHL-QS systems. Multiple predicted LuxR solos in the same genome appeared to have different levels of conservation of invariant amino acid residues of ABD questioning their binding to AHLs. In summary, this study provides a detailed overview of distribution of LuxR solos and their probable roles in bacteria with genome sequence information.

  3. A bioinformatic survey of distribution, conservation and probable functions of LuxR solo regulators in bacteria

    Directory of Open Access Journals (Sweden)

    Sujatha Subramoni

    2015-02-01

    Full Text Available LuxR solo transcriptional regulators contain both an autoinducer binding domain (ABD; N-terminal and a DNA binding Helix-Turn-Helix domain (HTH; C-terminal, but are not associated with a cognate N-acyl homoserine lactone (AHL synthase coding gene in the same genome. Although a few LuxR solos have been characterized, their distributions as well as their role in bacterial signal perception and other processes are poorly understood. In this study we have carried out a systematic survey of distribution of all ABD containing LuxR transcriptional regulators (QS domain LuxRs available in the InterPro database (IPR005143, and identified those lacking a cognate AHL synthase. These LuxR solos were then analyzed regarding their taxonomical distribution, predicted functions of neighboring genes and the presence of complete AHL-QS systems in the genomes that carry them. Our analyses reveal the presence of one or multiple predicted LuxR solos in many proteobacterial genomes carrying QS domain LuxRs, some of them harboring genes for one or more AHL-QS circuits. The presence of LuxR solos in bacteria occupying diverse environments suggests potential ecological functions for these proteins beyond AHL and interkingdom signaling. Based on gene context and the conservation levels of invariant amino acids of ABD, we have classified LuxR solos into functionally meaningful groups or putative orthologs. Surprisingly, putative LuxR solos were also found in a few non-proteobacterial genomes which are not known to carry AHL-QS systems. Multiple predicted LuxR solos in the same genome appeared to have different levels of conservation of invariant amino acid residues of ABD questioning their binding to AHLs. In summary, this study provides a detailed overview of distribution of LuxR solos and their probable roles in bacteria with genome sequence information.

  4. Most probable degree distribution at fixed structural entropy

    Indian Academy of Sciences (India)

    Here we derive the most probable degree distribution emerging ... the structural entropy of power-law networks is an increasing function of the exponent ... partition function Z of the network as the sum over all degree distributions, with given energy.

  5. Impaired mismatch negativity (MMN) generation in schizophrenia as a function of stimulus deviance, probability, and interstimulus/interdeviant interval.

    Science.gov (United States)

    Javitt, D C; Grochowski, S; Shelley, A M; Ritter, W

    1998-03-01

    Schizophrenia is a severe mental disorder associated with disturbances in perception and cognition. Event-related potentials (ERP) provide a mechanism for evaluating potential mechanisms underlying neurophysiological dysfunction in schizophrenia. Mismatch negativity (MMN) is a short-duration auditory cognitive ERP component that indexes operation of the auditory sensory ('echoic') memory system. Prior studies have demonstrated impaired MMN generation in schizophrenia along with deficits in auditory sensory memory performance. MMN is elicited in an auditory oddball paradigm in which a sequence of repetitive standard tones is interrupted infrequently by a physically deviant ('oddball') stimulus. The present study evaluates MMN generation as a function of deviant stimulus probability, interstimulus interval, interdeviant interval and the degree of pitch separation between the standard and deviant stimuli. The major findings of the present study are first, that MMN amplitude is decreased in schizophrenia across a broad range of stimulus conditions, and second, that the degree of deficit in schizophrenia is largest under conditions when MMN is normally largest. The pattern of deficit observed in schizophrenia differs from the pattern observed in other conditions associated with MMN dysfunction, including Alzheimer's disease, stroke, and alcohol intoxication.

  6. On the shapes of the presumed probability density function for the modeling of turbulence-radiation interactions

    International Nuclear Information System (INIS)

    Liu, L.H.; Xu, X.; Chen, Y.L.

    2004-01-01

    The laminar flamelet equations in combination with the joint probability density function (PDF) transport equation of mixture fraction and turbulence frequency have been used to simulate turbulent jet diffusion flames. To check the suitability of the presumed shapes of the PDF for the modeling of turbulence-radiation interactions (TRI), two types of presumed joint PDFs are constructed by using the second-order moments of temperature and the species concentrations, which are derived by the laminar flamelet model. The time-averaged radiative source terms and the time-averaged absorption coefficients are calculated by the presumed joint PDF approaches, and compared with those obtained by the laminar flamelet model. By comparison, it is shown that there are obvious differences between the results of the independent PDF approach and the laminar flamelet model. Generally, the results of the dependent PDF approach agree better with those of the flamelet model. For the modeling of TRI, the dependent PDF approach is superior to the independent PDF approach

  7. Building a Universal Nuclear Energy Density Functional

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, Joe A. [Michigan State Univ., East Lansing, MI (United States); Furnstahl, Dick; Horoi, Mihai; Lusk, Rusty; Nazarewicz, Witek; Ng, Esmond; Thompson, Ian; Vary, James

    2012-12-30

    During the period of Dec. 1 2006 – Jun. 30, 2012, the UNEDF collaboration carried out a comprehensive study of all nuclei, based on the most accurate knowledge of the strong nuclear interaction, the most reliable theoretical approaches, the most advanced algorithms, and extensive computational resources, with a view towards scaling to the petaflop platforms and beyond. The long-term vision initiated with UNEDF is to arrive at a comprehensive, quantitative, and unified description of nuclei and their reactions, grounded in the fundamental interactions between the constituent nucleons. We seek to replace current phenomenological models of nuclear structure and reactions with a well-founded microscopic theory that delivers maximum predictive power with well-quantified uncertainties. Specifically, the mission of this project has been three-fold: First, to find an optimal energy density functional (EDF) using all our knowledge of the nucleonic Hamiltonian and basic nuclear properties; Second, to apply the EDF theory and its extensions to validate the functional using all the available relevant nuclear structure and reaction data; Third, to apply the validated theory to properties of interest that cannot be measured, in particular the properties needed for reaction theory.

  8. Generalized Probability-Probability Plots

    NARCIS (Netherlands)

    Mushkudiani, N.A.; Einmahl, J.H.J.

    2004-01-01

    We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem, for real-valued data. These plots, that are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P

  9. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions

    DEFF Research Database (Denmark)

    Yura, Harold; Hanson, Steen Grüner

    2012-01-01

    with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative...
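
    The two-step recipe the record outlines, spectral synthesis of a colored Gaussian field followed by a memoryless inverse-transform to the target PDF, can be sketched as below; the Lorentzian power spectrum and gamma target distribution are illustrative choices, and (as the record itself notes) the transform is an engineering approximation that slightly distorts the prescribed spectrum.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 256
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.hypot(kx, ky)

# Step 1: colored Gaussian field = FFT-filtered white noise
spectrum = 1.0 / (1.0 + (k / 0.05) ** 2)          # illustrative power spectrum
noise = np.fft.fft2(rng.standard_normal((n, n)))
field = np.real(np.fft.ifft2(noise * np.sqrt(spectrum)))
field = (field - field.mean()) / field.std()      # unit-variance Gaussian field

# Step 2: memoryless transform to the target one-point PDF (here, gamma)
u = stats.norm.cdf(field)
target = stats.gamma(a=2.0).ppf(u)
print(f"skewness after transform: {stats.skew(target.ravel()):.2f}")
```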

  10. Ignition Probability

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...

  11. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning.

    Science.gov (United States)

    Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji

    2016-12-01

    Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only really works well with binary, or close to binary, state input, where the number of active states is fewer than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL) instead of the negative free energy, while also being able to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions.
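
    The two value-function approximators compared above differ only in whether the hidden units are integrated out exactly. For a binary RBM with energy E(v,h) = -b_v·v - b_h·h - v^T W h, the free energy is F(v) = -b_v·v - Σ_j softplus(b_h_j + (v^T W)_j), while the expected energy replaces each softplus term by p_j·pre_j with p_j = sigmoid(pre_j). The sketch below, with random placeholder weights, computes both quantities; it illustrates the definitions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
n_vis, n_hid = 12, 8
W = rng.normal(0.0, 0.1, (n_vis, n_hid))    # placeholder weights
b_v = rng.normal(0.0, 0.1, n_vis)           # visible biases
b_h = rng.normal(0.0, 0.1, n_hid)           # hidden biases
v = rng.integers(0, 2, n_vis).astype(float) # binary state-action input

pre = b_h + v @ W                           # hidden pre-activations
p_h = 1.0 / (1.0 + np.exp(-pre))            # p(h_j = 1 | v)

free_energy = -(b_v @ v) - np.logaddexp(0.0, pre).sum()  # -b.v - sum softplus
expected_energy = -(b_v @ v) - p_h @ pre                 # <E(v, h)> under p(h|v)

print(f"FERL value  -F(v) = {-free_energy:.3f}")
print(f"EERL value  -<E>  = {-expected_energy:.3f}")     # differ by hidden entropy
```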

  12. The influence of the electron wave function on the Pt L(I) and L(III) ionization probabilities by 3.6 MeV He impact

    International Nuclear Information System (INIS)

    Ullrich, J.; Dangendorf, V.; Dexheimer, K.; Do, K.; Kelbch, C.; Kelbch, S.; Schadt, W.; Schmidt-Boecking, H.; Stiebing, K.E.; Roesel, F.; Trautmann, D.

    1986-01-01

    For 3.6 MeV He impact, the L(I) and L(III) subshell ionization probabilities of Pt have been measured. Due to relativistic effects in the electron wave functions, the L(I) subshell ionization probability I_LI(b) is strongly enhanced at small impact parameters, exceeding even I_LIII(b), in nice agreement with the SCA theory. (orig.)

  13. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    Science.gov (United States)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin
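
    For contrast with MBCn, the univariate quantile-mapping baseline mentioned above is easy to state in code: map each model value through the model's empirical CDF and then through the inverse observed CDF. The gamma-distributed placeholder data below are illustrative, not CanRCM4 output.

```python
import numpy as np

def quantile_map(model_hist, obs, model_target):
    """Empirical quantile mapping calibrated on (model_hist, obs)."""
    q = np.interp(model_target, np.sort(model_hist),
                  np.linspace(0.0, 1.0, model_hist.size))   # model CDF
    return np.quantile(obs, q)                              # inverse observed CDF

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 2.0, 5000)          # "observations", calibration period
model_hist = rng.gamma(1.5, 3.0, 5000)   # biased model, calibration period
model_proj = rng.gamma(1.5, 3.3, 5000)   # model projection to be corrected

corrected = quantile_map(model_hist, obs, model_proj)
print(f"means: obs {obs.mean():.2f} | raw {model_proj.mean():.2f} | "
      f"corrected {corrected.mean():.2f}")
```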

  14. New angles on energy correlation functions

    Science.gov (United States)

    Moult, Ian; Necib, Lina; Thaler, Jesse

    2016-12-01

    Jet substructure observables, designed to identify specific features within jets, play an essential role at the Large Hadron Collider (LHC), both for searching for signals beyond the Standard Model and for testing QCD in extreme phase space regions. In this paper, we systematically study the structure of infrared and collinear safe substructure observables, defining a generalization of the energy correlation functions to probe n-particle correlations within a jet. These generalized correlators provide a flexible basis for constructing new substructure observables optimized for specific purposes. Focusing on three major targets of the jet substructure community — boosted top tagging, boosted W/Z/H tagging, and quark/gluon discrimination — we use power-counting techniques to identify three new series of powerful discriminants: M_i, N_i, and U_i. The M_i series is designed for use on groomed jets, providing a novel example of observables with improved discrimination power after the removal of soft radiation. The N_i series behave parametrically like the N-subjettiness ratio observables, but are defined without respect to subjet axes, exhibiting improved behavior in the unresolved limit. Finally, the U_i series improves quark/gluon discrimination by using higher-point correlators to simultaneously probe multiple emissions within a jet. Taken together, these observables broaden the scope for jet substructure studies at the LHC.

  15. New angles on energy correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Moult, Ian [Berkeley Center for Theoretical Physics, University of California,Berkeley, CA 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory,Berkeley, CA 94720 (United States); Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States); Necib, Lina; Thaler, Jesse [Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States)

    2016-12-29

    Jet substructure observables, designed to identify specific features within jets, play an essential role at the Large Hadron Collider (LHC), both for searching for signals beyond the Standard Model and for testing QCD in extreme phase space regions. In this paper, we systematically study the structure of infrared and collinear safe substructure observables, defining a generalization of the energy correlation functions to probe n-particle correlations within a jet. These generalized correlators provide a flexible basis for constructing new substructure observables optimized for specific purposes. Focusing on three major targets of the jet substructure community — boosted top tagging, boosted W/Z/H tagging, and quark/gluon discrimination — we use power-counting techniques to identify three new series of powerful discriminants: M{sub i}, N{sub i}, and U{sub i}. The M{sub i} series is designed for use on groomed jets, providing a novel example of observables with improved discrimination power after the removal of soft radiation. The N{sub i} series behave parametrically like the N-subjettiness ratio observables, but are defined without respect to subjet axes, exhibiting improved behavior in the unresolved limit. Finally, the U{sub i} series improves quark/gluon discrimination by using higher-point correlators to simultaneously probe multiple emissions within a jet. Taken together, these observables broaden the scope for jet substructure studies at the LHC.

  16. Probability calculus of fractional order and fractional Taylor's series application to Fokker-Planck equation and information of non-random functions

    International Nuclear Information System (INIS)

    Jumarie, Guy

    2009-01-01

    A probability distribution of fractional (or fractal) order is defined by the measure μ(dx) = p(x)(dx)^α, 0 < α < 1. Combined with the fractional Taylor series f(x + h) = E_α(h^α D_x^α)f(x) provided by the modified Riemann-Liouville definition, one can expand a probability calculus parallel to the standard one. A Fourier transform of fractional order using the Mittag-Leffler function is introduced, together with its inversion formula, and it provides a suitable generalization of the characteristic function of fractal random variables. It appears that the moments of fractional order are especially relevant. The main properties of this fractional probability calculus are outlined; it is shown that it provides a sound approach to Fokker-Planck equations which are fractional in both space and time, and it provides new results in the information theory of non-random functions.
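    Since the Mittag-Leffler function plays the role here that the exponential plays in the ordinary calculus, a small numerical sketch may be useful (Python; a plain truncated series, adequate only for moderate arguments):

        import math

        def mittag_leffler(z, alpha, terms=80):
            # E_a(z) = sum_k z**k / Gamma(a*k + 1); the series converges
            # for all z, but large |z| would need a smarter evaluation
            # (the terms overflow before they decay).
            return sum(z ** k / math.gamma(alpha * k + 1)
                       for k in range(terms))

        # alpha = 1 recovers the ordinary exponential: E_1(z) = exp(z)
        assert abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12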

  17. Probability distribution functions of δ15N and δ18O in groundwater nitrate to probabilistically solve complex mixing scenarios

    Science.gov (United States)

    Chrystal, A.; Heikoop, J. M.; Davis, P.; Syme, J.; Hagerty, S.; Perkins, G.; Larson, T. E.; Longmire, P.; Fessenden, J. E.

    2010-12-01

    Elevated nitrate (NO3-) concentrations in drinking water pose a health risk to the public. The dual stable isotopic signatures of δ15N and δ18O in NO3- in surface- and groundwater are often used to identify and distinguish among sources of NO3- (e.g., sewage, fertilizer, atmospheric deposition). In oxic groundwaters where no denitrification is occurring, direct calculations of mixing fractions using a mass balance approach can be performed if three or fewer sources of NO3- are present, and if the stable isotope ratios of the source terms are defined. There are several limitations to this approach. First, direct calculations of mixing fractions are not possible when four or more NO3- sources may be present. Simple mixing calculations also rely upon treating source isotopic compositions as single values; however, these sources themselves exhibit ranges in stable isotope ratios. More information can be gained by using a probabilistic approach to account for the range and distribution of stable isotope ratios in each source. Fitting probability density functions (PDFs) to the isotopic compositions for each source term reveals that some values within a given isotopic range are more likely to occur than others. We compiled a data set of dual isotopes in NO3- sources by combining our measurements with data collected through extensive literature review. We fit each source term with a PDF, and show a new method to probabilistically solve multiple-component mixing scenarios with source isotopic composition uncertainty. This method is based on a modified use of a tri-linear diagram. First, source term PDFs are sampled numerous times using Latin hypercube sampling, a variant of stratified random sampling. For each set of sampled source isotopic compositions, a reference point is generated close to the measured groundwater sample isotopic composition. This point is used as a vertex to form all possible triangles between all pairs of sampled source isotopic compositions.
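    The two computational ingredients of the approach, stratified sampling of the source distributions and the isotope mass balance, can be sketched as follows (Python; the source means and standard deviations are hypothetical placeholders, and the simple linear solve stands in for the paper's tri-linear-diagram construction):

        import numpy as np
        from scipy.stats import norm

        def lhs_normal(mean, sd, n, rng):
            # Latin hypercube draws from N(mean, sd): one draw per
            # equal-probability stratum, in shuffled order.
            u = (rng.permutation(n) + rng.uniform(size=n)) / n
            return norm.ppf(u, loc=mean, scale=sd)

        def mixing_fractions(src_iso, sample):
            # Mass balance for 3 sources x 2 isotopes; src_iso rows are
            # (d15N, d18O) per source. Fractions outside [0, 1] flag a
            # sample lying outside the triangle of sampled sources.
            a = np.vstack([np.ones(3), src_iso.T])
            return np.linalg.solve(a, np.array([1.0, *sample]))

        rng = np.random.default_rng(1)
        # hypothetical (mean, sd) pairs for d15N and d18O of three sources
        sources = [((4, 2), (2, 1)), ((10, 3), (-2, 2)), ((0, 1), (60, 5))]
        iso = np.stack([np.column_stack([lhs_normal(m, s, 1000, rng)
                                         for m, s in src])
                        for src in sources])
        fracs = np.array([mixing_fractions(iso[:, i, :], (6.0, 8.0))
                          for i in range(1000)])  # distribution of fractions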

  18. Energy transfer upon collision of selectively excited CO{sub 2} molecules: State-to-state cross sections and probabilities for modeling of atmospheres and gaseous flows

    Energy Technology Data Exchange (ETDEWEB)

    Lombardi, A., E-mail: ebiu2005@gmail.com; Faginas-Lago, N.; Pacifici, L.; Grossi, G. [Dipartimento di Chimica, Università di Perugia, via Elce di Sotto 8, 06123 Perugia (Italy)

    2015-07-21

    Carbon dioxide molecules can store and release tens of kcal/mol upon collisions, and such an energy transfer strongly influences the energy disposal and the chemical processes in gases under the extreme conditions typical of plasmas and hypersonic flows. Moreover, the energy transfer involving CO{sub 2} characterizes the global dynamics of the Earth-atmosphere system and the energy balance of other planetary atmospheres. Contemporary developments in kinetic modeling of gaseous mixtures are connected to progress in the description of the energy transfer, and, in particular, the attempts to include non-equilibrium effects require the consideration of state-specific energy exchanges. A systematic study of the state-to-state vibrational energy transfer in CO{sub 2} + CO{sub 2} collisions is the focus of the present work, aided by a theoretical and computational tool based on quasiclassical trajectory simulations and an accurate full-dimension model of the intermolecular interactions. In this model, the accuracy of the description of the intermolecular forces (which determine the probability of energy transfer in molecular collisions) is enhanced by explicit account of the specific effects of the distortion of the CO{sub 2} structure due to vibrations. Results show that these effects are important for the energy transfer probabilities. Moreover, the role of rotational and vibrational degrees of freedom is found to be dominant in the energy exchange, while the average contribution of translations, under the temperature and energy conditions considered, is negligible. Remarkably, the intramolecular energy transfer involves only stretching and bending, unless one of the colliding molecules has an initial symmetric stretching quantum number greater than a threshold value estimated to be equal to 7.

  19. Energy transfer upon collision of selectively excited CO2 molecules: State-to-state cross sections and probabilities for modeling of atmospheres and gaseous flows.

    Science.gov (United States)

    Lombardi, A; Faginas-Lago, N; Pacifici, L; Grossi, G

    2015-07-21

    Carbon dioxide molecules can store and release tens of kcal/mol upon collisions, and such an energy transfer strongly influences the energy disposal and the chemical processes in gases under the extreme conditions typical of plasmas and hypersonic flows. Moreover, the energy transfer involving CO2 characterizes the global dynamics of the Earth-atmosphere system and the energy balance of other planetary atmospheres. Contemporary developments in kinetic modeling of gaseous mixtures are connected to progress in the description of the energy transfer, and, in particular, the attempts to include non-equilibrium effects require the consideration of state-specific energy exchanges. A systematic study of the state-to-state vibrational energy transfer in CO2 + CO2 collisions is the focus of the present work, aided by a theoretical and computational tool based on quasiclassical trajectory simulations and an accurate full-dimension model of the intermolecular interactions. In this model, the accuracy of the description of the intermolecular forces (which determine the probability of energy transfer in molecular collisions) is enhanced by explicit account of the specific effects of the distortion of the CO2 structure due to vibrations. Results show that these effects are important for the energy transfer probabilities. Moreover, the role of rotational and vibrational degrees of freedom is found to be dominant in the energy exchange, while the average contribution of translations, under the temperature and energy conditions considered, is negligible. Remarkably, the intramolecular energy transfer involves only stretching and bending, unless one of the colliding molecules has an initial symmetric stretching quantum number greater than a threshold value estimated to be equal to 7.

  20. Probability of satellite collision

    Science.gov (United States)

    Mccarter, J. W.

    1972-01-01

    A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.
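    Although the report derives its probabilities from the actual orbital population, the scaling can be illustrated with the standard kinetic-theory (Poisson) estimate; the flux, cross-sectional area, and mission length below are hypothetical, not values from the report:

        import math

        def collision_probability(flux, cross_section, years):
            # Poisson estimate: flux in impacting objects per m^2 per year
            # at the chosen altitude/inclination, cross_section in m^2.
            return 1.0 - math.exp(-flux * cross_section * years)

        # e.g. 1e-5 objects/m^2/yr, a 1000 m^2 station, a 10-year mission
        p = collision_probability(1e-5, 1000.0, 10.0)   # ~0.095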

  1. Real analysis and probability

    CERN Document Server

    Ash, Robert B; Lukacs, E

    1972-01-01

    Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var

  2. Cross-Sectional Relationships of Physical Activity and Sedentary Behavior With Cognitive Function in Older Adults With Probable Mild Cognitive Impairment.

    Science.gov (United States)

    Falck, Ryan S; Landry, Glenn J; Best, John R; Davis, Jennifer C; Chiu, Bryan K; Liu-Ambrose, Teresa

    2017-10-01

    Mild cognitive impairment (MCI) represents a transition between normal cognitive aging and dementia and may represent a critical time frame for promoting cognitive health through behavioral strategies. Current evidence suggests that physical activity (PA) and sedentary behavior are important for cognition. However, it is unclear whether there are differences in PA and sedentary behavior between people with probable MCI and people without MCI or whether the relationships of PA and sedentary behavior with cognitive function differ by MCI status. The aims of this study were to examine differences in PA and sedentary behavior between people with probable MCI and people without MCI and whether associations of PA and sedentary behavior with cognitive function differed by MCI status. This was a cross-sectional study. Physical activity and sedentary behavior in adults dwelling in the community (N = 151; at least 55 years old) were measured using a wrist-worn actigraphy unit. The Montreal Cognitive Assessment was used to categorize participants with probable MCI (scores below the screening cutoff). Cognitive function was indexed using the Alzheimer Disease Assessment Scale-Cognitive-Plus (ADAS-Cog Plus). Physical activity and sedentary behavior were compared based on probable MCI status, and relationships of ADAS-Cog Plus with PA and sedentary behavior were examined by probable MCI status. Participants with probable MCI (n = 82) had lower PA and higher sedentary behavior than participants without MCI (n = 69). Higher PA and lower sedentary behavior were associated with better ADAS-Cog Plus performance in participants without MCI (β = -.022 and β = .012, respectively) but not in participants with probable MCI. The diagnosis of MCI was not confirmed by a physician; therefore, this study could not conclude how many of the participants categorized as having probable MCI would actually have been diagnosed with MCI by a physician. Participants with probable MCI were less active

  3. A Cellular Perspective on Brain Energy Metabolism and Functional Imaging

    KAUST Repository

    Magistretti, Pierre J.; Allaman, Igor

    2015-01-01

    The energy demands of the brain are high: they account for at least 20% of the body's energy consumption. Evolutionary studies indicate that the emergence of higher cognitive functions in humans is associated with an increased glucose utilization

  4. Probability tales

    CERN Document Server

    Grinstead, Charles M; Snell, J Laurie

    2011-01-01

    This book explores four real-world topics through the lens of probability theory. It can be used to supplement a standard text in probability or statistics. Most elementary textbooks present the basic theory and then illustrate the ideas with some neatly packaged examples. Here the authors assume that the reader has seen, or is learning, the basic theory from another book and concentrate in some depth on the following topics: streaks, the stock market, lotteries, and fingerprints. This extended format allows the authors to present multiple approaches to problems and to pursue promising side discussions in ways that would not be possible in a book constrained to cover a fixed set of topics. To keep the main narrative accessible, the authors have placed the more technical mathematical details in appendices. The appendices can be understood by someone who has taken one or two semesters of calculus.

  5. Probability theory

    CERN Document Server

    Dorogovtsev, A Ya; Skorokhod, A V; Silvestrov, D S; Skorokhod, A V

    1997-01-01

    This book of problems is intended for students in pure and applied mathematics. There are problems in traditional areas of probability theory and problems in the theory of stochastic processes, which has wide applications in the theory of automatic control, queuing and reliability theories, and in many other modern science and engineering fields. Answers to most of the problems are given, and the book provides hints and solutions for more complicated problems.

  6. Foundations of probability

    International Nuclear Information System (INIS)

    Fraassen, B.C. van

    1979-01-01

    The interpretation of probabilities in physical theories is considered, whether quantum or classical. The following points are discussed: 1) the functions P(μ, Q), in terms of which states and propositions can be represented, are classical (Kolmogoroff) probabilities, formally speaking; 2) these probabilities are generally interpreted as themselves conditional, and the conditions are mutually incompatible where the observables are maximal; and 3) testing of the theory typically takes the form of confronting the expectation values of observable Q calculated with probability measures P(μ, Q) for states μ; hence, of comparing the probabilities P(μ, Q)(E) with the frequencies of occurrence of the corresponding events. It seems that even the interpretation of quantum mechanics, in so far as it concerns what the theory says about the empirical (i.e. actual, observable) phenomena, deals with the confrontation of classical probability measures with observable frequencies. This confrontation is studied. (Auth./C.F.)

  7. Enhancement of biodiversity in energy farming: towards a functional approach

    International Nuclear Information System (INIS)

    Londo, M.; Dekker, J.

    1997-01-01

    When biomass is a substantial sustainable energy source, and special energy crops are grown on a large scale, land use and the environment of agriculture will be affected. Of these effects, biodiversity deserves special attention. The enhancement of biodiversity in energy farming via standard setting is the overall purpose of this project. In this study, the potential functionality of biodiversity in energy farming is proposed as a way of operationalising the rather abstract and broad concept of biodiversity. Functions of biodiversity are reviewed, and examples of functions are worked out, based on the current literature of nature in energy farming systems. (author)

  8. Functionally graded biomimetic energy absorption concept development for transportation systems.

    Science.gov (United States)

    2014-02-01

    Mechanics of a functionally graded cylinder subject to static or dynamic axial loading is considered, including a potential application as energy absorber. The mass density and stiffness are power functions of the radial coordinate as may be the case...

  9. Application of tests of goodness of fit in determining the probability density function for spacing of steel sets in tunnel support system

    Directory of Open Access Journals (Sweden)

    Farnoosh Basaligheh

    2015-12-01

    One of the conventional methods for temporary support of tunnels is to use steel sets with shotcrete. The nature of a temporary support system demands quick installation of its structures. As a result, the spacing between steel sets is not a fixed amount and can be considered a random variable. Hence, in the reliability analysis of these types of structures, the selection of an appropriate probability distribution function for the spacing of steel sets is essential. In the present paper, the distances between steel sets were collected from an under-construction tunnel and the collected data are used to suggest a proper Probability Distribution Function (PDF) for the spacing of steel sets. The tunnel has two different excavation sections. In this regard, different distribution functions were investigated and three common tests of goodness of fit were used to evaluate each function for each excavation section. Results from all three methods indicate that the Wakeby distribution function can be suggested as the proper PDF for the spacing between the steel sets. It is also noted that, although the probability distribution function for the two tunnel sections is the same, the parameters of the PDF for the individual sections differ.
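    The fit-then-test workflow described above is straightforward to reproduce; a sketch (Python/SciPy, with a hypothetical data file, one of the several goodness-of-fit tests, and substitute candidate distributions since SciPy does not ship a Wakeby law) looks like this:

        import numpy as np
        from scipy import stats

        spacings = np.loadtxt("spacings_m.txt")   # hypothetical field data

        # GEV, lognormal and gamma stand in for the candidate set; each is
        # fitted and then screened with a Kolmogorov-Smirnov test.
        for dist in (stats.genextreme, stats.lognorm, stats.gamma):
            params = dist.fit(spacings)
            d_stat, p_val = stats.kstest(spacings, dist.name, args=params)
            print(f"{dist.name}: KS D = {d_stat:.3f}, p = {p_val:.3f}")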

  10. Probability for statisticians

    CERN Document Server

    Shorack, Galen R

    2017-01-01

    This 2nd edition textbook offers a rigorous introduction to measure theoretic probability with particular attention to topics of interest to mathematical statisticians—a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For the teaching of probability theory to post graduate statistics students, this is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method either prior to or alternative to a characteristic function presentation. Additionally, there is considerable emphasis placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes coverage of censored data martingales. The text includes measure theoretic...

  11. Task 4.1: Development of a framework for creating a databank to generate probability density functions for process parameters

    International Nuclear Information System (INIS)

    Burgazzi, Luciano

    2011-01-01

    PSA analysis should be based on the best available data for the types of equipment and systems in the plant. In some cases very limited data may be available for evolutionary designs or new equipment, especially in the case of passive systems. It has been recognized that difficulties arise in addressing the uncertainties related to the physical phenomena and characterizing the parameters relevant to the passive system performance evaluation, given the unavailability of a consistent operational and experimental database. This lack of experimental evidence and validated data forces the analyst to resort to expert/engineering judgment to a large extent, thus making the results strongly dependent upon the expert elicitation process. This prompts the need for the development of a framework for constructing a database to generate probability distributions for the parameters influencing the system behaviour. The objective of the task is to develop a consistent framework aimed at creating probability distributions for the parameters relevant to the passive system performance evaluation. In order to achieve this goal, considerable experience and engineering judgement are also required to determine which existing data are most applicable to the new systems or which generic databases or models provide the best information for the system design. Eventually, in the absence of documented specific reliability data, documented expert judgement derived from a well-structured procedure could be used to envisage sound probability distributions for the parameters of interest.

  12. Kramers-Kronig transform for the surface energy loss function

    International Nuclear Information System (INIS)

    Tan, G.L.; DeNoyer, L.K.; French, R.H.; Guittet, M.J.; Gautier-Soyer, M.

    2005-01-01

    A new pair of Kramers-Kronig (KK) dispersion relationships for the transformation of the surface energy loss function Im[-1/(ε + 1)] has been proposed. The validity of the new surface KK transform is confirmed using both a Lorentz oscillator model and the surface energy loss functions determined from the experimental complex dielectric functions of SrTiO3 and tungsten metal. The interband transition strength spectra (J_cv) have been derived either directly from the original complex dielectric function or from the dielectric function obtained from the KK transform of the surface energy loss function. The original J_cv trace and the post-transform J_cv trace overlap for the three test cases, indicating that the new surface Kramers-Kronig dispersion relationship is valid for the surface energy loss function.
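    A crude numerical version of such a dispersion integral is easy to write down; the sketch below (Python) implements the generic KK form relating the real part to the imaginary part on a frequency grid, with the principal value handled by simply skipping the singular sample (the paper's surface-specific pair would replace this kernel):

        import numpy as np

        def kk_real_from_imag(omega, im_f):
            # Re f(w) = (2/pi) P.V. integral of w' Im f(w') / (w'^2 - w^2) dw'
            re_f = np.empty_like(im_f)
            for i, w in enumerate(omega):
                with np.errstate(divide="ignore", invalid="ignore"):
                    kern = omega * im_f / (omega ** 2 - w ** 2)
                kern[i] = 0.0                     # drop the singular point
                re_f[i] = (2.0 / np.pi) * np.trapz(kern, omega)
            return re_f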

  13. Liver Function Status in some Nigerian Children with Protein Energy ...

    African Journals Online (AJOL)

    Objective: To ascertain functional status of the liver in Nigeria Children with Protein energy malnutrition. Materials and Methods: Liver function tests were performed on a total of 88 children with protein energy malnutrition (PEM). These were compared with 22 apparently well-nourished children who served as controls.

  14. Energy expressions in density-functional theory using line integrals.

    NARCIS (Netherlands)

    van Leeuwen, R.; Baerends, E.J.

    1995-01-01

    In this paper we will address the question of how to obtain energies from functionals when only the functional derivative is given. It is shown that one can obtain explicit expressions for the exchange-correlation energy from approximate exchange-correlation potentials using line integrals along

  15. Introduction and application of non-stationary standardized precipitation index considering probability distribution function and return period

    Science.gov (United States)

    Park, Junehyeong; Sung, Jang Hyun; Lim, Yoon-Jin; Kang, Hyun-Suk

    2018-05-01

    The widely used meteorological drought index, the Standardized Precipitation Index (SPI), basically assumes stationarity, but recent changes in the climate have led to a need to review this hypothesis. In this study, a new non-stationary SPI that considers not only the modified probability distribution parameters but also the return period under the non-stationary process is proposed. The results were evaluated for two severe drought cases during the last 10 years in South Korea. SPIs that adopted the non-stationary hypothesis estimated a lower drought severity than the stationary SPI, despite these two past droughts being recognized as significantly severe. This may be because the variances of summer and autumn precipitation become larger over time, which makes the probability distribution wider than before. This implies that drought expressions by a statistical index such as the SPI can be distorted by the stationarity assumption, and a cautious approach is needed when deciding drought levels under a changing climate.
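    For reference, the stationary SPI that the paper generalizes can be computed in a few lines (Python/SciPy; gamma marginals and strictly positive accumulations are assumed here, and a non-stationary variant would let the fitted parameters drift with time):

        import numpy as np
        from scipy import stats

        def spi(accum):
            # Fit a gamma law to the precipitation accumulations and push
            # its CDF through the standard-normal quantile function.
            accum = np.asarray(accum, dtype=float)
            shape, loc, scale = stats.gamma.fit(accum, floc=0.0)
            cdf = stats.gamma.cdf(accum, shape, loc=loc, scale=scale)
            return stats.norm.ppf(np.clip(cdf, 1e-6, 1.0 - 1e-6))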

  16. Allowable CO2 concentrations under the United Nations Framework Convention on Climate Change as a function of the climate sensitivity probability distribution function

    International Nuclear Information System (INIS)

    Harvey, L D Danny

    2007-01-01

    Article 2 of the United Nations Framework Convention on Climate Change (UNFCCC) calls for stabilization of greenhouse gas (GHG) concentrations at levels that prevent dangerous anthropogenic interference (DAI) in the climate system. Until recently, the consensus viewpoint was that the climate sensitivity (the global mean equilibrium warming for a doubling of the atmospheric CO2 concentration) was 'likely' to fall between 1.5 and 4.5 K. However, a number of recent studies have generated probability distribution functions (pdfs) for climate sensitivity with the 95th percentile of the expected climate sensitivity as large as 10 K, while some studies suggest that the climate sensitivity is likely to fall in the lower half of the long-standing 1.5-4.5 K range. This paper examines the allowable CO2 concentration as a function of the 95th percentile of the climate sensitivity pdf (ranging from 2 to 8 K) and for the following additional assumptions: (i) the 50th percentile of the pdf of the minimum sustained global mean warming that causes unacceptable harm equal to 1.5 or 2.5 K; and (ii) 1%, 5% or 10% allowable risks of unacceptable harm. For a 1% risk tolerance and the more stringent harm-threshold pdf, the allowable CO2 concentration ranges from 323 to 268 ppmv as the 95th percentile of the climate sensitivity pdf increases from 2 to 8 K, while for a 10% risk tolerance and the less stringent harm-threshold pdf, the allowable CO2 concentration ranges from 531 to 305 ppmv. In both cases it is assumed that non-CO2 GHG radiative forcing can be reduced to half of its present value; otherwise, the allowable CO2 concentration is even smaller. Accounting for the fact that the CO2 concentration will gradually fall if emissions are reduced to zero, and that peak realized warming will then be less than the peak equilibrium warming (related to peak radiative forcing), allows the CO2 concentration to peak at 10-40 ppmv higher than the limiting values given above for a climate
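    The core calculation, inverting a risk tolerance through the sensitivity pdf, can be sketched as follows (Python; equilibrium warming S·log2(C/C0) only, ignoring the non-CO2 forcing and transient effects discussed above, with a hypothetical lognormal sensitivity pdf rather than the paper's):

        import numpy as np

        def allowable_co2(sens_samples, t_harm, risk, c0=280.0):
            # Largest C (ppmv) with P(S * log2(C/c0) > t_harm) <= risk.
            s_hi = np.quantile(sens_samples, 1.0 - risk)
            return c0 * 2.0 ** (t_harm / s_hi)

        rng = np.random.default_rng(0)
        s = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=100_000)
        print(allowable_co2(s, t_harm=2.5, risk=0.01))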

  17. Analytical potential energy function for the Br + H2 system

    International Nuclear Information System (INIS)

    Kurosaki, Yuzuru

    2001-01-01

    Analytical functions with a many-body expansion for the ground- and first-excited-state potential energy surfaces of the Br + H2 system are newly presented in this work. These functions describe the abstraction and exchange reactions qualitatively well, although it has been found that the function for the ground-state potential surface is still quantitatively unsatisfactory. (author)

  18. Structure and potential energy function for the Pu₂²⁺ ion

    International Nuclear Information System (INIS)

    Li Quan; Huang Hui; Li Daohua

    2003-01-01

    A theoretical study of Pu₂²⁺ using a density functional method shows that the molecular ion is metastable. The ground electronic state of Pu₂²⁺ is ¹³Σ_g; the analytic potential energy function is in good agreement with the Z-W function, and the force constants and spectroscopic data have been worked out for the first time.

  19. Green's function method with energy-independent vertex functions

    International Nuclear Information System (INIS)

    Tsay Tzeng, S.Y.; Kuo, T.T.; Tzeng, Y.; Geyer, H.B.; Navratil, P.

    1996-01-01

    In conventional Green's function methods the vertex function Γ is generally energy dependent. However, a model-space Green's function method where the vertex function is manifestly energy independent can be formulated using energy-independent effective interaction theories based on folded diagrams and/or similarity transformations. This is discussed in general and then illustrated for a 1p1h model-space Green's function applied to a solvable Lipkin many-fermion model. The poles of the conventional Green's function are obtained by solving a self-consistent Dyson equation and model space calculations may lead to unphysical poles. For the energy-independent model-space Green's function only the physical poles of the model problem are reproduced and are in satisfactory agreement with the exact excitation energies. copyright 1996 The American Physical Society

  20. Probability distribution function values in mobile phones

    Directory of Open Access Journals (Sweden)

    Luis Vicente Chamorro Marcillllo

    2013-06-01

    Engineering, within its academic and application forms, as well as any formal research work, requires the use of statistics, and every inferential statistical analysis requires values of probability distribution functions that are generally available in tables. The management of those tables presents physical problems (cumbersome transport and consultation) and operational ones (incomplete lists and limited accuracy). The study "Probability distribution function values in mobile phones" determined, through a needs survey applied to students involved in statistics studies at Universidad de Nariño, that the best-known and most-used values correspond to the Chi-Square, Binomial, Student's t, and Standard Normal distributions. Similarly, it showed users' interest in having the values in question available in an alternative medium that corrects, at least in part, the problems presented by "the famous tables". To contribute to the solution, we built software that allows the values of the most commonly used probability distribution functions to be obtained immediately and dynamically on mobile phones.

  1. Accurate potential energy curves, spectroscopic parameters, transition dipole moments, and transition probabilities of 21 low-lying states of the CO+ cation

    Science.gov (United States)

    Xing, Wei; Shi, Deheng; Zhang, Jicai; Sun, Jinfeng; Zhu, Zunlue

    2018-05-01

    This paper calculates the potential energy curves of 21 Λ-S and 42 Ω states, which arise from the first two dissociation asymptotes of the CO+ cation. The calculations are conducted using the complete active space self-consistent field method, which is followed by the valence internally contracted multireference configuration interaction approach with the Davidson correction. To improve the reliability and accuracy of the potential energy curves, core-valence correlation and scalar relativistic corrections, as well as the extrapolation of potential energies to the complete basis set limit are taken into account. The spectroscopic parameters and vibrational levels are determined. The spin-orbit coupling effect on the spectroscopic parameters and vibrational levels is evaluated. To better study the transition probabilities, the transition dipole moments are computed. The Franck-Condon factors and Einstein coefficients of some emissions are calculated. The radiative lifetimes are determined for a number of vibrational levels of several states. The transitions between different Λ-S states are evaluated. Spectroscopic routines for observing these states are proposed. The spectroscopic parameters, vibrational levels, transition dipole moments, and transition probabilities reported in this paper can be considered to be very reliable and can be used as guidelines for detecting these states in an appropriate spectroscopy experiment, especially for the states that were very difficult to observe or were not detected in previous experiments.

  2. Dynamic SEP event probability forecasts

    Science.gov (United States)

    Kahler, S. W.; Ling, A.

    2015-10-01

    The forecasting of solar energetic particle (SEP) event probabilities at Earth has been based primarily on the estimates of magnetic free energy in active regions and on the observations of peak fluxes and fluences of large (≥ M2) solar X-ray flares. These forecasts are typically issued for the next 24 h or with no definite expiration time, which can be deficient for time-critical operations when no SEP event appears following a large X-ray flare. It is therefore important to decrease the event probability forecast with time as a SEP event fails to appear. We use the NOAA listing of major (≥10 pfu) SEP events from 1976 to 2014 to plot the delay times from X-ray peaks to SEP threshold onsets as a function of solar source longitude. An algorithm is derived to decrease the SEP event probabilities with time when no event is observed to reach the 10 pfu threshold. In addition, we use known SEP event size distributions to modify probability forecasts when SEP intensity increases occur below the 10 pfu event threshold. An algorithm to provide a dynamic SEP event forecast, Pd, for both situations of SEP intensities following a large flare is derived.
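    The decay of the forecast probability with elapsed time can be illustrated with a generic Bayesian update built on the empirical onset-delay distribution (Python; the delay list is hypothetical, and the paper's actual algorithm is derived from its delay-versus-longitude data, not this form):

        import numpy as np

        def dynamic_sep_probability(p0, delays_hr, t_hr):
            # Probability that an event will still occur, given that no
            # onset has been seen by t_hr; f is the empirical delay CDF.
            f = np.mean(np.asarray(delays_hr) <= t_hr)
            return p0 * (1.0 - f) / (p0 * (1.0 - f) + (1.0 - p0))

        delays = [2, 3, 4, 6, 8, 12, 18, 24, 36]  # hypothetical delays (hours)
        print(dynamic_sep_probability(0.4, delays, 12.0))   # ~0.18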

  3. Measuring structure functions at SSC energies

    International Nuclear Information System (INIS)

    Morfin, J.G.; Owens, J.F.

    1985-01-01

    Topics discussed include measuring Λ, tests of QCD using hard scattering processes, and measuring parton distributions. In each case, any opportunities and advantages afforded by the unique features of the SSC are emphasized. The working group on structure functions was charged with investigating two specific questions: (1) How well are the various parton distributions known in the kinematic region relevant to calculations for the SSC? (2) What new information can be learned about parton distributions at the SSC? Especially for this working group, the advantages of having a fixed-target facility at the SSC for the measurement of the parton distributions with multi-TeV leptons were to be examined. 15 references

   4. Probabilities and energies to obtain the counting efficiency of electron-capture nuclides. KLMN model

    Energy Technology Data Exchange (ETDEWEB)

    Galiano, G.; Grau, A.

    1994-07-01

    An intelligent computer program has been developed to obtain the mathematical formulae needed to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X-ray processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of M-electron capture on the counting efficiency when the atomic number of the nuclide is high. (Author)

  5. Zeta-function approach to Casimir energy with singular potentials

    International Nuclear Information System (INIS)

    Khusnutdinov, Nail R.

    2006-01-01

    In the framework of the zeta-function approach, the Casimir energy for three simple model systems (a single delta potential, a step-function potential, and three delta potentials) is analyzed. It is shown that the energy contains contributions which are peculiar to the potentials. It is suggested that the energy be renormalized using the condition that the energy of infinitely separated potentials is zero, which corresponds to subtracting all terms of the asymptotic expansion of the zeta-function. The energy obtained in this way obeys all physically reasonable conditions. It is finite in the Dirichlet limit, and it may be attractive or repulsive depending on the strength of the potential. The effective action is calculated, and it is shown that a surface contribution appears. The renormalization of the effective action is discussed.

  6. Probability-Weighted LMP and RCP for Day-Ahead Energy Markets using Stochastic Security-Constrained Unit Commitment: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ela, E.; O'Malley, M.

    2012-06-01

    Variable renewable generation resources are increasing their penetration on electric power grids. These resources have weather-driven fuel sources that vary on different time scales and are difficult to predict in advance. These characteristics create challenges for system operators managing the load balance on different timescales. Research is looking into new operational techniques and strategies that show great promise for facilitating greater integration of variable resources. Stochastic Security-Constrained Unit Commitment models are one strategy that has been discussed in the literature and shows great benefit. However, they are rarely used outside the research community due to their computational limits and difficulties integrating with electricity markets. This paper discusses how stochastic unit commitment can be integrated into day-ahead energy markets and, in particular, what pricing schemes should be used to ensure an efficient and fair market.
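    Whatever pricing scheme is adopted, the probability weighting itself is elementary; a minimal sketch (Python, with hypothetical scenario prices and probabilities) of an expectation over stochastic-SCUC scenarios:

        def probability_weighted_price(prices, probs):
            # Expected day-ahead price over the scenario set.
            assert abs(sum(probs) - 1.0) < 1e-9
            return sum(p * lmp for p, lmp in zip(probs, prices))

        # three wind scenarios with scenario-specific LMPs ($/MWh)
        print(probability_weighted_price([32.0, 41.5, 78.0], [0.5, 0.35, 0.15]))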

  7. Determination of diffuseness parameter to estimate the survival probability of projectile using Woods-Saxon formula at intermediate beam energies

    International Nuclear Information System (INIS)

    Kumar, Rajiv; Goyal, Monika; Roshni; Singh, Pradeep; Kharab, Rajesh

    2017-01-01

    In the present work, the S-matrix has been evaluated by using the simple Woods-Saxon formula as well as the realistic expression for a number of projectiles varying from 26Ne to 76Ge at intermediate incident beam energies ranging from 30 MeV/A to 300 MeV/A. The target is 197Au in every case. The realistic S-matrix is compared with that obtained by using the simple Woods-Saxon formula. The motive of this comparison is to fix the value of the otherwise free diffuseness parameter Δ, so that the involved evaluation of the realistic S-matrix can be replaced by the simple Woods-Saxon formula.
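    The Woods-Saxon form referred to here is a one-line parametrization of the S-matrix modulus in impact parameter; a sketch follows (Python; the radius, the diffuseness value, and the identification of |S|^2 with the survival probability are illustrative assumptions, not the paper's fitted values):

        import math

        def survival_probability(b, r_s, delta):
            # |S(b)| of Woods-Saxon shape: strongly absorbed for b << r_s,
            # transparent for b >> r_s; delta is the diffuseness parameter.
            s_mod = 1.0 / (1.0 + math.exp((r_s - b) / delta))
            return s_mod ** 2

        # e.g. r_s = 10.5 fm, delta = 0.6 fm, a grazing trajectory
        print(survival_probability(12.0, 10.5, 0.6))   # ~0.85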

  8. A Cellular Perspective on Brain Energy Metabolism and Functional Imaging

    KAUST Repository

    Magistretti, Pierre J.

    2015-05-01

    The energy demands of the brain are high: they account for at least 20% of the body's energy consumption. Evolutionary studies indicate that the emergence of higher cognitive functions in humans is associated with an increased glucose utilization and expression of energy metabolism genes. Functional brain imaging techniques such as fMRI and PET, which are widely used in human neuroscience studies, detect signals that monitor energy delivery and use in register with neuronal activity. Recent technological advances in metabolic studies with cellular resolution have afforded decisive insights into the understanding of the cellular and molecular bases of the coupling between neuronal activity and energy metabolism, and point to a key role of neuron-astrocyte metabolic interactions. This article reviews some of the most salient features emerging from recent studies and aims at providing an integration of brain energy metabolism across resolution scales. © 2015 Elsevier Inc.

  9. An enviro-economic function for assessing energy resources for district energy systems

    International Nuclear Information System (INIS)

    Rezaie, Behnaz; Reddy, Bale V.; Rosen, Marc A.

    2014-01-01

    District energy (DE) systems provide an important means of mitigating greenhouse gas emissions and the significant related concerns associated with global climate change. DE systems can use fossil fuels, renewable energy and waste heat as energy sources, and facilitate intelligent integration of energy systems. In this study, an enviro-economic function is developed for assessing various energy sources for a district energy system. The DE system is assessed for the considered energy resources by considering two main factors: CO2 emissions and economics. Using renewable energy resources and associated technologies as the energy suppliers for a DE system yields environmental benefits, which can lead to financial advantages through such instruments as tax breaks, while fossil fuels are increasingly penalized by a carbon tax. Considering these factors as well as the financial value of the technology, an analysis approach is developed for energy suppliers of the DE system. In addition, the proposed approach is modified for the case when thermal energy storage is integrated into a DE system. - Highlights: • Developed a function to assess various energy sources for a district energy system. • Considered CO2 emissions and economics as two main factors. • Applied renewable energy resource technologies as the suppliers for a DE system. • Environmental benefits can lead to financial benefits through tax breaks. • Modified the enviro-economic function for TES integrated into a DE system

  10. Kinetic-energy density functional: Atoms and shell structure

    International Nuclear Information System (INIS)

    Garcia-Gonzalez, P.; Alvarellos, J.E.; Chacon, E.

    1996-01-01

    We present a nonlocal kinetic-energy functional which includes an anisotropic average of the density through a symmetrization procedure. This functional allows a better description of the nonlocal effects of the electron system. The main consequence of the symmetrization is the appearance of a clear shell structure in the atomic density profiles, obtained after the minimization of the total energy. Although previous results with some of the nonlocal kinetic functionals have given incipient structures for heavy atoms, only our functional shows a clear shell structure for most of the atoms. The atomic total energies have a good agreement with the exact calculations. Discussion of the chemical potential and the first ionization potential in atoms is included. The functional is also extended to spin-polarized systems. copyright 1996 The American Physical Society

  11. Surface energy and work function of elemental metals

    DEFF Research Database (Denmark)

    Skriver, Hans Lomholt; Rosengaard, N. M.

    1992-01-01

    We have performed an ab initio study of the surface energy and the work function for six close-packed surfaces of 40 elemental metals by means of a Green's-function technique, based on the linear-muffin-tin-orbitals method within the tight-binding and atomic-sphere approximations. The results are in excellent agreement with a recent full-potential, all-electron, slab-supercell calculation of surface energies and work functions for the 4d metals. The present calculations explain the trend exhibited by the surface energies of the alkali, alkaline earth, divalent rare-earth, 3d, 4d, and 5d transition and noble metals, as derived from the surface tension of liquid metals. In addition, they give work functions which agree with the limited experimental data obtained from single crystals to within 15%, and explain the smooth behavior of the experimental work functions of polycrystalline samples.

  12. Approach to kinetic energy density functionals: Nonlocal terms with the structure of the von Weizsaecker functional

    International Nuclear Information System (INIS)

    Garcia-Aldea, David; Alvarellos, J. E.

    2008-01-01

    We propose a kinetic energy density functional scheme with nonlocal terms based on the von Weizsaecker functional, instead of the more traditional approach where the nonlocal terms have the structure of the Thomas-Fermi functional. The proposed functionals recover the exact kinetic energy and reproduce the linear response function of homogeneous electron systems. In order to assess their quality, we have tested the total kinetic energies as well as the kinetic energy density for atoms. The results show that these nonlocal functionals give as good results as the most sophisticated functionals in the literature. The proposed scheme for constructing the functionals represents a step forward in the field of fully nonlocal kinetic energy functionals, because they are capable of giving better local behavior than the semilocal functionals while yielding accurate results for total kinetic energies. Moreover, the functionals enjoy the possibility of being evaluated as a single integral in momentum space if an adequate reference density is defined, and then quasilinear scaling for the computational cost can be achieved.
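    For readers unfamiliar with the von Weizsaecker term that anchors these functionals, its direct numerical evaluation on a grid is simple; the sketch below (Python) uses a toy one-dimensional density in atomic units, whereas the functionals above are of course defined in three dimensions:

        import numpy as np

        def von_weizsaecker(rho, dx):
            # T_vW[rho] = (1/8) * integral of |grad rho|^2 / rho dx (a.u.)
            grad = np.gradient(rho, dx)
            return 0.125 * np.trapz(grad ** 2 / rho, dx=dx)

        x = np.linspace(-10.0, 10.0, 2001)
        rho = np.exp(-x ** 2)                 # toy 1-D density
        print(von_weizsaecker(rho, x[1] - x[0]))   # ~ sqrt(pi)/4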

  13. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions: Sample Space and Events; Probabilities; Counting Techniques. Independence and Conditional Probability: Independence; Conditioning; The Borel-Cantelli Theorem. Discrete Random Variables: Random Variables and Vectors; Expected Value; Variance and Other Moments, Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables, The Law of Large Numbers; Conditional Expectation. Generating Functions, Branching Processes, Random Walk Revisited: Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk. Markov Chains: Definitions and Examples, Probability Distributions of Markov Chains; The First Step Analysis, Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity. Continuous Random Variables: Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case, Simulation; Distribution F...

  14. Dependence of the giant dipole strength function on excitation energy

    International Nuclear Information System (INIS)

    Draper, J.E.; Newton, J.O.; Sobotka, L.G.; Lindenberger, H.; Wozniak, G.J.; Moretto, L.G.; Stephens, F.S.; Diamond, R.M.; McDonald, R.J.

    1982-01-01

    Spectra of γ rays associated with deep-inelastic products from the 1150-MeV 136Xe + 181Ta reaction have been measured. The yield of 10-20-MeV γ rays initially increases rapidly with the excitation energy of the products and then more slowly for excitation energies in excess of 120 MeV. Statistical-model calculations with ground-state values of the giant dipole strength function fail to reproduce the shape of the measured γ-ray spectra. This suggests a dependence of the giant dipole strength function on excitation energy.

  15. A new probability density function for spatial distribution of soil water storage capacity leads to SCS curve number method

    OpenAIRE

    Wang, Dingbao

    2018-01-01

    Following the Budyko framework, soil wetting ratio (the ratio between soil wetting and precipitation) as a function of soil storage index (the ratio between soil wetting capacity and precipitation) is derived from the SCS-CN method and the VIC type of model. For the SCS-CN method, soil wetting ratio approaches one when soil storage index approaches infinity, due to the limitation of the SCS-CN method in which the initial soil moisture condition is not explicitly represented. However, for the ...

  16. Construction of energy loss function for low-energy electrons in helium

    Energy Technology Data Exchange (ETDEWEB)

    Dayashankar, [Bhabha Atomic Research Centre, Bombay (India). Div. of Radiation Protection

    1976-02-01

    The energy loss function for electrons in the energy range from 50 eV to 1 keV in helium gas has been constructed by considering separately the energy loss in overcoming the ionization threshold, the loss manifested as kinetic energy of secondary electrons and the loss in the discrete state excitations. This has been done by utilizing recent measurements of Opal et al. on the energy spectrum of secondary electrons and incorporating the experimental data on cross sections for twenty-four excited states. The present results of the energy loss function are in good agreement with the Bethe formula for energies above 500 eV. For lower energies, where the Bethe formula is not applicable, the present results should be particularly useful.

  17. On the low-energy behavior of the Adler function

    International Nuclear Information System (INIS)

    Nesterenko, A.V.

    2009-01-01

    The infrared behavior of the Adler function is examined by making use of a recently derived integral representation for the latter. The obtained result for the Adler function agrees with its experimental prediction in the entire energy range. The inclusive τ lepton decay is studied in the framework of the developed approach

  18. On approximation and energy estimates for delta 6-convex functions.

    Science.gov (United States)

    Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid

    2018-01-01

    The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted L²-norm.

  19. On approximation and energy estimates for delta 6-convex functions

    Directory of Open Access Journals (Sweden)

    Muhammad Shoaib Saleem

    2018-02-01

    The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted L²-norm.

  20. COULN, a program for evaluating negative energy Coulomb functions

    International Nuclear Information System (INIS)

    Noble, C.J.; Thompson, I.J.

    1984-01-01

    Program COULN calculates exponentially decaying Whittaker functions W_{K,μ}(z), corresponding to negative-energy Coulomb functions. The method employed is most appropriate for parameter ranges which commonly occur in atomic and molecular asymptotic scattering problems using a close-coupling approximation in the presence of closed channels. (orig.)
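    Today such values can be cross-checked against an arbitrary-precision library; a one-line comparison (Python/mpmath, with arbitrary illustrative parameter values):

        from mpmath import mp, whitw

        mp.dps = 15
        # Whittaker W_{k,mu}(z): the exponentially decaying closed-channel
        # solution that COULN evaluates; here k = -1.5, mu = 0.5, z = 8.
        print(whitw(-1.5, 0.5, 8.0))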

  1. Free energy distribution function of a random Ising ferromagnet

    International Nuclear Information System (INIS)

    Dotsenko, Victor; Klumov, Boris

    2012-01-01

    We study the free energy distribution function of a weakly disordered Ising ferromagnet in terms of the D-dimensional random temperature Ginzburg–Landau Hamiltonian. It is shown that besides the usual Gaussian 'body' this distribution function exhibits non-Gaussian tails both in the paramagnetic and in the ferromagnetic phases. Explicit asymptotic expressions for these tails are derived. It is demonstrated that the tails are strongly asymmetric: the left tail (for large negative values of the free energy) is much slower than the right one (for large positive values of the free energy). It is argued that at the critical point the free energy of the random Ising ferromagnet in dimensions D < 4 is described by a non-trivial universal distribution function which is non-self-averaging

  2. Inferring Parametric Energy Consumption Functions at Different Software Levels

    DEFF Research Database (Denmark)

    Liqat, Umer; Georgiou, Kyriakos; Kerrison, Steve

    2016-01-01

    The static estimation of the energy consumed by program executions is an important challenge, which has applications in program optimization and verification, and is instrumental in energy-aware software development. Our objective is to estimate such energy consumption in the form of functions on the input data sizes of programs. We have developed a tool for experimentation with static analysis which infers such energy functions at two levels, the instruction set architecture (ISA) and the intermediate code (LLVM IR) levels, and reflects them upwards to the higher source code level. This required the development of a translation from LLVM IR to an intermediate representation and its integration with existing components, a translation from ISA to the same representation, a resource analyzer, an ISA-level energy model, and a mapping from this model to LLVM IR. The approach has been applied to programs

  3. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    Science.gov (United States)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  4. Escape probabilities for fluorescent x-rays

    International Nuclear Information System (INIS)

    Dance, D.R.; Day, G.J.

    1985-01-01

    Computation of the energy absorption efficiency of an x-ray photon detector involves consideration of the histories of the secondary particles produced in any initial or secondary interaction which may occur within the detector. In particular, the K or higher shell fluorescent x-rays which may be emitted following a photoelectric interaction can carry away a large fraction of the energy of the incident photon, especially if this energy is just above an absorption edge. The effects of such photons cannot be ignored and a correction term, depending upon the probability that the fluorescent x-rays will escape from the detector, must be applied to the energy absorption efficiency. For detectors such as x-ray intensifying screens, it has been usual to calculate this probability by numerical integration. In this note analytic expressions are derived for the escape probability of fluorescent photons from planar detectors in terms of exponential integral functions. Rational approximations for these functions are readily available and these analytic expressions therefore facilitate the computation of photon absorption efficiencies. A table is presented which should obviate the need for calculating the escape probability for most cases of interest. (author)
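    The flavour of such expressions is captured by the classic slab result for an isotropically emitted photon, which involves the second exponential integral; a sketch follows (Python/SciPy, with a hypothetical attenuation coefficient and depth, and front-face escape only):

        from scipy.special import expn

        def front_escape_probability(mu_f, depth):
            # Photon created isotropically at 'depth' below the front face:
            # escape chance is E2(t)/2 with optical depth t = mu_f * depth.
            return 0.5 * expn(2, mu_f * depth)

        print(front_escape_probability(mu_f=50.0, depth=0.001))   # ~0.41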

  5. Total reflection coefficients of low-energy photons presented as universal functions

    Directory of Open Access Journals (Sweden)

    Ljubenov Vladan

    2010-01-01

    The possibility of expressing the total particle and energy reflection coefficients of low-energy photons in the form of universal functions valid for different shielding materials is investigated in this paper. The analysis is based on the results of Monte Carlo simulations of photon reflection using the MCNP, FOTELP, and PENELOPE codes. Normal incidence of a narrow monoenergetic photon beam of unit intensity and of initial energies from 20 keV up to 100 keV is considered, and the particle and energy reflection coefficients from plane homogeneous targets of water, aluminum, and iron are determined and compared. The representations of the albedo coefficients as functions of the initial photon energy, of the probability of large-angle photon scattering, and of the mean number of photon scatterings are examined. It is found that only the rescaled albedo coefficients expressed as functions of the mean number of photon scatterings take the form of universal functions, and these functions are determined by applying the least squares method.

  6. Functionalization of graphene for efficient energy conversion and storage.

    Science.gov (United States)

    Dai, Liming

    2013-01-15

    As global energy consumption accelerates at an alarming rate, the development of clean and renewable energy conversion and storage systems has become more important than ever. Although the efficiency of energy conversion and storage devices depends on a variety of factors, their overall performance strongly relies on the structure and properties of the component materials. Nanotechnology has opened up new frontiers in materials science and engineering to meet this challenge by creating new materials, particularly carbon nanomaterials, for efficient energy conversion and storage. As a building block for carbon materials of all other dimensionalities (such as the 0D buckyball, 1D nanotube, and 3D graphite), the two-dimensional (2D) single atomic carbon sheet of graphene has emerged as an attractive candidate for energy applications due to its unique structure and properties. Like other materials, however, a graphene-based material that possesses desirable bulk properties rarely features the surface characteristics required for certain specific applications. Therefore, surface functionalization is essential, and researchers have devised various covalent and noncovalent chemistries for making graphene materials with the bulk and surface properties needed for efficient energy conversion and storage. In this Account, I summarize some of our new ideas and strategies for the controlled functionalization of graphene for the development of efficient energy conversion and storage devices, such as solar cells, fuel cells, supercapacitors, and batteries. The dangling bonds at the edge of graphene can be used for the covalent attachment of various chemical moieties while the graphene basal plane can be modified via either covalent or noncovalent functionalization. The asymmetric functionalization of the two opposite surfaces of individual graphene sheets with different moieties can lead to the self-assembly of graphene sheets into hierarchically structured materials. Judicious ...

  7. Evaluation of NEB energy markets and supply monitoring function

    International Nuclear Information System (INIS)

    2003-09-01

    Canada's National Energy Board regulates the exports of oil, natural gas, natural gas liquids and electricity, as well as the construction, operation and tolls of international and interprovincial pipelines and power lines. It also monitors energy supply and market developments in Canada. The Board commissioned an evaluation of the monitoring function to ensure the effectiveness and efficiency of the monitoring activities, to identify gaps in these activities and to propose recommendations. The objectives of the monitoring mandate are to provide Canadians with information regarding Canadian energy markets, energy supply and demand, and to ensure that exports of natural gas, oil, natural gas liquids and electricity do not occur to the detriment of Canadian energy users. The Board ensures that Canadians have access to domestically produced energy on terms that are as favourable as those available to export buyers. The following recommendations were proposed to improve the monitoring of energy markets and supply: (1) increase focus and analysis on the functioning of gas (first priority) and other commodity markets, (2) increase emphasis on forward-looking market analysis and issue identification, (3) demonstrate continued leadership by encouraging public dialogue on a wide range of energy market issues, (4) improve communication and increase visibility of the NEB within the stakeholder community, (5) build on knowledge management and organizational learning capabilities, (6) improve communication and sharing of information between the Applications and Commodities Business Units, and (7) enhance organizational effectiveness of the Commodities Business Unit. figs

  8. Expected utility with lower probabilities

    DEFF Research Database (Denmark)

    Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

    1994-01-01

    An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...

  9. Energy functionals for medical image segmentation: choices and consequences

    OpenAIRE

    McIntosh, Christopher

    2011-01-01

    Medical imaging continues to permeate the practice of medicine, but automated yet accurate segmentation and labeling of anatomical structures continues to be a major obstacle to computerized medical image analysis. Though there exist numerous approaches for medical image segmentation, one in particular has gained increasing popularity: energy minimization-based techniques, and the large set of methods encompassed therein. With these techniques an energy function must be chosen, segmentations...

  10. Energy absorption behaviors of nanoporous materials functionalized (NMF) liquids

    OpenAIRE

    Kim, Tae Wan

    2011-01-01

    For many decades, people have been actively investigating high-performance energy absorption materials, so as to develop lightweight and small-sized protective and damping devices, such as blast mitigation helmets, vehicle armors, etc. Recently, the high energy absorption efficiency of nanoporous materials functionalized (NMF) liquids has drawn considerable attention. A NMF liquid is usually a liquid suspension of nanoporous particles with large nanopore surface areas (100 - 2,000 m²/g). The ...

  11. The perception of probability.

    Science.gov (United States)

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  12. Range and energy functions of interest in neutron dosimetry

    International Nuclear Information System (INIS)

    Bhatia, D.P.; Nagarajan, P.S.

    1978-01-01

    This report documents the energy and range functions generated and used in fast neutron interface dosimetry studies. The basic data of stopping power employed are the most recent. The present report covers a number of media, mainly air, oxygen, nitrogen, polythene, graphite, bone and tissue, and a number of charged particles, namely protons, alphas, 9Be, 11B, 12C, 13C, 14N and 16O. These functions would be useful for generation of energy and range values for any of the above particles in any of the above media within ±1% in any dosimetric calculations. (author)

  13. Kinetic-energy functionals studied by surface calculations

    DEFF Research Database (Denmark)

    Vitos, Levente; Skriver, Hans Lomholt; Kollár, J.

    1998-01-01

    The self-consistent jellium model of metal surfaces is used to study the accuracy of a number of semilocal kinetic-energy functionals for independent particles. It is shown that the poor accuracy exhibited by the gradient expansion approximation and most of the semiempirical functionals in the low-density, high-gradient limit may be substantially improved by including locally a von Weizsäcker term. Based on this, we propose a simple one-parameter Padé approximation, which reproduces the exact Kohn-Sham surface kinetic energy over the entire range of metallic densities.

  14. Time-dependent fracture probability of bilayer, lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation

    Science.gov (United States)

    Anusavice, Kenneth J.; Jadaan, Osama M.; Esquivel–Upshaw, Josephine

    2013-01-01

    Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. Objective: The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Materials and methods: Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2), and ceramic veneer (Empress 2 Veneer Ceramic). Results: Predicted fracture probabilities (Pf) for centrally-loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). Conclusion: CARES/Life results support the proposed crown design and load orientation hypotheses. Significance: The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an optimal approach for optimizing fixed dental prosthesis designs produced from dental ceramics and for predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures. PMID:24060349

  15. Time-dependent fracture probability of bilayer, lithium-disilicate-based, glass-ceramic, molar crowns as a function of core/veneer thickness ratio and load orientation.

    Science.gov (United States)

    Anusavice, Kenneth J; Jadaan, Osama M; Esquivel-Upshaw, Josephine F

    2013-11-01

    Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2), and ceramic veneer (Empress 2 Veneer Ceramic). Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). CARES/Life results support the proposed crown design and load orientation hypotheses. The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an optimal approach for optimizing fixed dental prosthesis designs produced from dental ceramics and for predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures. Copyright © 2013 Academy of Dental Materials. All rights reserved.
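
    The time-dependent analyses above rely on CARES/Life; their time-independent building block, the two-parameter Weibull fracture probability, can be sketched as follows. The Weibull modulus and characteristic strength are assumed values for illustration, not the Empress 2 fatigue data.

      import numpy as np

      def weibull_fracture_probability(stress, sigma_0, m):
          # Two-parameter Weibull: Pf = 1 - exp(-(stress / sigma_0)**m)
          return 1.0 - np.exp(-(stress / sigma_0) ** m)

      m, sigma_0 = 10.0, 400.0  # assumed modulus and characteristic strength (MPa)
      for s in (150.0, 200.0, 250.0):
          print(f"stress {s:5.0f} MPa -> Pf = {weibull_fracture_probability(s, sigma_0, m):.2e}")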

  16. Exchanging and Storing Energy. Reducing Energy Demand through Heat Exchange between Functions and Temporary Storage

    Energy Technology Data Exchange (ETDEWEB)

    Sillem, E.

    2011-06-15

    As typical office buildings from the nineties have large heating and cooling installations to provide heat or cold wherever and whenever needed, more recent office buildings have almost no demand for heating due to high internal heat loads caused by people, lighting and office appliances, and because of the great thermal qualities of the contemporary building envelope. However, these buildings still have vast cooling units to cool down servers and other energy-consuming installations. At the same time, other functions such as dwellings, swimming pools, sporting facilities, archives and museums still need to be heated most of the year. In the current building market there is an increasing demand for mixed-use buildings, or so-called hybrid buildings. The Science Business Centre is no different and houses a conference centre, offices, a museum, archives, an exhibition space and a restaurant. From the initial program brief it seemed that the building would simultaneously house functions that need cooling most of the year and functions that need to be heated the majority of the year. Can this building be equipped with a 'micro heating and cooling network' and, where necessary, temporarily store energy? With this idea a research proposal was formulated: how can the demand for heating and cooling of the Science Business Centre be reduced by using energy exchange between different kinds of functions and by temporarily storing energy? In conclusion the research led to: four optimized installation concepts; short-term energy storage in the pavilion concept and museum; energy exchange between the restaurant and archives; energy exchange between the server space and the offices; the majority of heat and cold will be extracted from the soil (long-term energy storage); the remaining heat will be generated by the energy roof; PV cells from the energy roof power all climate installations; a total energy plan for the Science Business Centre; a systematic approach for exchanging ...

  17. Energy vs. density on paths toward more exact density functionals.

    Science.gov (United States)

    Kepp, Kasper P

    2018-03-14

    Recently, the progression toward more exact density functional theory has been questioned, implying a need for more formal ways to systematically measure progress, i.e. a "path". Here I use the Hohenberg-Kohn theorems and the definition of normality by Burke et al. to define a path toward exactness and "straying" from the "path" by separating errors in ρ and E[ρ]. A consistent path toward exactness involves minimizing both errors. Second, a suitably diverse test set of trial densities ρ' can be used to estimate the significance of errors in ρ without knowing the exact densities, which are often inaccessible. To illustrate this, the systems previously studied by Medvedev et al., the first ionization energies of atoms with Z = 1 to 10, the ionization energy of water, and the bond dissociation energies of five diatomic molecules were investigated using CCSD(T)/aug-cc-pV5Z as a benchmark at chemical accuracy. Four functionals of distinct designs were used: B3LYP, PBE, M06, and S-VWN. For atomic cations, regardless of charge and compactness up to Z = 10, the effects of the different ρ are energetically insignificant. An interesting oscillating behavior in the density sensitivity is observed vs. Z, explained by orbital occupation effects. Finally, it is shown that even large "normal" problems such as the Co-C bond energy of cobalamins can use simpler (e.g. PBE) trial densities to drastically speed up computation at a loss of only a few kJ/mol in accuracy. The proposed method of using a test set of trial densities to estimate the sensitivity and significance of density errors of functionals may be useful for testing and designing new balanced functionals with more systematic improvement of densities and energies.

  18. Probability Aggregates in Probability Answer Set Programming

    OpenAIRE

    Saad, Emad

    2013-01-01

    Probability answer set programming is a declarative programming paradigm that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...

  19. AN ENERGY FUNCTION APPROACH FOR FINDING ROOTS OF CHARACTERISTIC EQUATION

    OpenAIRE

    Deepak Mishra; Prem K. Kalra

    2011-01-01

    In this paper, an energy function approach for finding the roots of a characteristic equation is proposed. Finding the roots of a characteristic equation is treated as an optimization problem. We demonstrate that this problem can be solved with the application of a feedback-type neural network. The proposed approach is fast and robust against parameter variation.

  20. E1 strength function at high spin and excitation energy

    International Nuclear Information System (INIS)

    Barrette, J.

    1983-04-01

    Recently, a giant-dipole-resonance-like concentration of the dipole strength function in nuclei was observed at both high excitation energies and high spins. This observation raises the possibility of obtaining new information on the shape of rapidly rotating heated nuclei. Recent experimental results on this subject are reviewed

  1. Ab initio derivation of model energy density functionals

    International Nuclear Information System (INIS)

    Dobaczewski, Jacek

    2016-01-01

    I propose a simple and manageable method that allows for deriving coupling constants of model energy density functionals (EDFs) directly from ab initio calculations performed for finite fermion systems. A proof-of-principle application allows for linking properties of finite nuclei, determined by using the nuclear nonlocal Gogny functional, to the coupling constants of the quasilocal Skyrme functional. The method does not rely on properties of infinite fermion systems but on the ab initio calculations in finite systems. It also allows for quantifying merits of different model EDFs in describing the ab initio results. (letter)

  2. On the asymptotic evolution of finite energy Airy wave functions.

    Science.gov (United States)

    Chamorro-Posada, P; Sánchez-Curto, J; Aceves, A B; McDonald, G S

    2015-06-15

    In general, there is an inverse relation between the degree of localization of a wave function of a certain class and its transform representation dictated by the scaling property of the Fourier transform. We report that in the case of finite energy Airy wave packets a simultaneous increase in their localization in the direct and transform domains can be obtained as the apodization parameter is varied. One consequence of this is that the far-field diffraction rate of a finite energy Airy beam decreases as the beam localization at the launch plane increases. We analyze the asymptotic properties of finite energy Airy wave functions using the stationary phase method. We obtain one dominant contribution to the long-term evolution that admits a Gaussian-like approximation, which displays the expected reduction of its broadening rate as the input localization is increased.

  3. KIDS Nuclear Energy Density Functional: 1st Application in Nuclei

    Science.gov (United States)

    Gil, Hana; Papakonstantinou, Panagiota; Hyun, Chang Ho; Oh, Yongseok

    We apply the KIDS (Korea: IBS-Daegu-Sungkyunkwan) nuclear energy density functional model, which is based on the Fermi momentum expansion, to the study of properties of lj-closed nuclei. The parameters of the model are determined by the nuclear properties at the saturation density and theoretical calculations on pure neutron matter. For applying the model to the study of nuclei, we rely on the Skyrme force model, where the Skyrme force parameters are determined through the KIDS energy density functional. Solving Hartree-Fock equations, we obtain the energies per particle and charge radii of closed magic nuclei, namely, 16O, 28O, 40Ca, 48Ca, 60Ca, 90Zr, 132Sn, and 208Pb. The results are compared with the observed data and further improvement of the model is shortly mentioned.

  4. Analysis of meteorological droughts and dry spells in semiarid regions: a comparative analysis of probability distribution functions in the Segura Basin (SE Spain)

    Science.gov (United States)

    Pérez-Sánchez, Julio; Senent-Aparicio, Javier

    2017-08-01

    Dry spells are an essential concept of drought climatology that clearly defines the semiarid Mediterranean environment and whose consequences are a defining feature for an ecosystem so vulnerable with regard to water. The present study was conducted to characterize rainfall drought in the Segura River basin located in eastern Spain, marked by the strongly seasonal nature of these latitudes. A daily precipitation set was used for 29 weather stations over a period of 20 years (1993-2013). Furthermore, four sets of dry spell lengths (complete series, monthly maxima, seasonal maxima, and annual maxima) are used and simulated for all the weather stations with the following probability distribution functions: Burr, Dagum, error, generalized extreme value, generalized logistic, generalized Pareto, Gumbel Max, inverse Gaussian, Johnson SB, Log-Logistic, Log-Pearson 3, Triangular, Weibull, and Wakeby. Only the series of annual maximum spells offers a good fit for all the weather stations, with the Wakeby distribution giving the best result, with a mean p value of 0.9424 for the Kolmogorov-Smirnov test (0.2 significance level). Maps of dry-spell duration probability for return periods of 2, 5, 10, and 25 years reveal a northeast-southeast gradient, with longer periods of daily rainfall below 0.1 mm in the eastern third of the basin, in the proximity of the Mediterranean slope.
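
    The model-selection procedure described above (fit a set of candidate distributions, then rank them with a Kolmogorov-Smirnov test) can be sketched with scipy.stats. The dry-spell sample below is synthetic, and the candidate list is restricted to distributions shipped with SciPy; Wakeby, the best performer in the paper, is not among them.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      annual_max_spells = rng.gumbel(loc=35.0, scale=9.0, size=20)  # synthetic, days

      for name in ["genextreme", "gumbel_r", "weibull_min", "genpareto", "lognorm"]:
          params = getattr(stats, name).fit(annual_max_spells)
          d_stat, p_value = stats.kstest(annual_max_spells, name, args=params)
          print(f"{name:12s} D = {d_stat:.3f}  p = {p_value:.3f}")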

  5. Comment on 'Kinetic energy as a density functional'

    International Nuclear Information System (INIS)

    Holas, A.; March, N.H.

    2002-01-01

    In a recent paper, Nesbet [Phys. Rev. A 65, 010502(R) (2001)] has proposed dropping "the widespread but unjustified assumption that the existence of a ground-state density functional for the kinetic energy, T_s[ρ], of an N-electron system implies the existence of a density-functional derivative, δT_s[ρ]/δρ(r), equivalent to a local potential function", because, according to his arguments, this derivative "has the mathematical character of a linear operator that acts on orbital wave functions". Our Comment demonstrates that the statement called by Nesbet an "unjustified assumption" happens, in fact, to be a rigorously proven theorem. Therefore, his previous conclusions stemming from his different view of this derivative, which undermined the foundations of density-functional theory, can be discounted

  6. Nuclear response functions at large energy and momentum transfer

    International Nuclear Information System (INIS)

    Bertozzi, W.; Moniz, E.J.; Lourie, R.W.

    1991-01-01

    Quasifree nucleon processes are expected to dominate the nuclear electromagnetic response function for large energy and momentum transfers, i.e., for energy transfers large compared with nuclear single-particle energies and momentum transfers large compared with typical nuclear momenta. Despite the evident success of the quasifree picture in providing the basic framework for discussing and understanding the large-energy, large-momentum nuclear response, the limits of this picture have also become quite clear. In this article a selected set of inclusive and coincidence data is presented in order to define the limits of the quasifree picture more quantitatively. Specific dynamical mechanisms thought to be important in going beyond the quasifree picture are discussed as well. 75 refs, 37 figs

  7. A brief introduction to probability.

    Science.gov (United States)

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", which is a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied in statistical analysis.
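
    As a concrete instance of the "probability distribution" concept reviewed in the paper, the following sketch evaluates densities and cumulative probabilities of a normal distribution; the standard-normal parameters are arbitrary.

      from scipy import stats

      z = stats.norm(loc=0.0, scale=1.0)  # normal distribution, assumed parameters
      print(z.pdf(0.0))                   # density at the mean
      print(z.cdf(1.96))                  # P(X <= 1.96), about 0.975
      print(z.cdf(1.0) - z.cdf(-1.0))     # about 68% of the mass within one sigma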

  8. Scaling Qualitative Probability

    OpenAIRE

    Burgin, Mark

    2017-01-01

    There are different approaches to qualitative probability, which includes subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...

  9. Multi functional roof structures of the energy efficient buildings

    Directory of Open Access Journals (Sweden)

    Krstić Aleksandra

    2006-01-01

    Modern architectural concepts, based on the rational energy consumption of buildings and the use of solar energy as a renewable energy source, give a new and significant role to roofs, which become multifunctional structures. Besides providing protection, various energy-efficient roof structures and elements supply thermal and electric energy, enable natural ventilation and cooling of a building, provide natural lighting of the indoor space and protection from direct sunlight, and collect water for technical use; accordingly, a classification and analysis of such roof structures and elements by these functions is made in this paper. The search for new architectural values and for optimization of the total energy balance of a building, or likewise of an urban complex, has given roofs the role of "climatic membranes". Contemporary roof forms and materials clearly exemplify their multifunctional features. There are numerous possibilities for achieving new and attractive roof designs that extend to the whole construction. With this motivation, the paper principally analyzes the configuration characteristics of energy-efficient roof structures and elements, as well as the visual effects that may be achieved by their application.

  10. The trapping of potassium atoms by a polycrystalline tungsten surface as a function of energy and angle of incidence. ch. 1

    International Nuclear Information System (INIS)

    Hurkmans, A.; Overbosch, E.G.; Olander, D.R.; Los, J.

    1976-01-01

    The trapping probability of potassium atoms on a polycrystalline tungsten surface has been measured as a function of the angle of incidence and as a function of the energy of the incoming atoms. Below an energy of 1 eV the trapping was complete; above 20 eV only reflection occurred. The trapping probability increased with increasing angle of incidence. The measurements are compared with a simple model of the fraction of atoms initially trapped. The model, a one-dimensional cube model including a Boltzmann distribution of the velocities of oscillating surface atoms, partially explains the data. The trapping probability as a function of incoming energy is well described for normal incidence, justifying the inclusion of thermal motion of the surface atoms in the model. The angular dependence can be explained in a qualitative way, although there is a substantial discrepancy for large angles of incidence, showing the presence of surface structure. (Auth.)
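
    A toy Monte Carlo version of a one-dimensional cube model of this kind might look as follows: the incoming atom gains the well depth before a single impulsive collision with a thermally moving surface cube, and it is counted as trapped if it cannot climb back out of the well. The masses, well depth and surface temperature are illustrative assumptions rather than the values fitted in the paper, and multiple collisions are ignored.

      import numpy as np

      def trapping_probability(E_in_eV, m_amu=39.1, M_amu=184.0, W_eV=1.0,
                               T_surf=300.0, n=200_000, seed=1):
          # 1D hard-cube sketch: atom of mass m hits a surface cube of mass M
          # drawn from a 1D Maxwell-Boltzmann velocity distribution; the atom
          # first gains the well depth W, then collides elastically once.
          amu, eV, kB = 1.66053907e-27, 1.602176634e-19, 1.380649e-23
          m, M, W = m_amu * amu, M_amu * amu, W_eV * eV
          rng = np.random.default_rng(seed)
          v_in = -np.sqrt(2.0 * (E_in_eV * eV + W) / m)      # toward the surface
          u = rng.normal(0.0, np.sqrt(kB * T_surf / M), n)   # cube velocities
          v_out = ((m - M) * v_in + 2.0 * M * u) / (m + M)   # elastic collision
          escaped = (v_out > 0) & (0.5 * m * v_out**2 > W)
          return 1.0 - escaped.mean()

      for E in (0.5, 2.0, 5.0, 20.0):  # incident energies in eV
          print(E, trapping_probability(E))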

  11. Free energy functionals for polarization fluctuations: Pekar factor revisited.

    Science.gov (United States)

    Dinpajooh, Mohammadhasan; Newton, Marshall D; Matyushov, Dmitry V

    2017-02-14

    The separation of slow nuclear and fast electronic polarization in problems related to electron mobility in polarizable media was considered by Pekar 70 years ago. Within dielectric continuum models, this separation leads to the Pekar factor in the free energy of solvation by the nuclear degrees of freedom. The main qualitative prediction of Pekar's perspective is a significant, by about a factor of two, drop of the nuclear solvation free energy compared to the total (electronic plus nuclear) free energy of solvation. The Pekar factor enters the solvent reorganization energy of electron transfer reactions and is a significant mechanistic parameter accounting for the solvent effect on electron transfer. Here, we study the separation of the fast and slow polarization modes in polar molecular liquids (polarizable dipolar liquids and polarizable water force fields) without relying on the continuum approximation. We derive the nonlocal free energy functional and use atomistic numerical simulations to obtain nonlocal, reciprocal space electronic and nuclear susceptibilities. A consistent transition to the continuum limit is introduced by extrapolating the results of finite-size numerical simulation to zero wavevector. The continuum nuclear susceptibility extracted from the simulations is numerically close to the Pekar factor. However, we derive a new functionality involving the static and high-frequency dielectric constants. The main distinction of our approach from the traditional theories is found in the solvation free energy due to the nuclear polarization: the anticipated significant drop of its magnitude with increasing liquid polarizability does not occur. The reorganization energy of electron transfer is either nearly constant with increasing the solvent polarizability and the corresponding high-frequency dielectric constant (polarizable dipolar liquids) or actually noticeably increases (polarizable force fields of water).

  12. Free energy functionals for polarization fluctuations: Pekar factor revisited

    International Nuclear Information System (INIS)

    Dinpajooh, Mohammadhasan; Newton, Marshall D.; Matyushov, Dmitry V.

    2017-01-01

    The separation of slow nuclear and fast electronic polarization in problems related to electron mobility in polarizable media was considered by Pekar 70 years ago. This separation leads to the Pekar factor in the free energy of solvation by the nuclear degrees of freedom, within dielectric continuum models. The main qualitative prediction of Pekar’s perspective is a significant, by about a factor of two, drop of the nuclear solvation free energy compared to the total (electronic plus nuclear) free energy of solvation. The Pekar factor enters the solvent reorganization energy of electron transfer reactions and is a significant mechanistic parameter accounting for the solvent effect on electron transfer. We study the separation of the fast and slow polarization modes in polar molecular liquids (polarizable dipolar liquids and polarizable water force fields) without relying on the continuum approximation. We derive the nonlocal free energy functional and use atomistic numerical simulations to obtain nonlocal, reciprocal space electronic and nuclear susceptibilities. A consistent transition to the continuum limit is introduced by extrapolating the results of finite-size numerical simulation to zero wavevector. The continuum nuclear susceptibility extracted from the simulations is numerically close to the Pekar factor. However, we derive a new functionality involving the static and high-frequency dielectric constants. The main distinction of our approach from the traditional theories is found in the solvation free energy due to the nuclear polarization: the anticipated significant drop of its magnitude with increasing liquid polarizability does not occur. The reorganization energy of electron transfer is either nearly constant with increasing the solvent polarizability and the corresponding high-frequency dielectric constant (polarizable dipolar liquids) or actually noticeably increases (polarizable force fields of water).

  13. Hydrophilic functionalized silicon nanoparticles produced by high energy ball milling

    Science.gov (United States)

    Hallmann, Steffen

    The mechanochemical synthesis of functionalized silicon nanoparticles using High Energy Ball Milling (HEBM) is described. This method facilitates the fragmentation of monocrystalline silicon into the nanometer regime and the simultaneous surface functionalization of the formed particles. The surface functionalization is induced by the reaction of an organic liquid, such as an alkyne or alkene, with reactive silicon sites. This method can be applied to form water-soluble silicon nanoparticles by lipid-mediated micelle formation and by milling in organic liquids containing molecules with bifunctional groups, such as allyl alcohol. Furthermore, nanometer-sized, chloroalkyl-functionalized particles can be synthesized by milling the silicon precursor in the presence of an o-chloroalkyne with either alkenes or alkynes as coreactants. This process allows tuning of the concentration of the exposed, alkyl-linked chloro groups simply by varying the relative amounts of the coreactant. The silicon nanoparticles that are formed serve as the starting point for a wide variety of chemical reactions, which may be used to alter the surface properties of the functionalized nanoparticles. Finally, the use of functionalized silicon particles for the production of superhydrophobic films is described. Here HEBM proves to be an efficient method to produce functionalized silicon particles, which can be deposited to form a stable coating exhibiting superhydrophobic properties. The hydrophobicity of the silicon film can be tuned by the milling time, and thus by the resulting surface roughness of the films.

  14. Many-body theory and Energy Density Functionals

    Energy Technology Data Exchange (ETDEWEB)

    Baldo, M. [INFN, Catania (Italy)

    2016-07-15

    In this paper a method is first presented to construct an Energy Density Functional on a microscopic basis. The approach is based on the Kohn-Sham method, where one introduces explicitly the Nuclear Matter Equation of State, which can be obtained by an accurate many-body calculation. In this way the functional is connected to the bare nucleon-nucleon interaction. It is shown that the resulting functional can perform as well as the best Gogny force functional. In the second part of the paper it is shown how one can go beyond the mean-field level, and the difficulties that can appear are discussed. The method is based on the particle-vibration coupling scheme, and a formalism is presented that can handle the correct use of the vibrational degrees of freedom within a microscopic approach. (orig.)

  15. Quadrupole collective dynamics from energy density functionals: Collective Hamiltonian and the interacting boson model

    International Nuclear Information System (INIS)

    Nomura, K.; Vretenar, D.; Niksic, T.; Otsuka, T.; Shimizu, N.

    2011-01-01

    Microscopic energy density functionals have become a standard tool for nuclear structure calculations, providing an accurate global description of nuclear ground states and collective excitations. For spectroscopic applications, this framework has to be extended to account for collective correlations related to restoration of symmetries broken by the static mean field, and for fluctuations of collective variables. In this paper, we compare two approaches to five-dimensional quadrupole dynamics: the collective Hamiltonian for quadrupole vibrations and rotations and the interacting boson model (IBM). The two models are compared in a study of the evolution of nonaxial shapes in Pt isotopes. Starting from the binding energy surfaces of 192,194,196 Pt, calculated with a microscopic energy density functional, we analyze the resulting low-energy collective spectra obtained from the collective Hamiltonian, and the corresponding IBM Hamiltonian. The calculated excitation spectra and transition probabilities for the ground-state bands and the γ-vibration bands are compared to the corresponding sequences of experimental states.

  16. Efficient modified Jacobi relaxation for minimizing the energy functional

    International Nuclear Information System (INIS)

    Park, C.H.; Lee, I.; Chang, K.J.

    1993-01-01

    We present an efficient scheme for diagonalizing large Hamiltonian matrices in a self-consistent manner. In the framework of the preconditioned conjugate gradient minimization of the energy functional, we employ modified Jacobi relaxation for preconditioning and use for band-by-band minimization the restricted-block Davidson algorithm, in which only the previous wave functions and the relaxation vectors are additionally included for subspace diagonalization. Our scheme is found to be comparable with the preconditioned conjugate gradient method for both large ordered and disordered Si systems, while it converges more rapidly for systems with transition-metal elements

  17. Trivial constraints on orbital-free kinetic energy density functionals

    Science.gov (United States)

    Luo, Kai; Trickey, S. B.

    2018-03-01

    Approximate kinetic energy density functionals (KEDFs) are central to orbital-free density functional theory. Limitations on the spatial derivative dependencies of KEDFs have been claimed from differential virial theorems. We identify a central defect in the argument: the relationships are not true for an arbitrary density but hold only for the minimizing density and corresponding chemical potential. Contrary to the claims therefore, the relationships are not constraints and provide no independent information about the spatial derivative dependencies of approximate KEDFs. A simple argument also shows that validity for arbitrary v-representable densities is not restored by appeal to the density-potential bijection.

  18. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  19. Communicating Low-Probability High-Consequence Risk, Uncertainty and Expert Confidence: Induced Seismicity of Deep Geothermal Energy and Shale Gas.

    Science.gov (United States)

    Knoblauch, Theresa A K; Stauffacher, Michael; Trutnevyte, Evelina

    2018-04-01

    Subsurface energy activities entail the risk of induced seismicity including low-probability high-consequence (LPHC) events. For designing respective risk communication, the scientific literature lacks empirical evidence of how the public reacts to different written risk communication formats about such LPHC events and to related uncertainty or expert confidence. This study presents findings from an online experiment (N = 590) that empirically tested the public's responses to risk communication about induced seismicity and to different technology frames, namely deep geothermal energy (DGE) and shale gas (between-subject design). Three incrementally different formats of written risk communication were tested: (i) qualitative, (ii) qualitative and quantitative, and (iii) qualitative and quantitative with risk comparison. Respondents found the latter two the easiest to understand, the most exact, and liked them the most. Adding uncertainty and expert confidence statements made the risk communication less clear, less easy to understand and increased concern. Above all, the technology for which risks are communicated and its acceptance mattered strongly: respondents in the shale gas condition found the identical risk communication less trustworthy and more concerning than in the DGE conditions. They also liked the risk communication overall less. For practitioners in DGE or shale gas projects, the study shows that the public would appreciate efforts in describing LPHC risks with numbers and optionally risk comparisons. However, there seems to be a trade-off between aiming for transparency by disclosing uncertainty and limited expert confidence, and thereby decreasing clarity and increasing concern in the view of the public. © 2017 Society for Risk Analysis.

  20. The electron energy distribution function of noble gases with flow

    International Nuclear Information System (INIS)

    Karditsas, P.J.

    1989-01-01

    The treatment of the Boltzmann equation by several investigators for the determination of the electron energy distribution function (EEDF) in noble gases was restricted to static discharges. It is of great interest to magnetoplasmadynamic power generation to extend the Boltzmann equation to account for the effect of the bulk fluid flow on the EEDF. The two-term expansion of the Boltzmann equation, as given, results in additional terms being introduced into the equations due to the bulk fluid flow with velocity u

  1. New evolution equations for the joint response-excitation probability density function of stochastic solutions to first-order nonlinear PDEs

    Science.gov (United States)

    Venturi, D.; Karniadakis, G. E.

    2012-08-01

    By using functional integral methods we determine new evolution equations satisfied by the joint response-excitation probability density function (PDF) associated with the stochastic solution to first-order nonlinear partial differential equations (PDEs). The theory is presented for both fully nonlinear and for quasilinear scalar PDEs subject to random boundary conditions, random initial conditions or random forcing terms. Particular applications are discussed for the classical linear and nonlinear advection equations and for the advection-reaction equation. By using a Fourier-Galerkin spectral method we obtain numerical solutions of the proposed response-excitation PDF equations. These numerical solutions are compared against those obtained by using more conventional statistical approaches such as probabilistic collocation and multi-element probabilistic collocation methods. It is found that the response-excitation approach yields accurate predictions of the statistical properties of the system. In addition, it makes it possible to directly ascertain the tails of probabilistic distributions, thus facilitating the assessment of rare events and associated risks. The computational cost of the response-excitation method is orders of magnitude smaller than that of more conventional statistical approaches if the PDE is subject to high-dimensional random boundary or initial conditions. The question of high-dimensionality for evolution equations involving multidimensional joint response-excitation PDFs is also addressed.

  2. Application of an excited state LDA exchange energy functional for the calculation of transition energy of atoms within time-independent density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Shamim, Md; Harbola, Manoj K, E-mail: sami@iitk.ac.in, E-mail: mkh@iitk.ac.in [Department of Physics, Indian Institute of Technology, Kanpur 208 016 (India)

    2010-11-14

    Transition energies of a new class of excited states (two-gap systems) of various atoms are calculated in time-independent density functional formalism by using a recently proposed local density approximation exchange energy functional for excited states. It is shown that the excitation energies calculated with this functional compare well with those calculated with exact exchange theories.

  3. Application of an excited state LDA exchange energy functional for the calculation of transition energy of atoms within time-independent density functional theory

    International Nuclear Information System (INIS)

    Shamim, Md; Harbola, Manoj K

    2010-01-01

    Transition energies of a new class of excited states (two-gap systems) of various atoms are calculated in time-independent density functional formalism by using a recently proposed local density approximation exchange energy functional for excited states. It is shown that the excitation energies calculated with this functional compare well with those calculated with exact exchange theories.

  4. On Probability Leakage

    OpenAIRE

    Briggs, William M.

    2012-01-01

    The probability leakage of model M with respect to evidence E is defined. Probability leakage is a kind of model error. It occurs when M implies that events y, which are impossible given E, have positive probability. Leakage does not imply model falsification. Models with probability leakage cannot be calibrated empirically. Regression models, which are ubiquitous in statistical practice, often evince probability leakage.
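
    A minimal numerical illustration of leakage, assuming a Gaussian regression model for a quantity that the evidence E says cannot be negative: the probability mass the model places below zero is the leakage.

      from scipy import stats

      mu, sigma = 2.0, 1.5                      # assumed fitted model parameters
      leakage = stats.norm(mu, sigma).cdf(0.0)  # mass on impossible outcomes y < 0
      print(f"probability assigned to impossible outcomes: {leakage:.4f}")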

  5. A new empirical potential energy function for Ar2

    Science.gov (United States)

    Myatt, Philip T.; Dham, Ashok K.; Chandrasekhar, Pragna; McCourt, Frederick R. W.; Le Roy, Robert J.

    2018-06-01

    A critical re-analysis of all available spectroscopic and virial coefficient data for Ar2 has been used to determine an improved empirical analytic potential energy function that has been 'tuned' to optimise its agreement with viscosity, diffusion and thermal diffusion data, and whose short-range behaviour is in reasonably good agreement with the most recent ab initio calculations for this system. The recommended Morse/long-range potential function is smooth and differentiable at all distances, and incorporates both the correct theoretically predicted long-range behaviour and the correct limiting short-range functional behaviour. The resulting value of the well depth is ? cm-1 and the associated equilibrium distance is re = 3.766 (±0.002) Å, while the 40Ar s-wave scattering length is -714 Å.
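
    Reproducing the recommended MLR function requires the full published parameter set, but its Morse backbone can be sketched as follows. The well position is the value quoted above; the well depth (garbled in this record) is taken as a nominal 99.5 cm-1, and the width parameter is purely illustrative.

      import numpy as np

      def morse(r, De=99.5, re=3.766, a=1.7):
          # Ordinary Morse potential with V(re) = -De and V(inf) = 0.
          # De in cm-1, r and re in Angstrom; 'a' (1/Angstrom) is assumed.
          return De * (1.0 - np.exp(-a * (r - re))) ** 2 - De

      for r in (3.4, 3.766, 4.5, 6.0):
          print(r, morse(r))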

  6. Energy demand with the flexible double-logarithmic functional form

    International Nuclear Information System (INIS)

    Nan, G.D.; Murry, D.A.

    1992-01-01

    A flexible double-logarithmic functional form is developed to meet assumptions of consumer behavior. Annual residential and commercial data (1970-87) are then applied to this functional form to examine the demand for petroleum products, electricity, and natural gas in California. The traditional double log-linear functional form has the shortcoming of constant elasticities. The regression equations in this study, with varying estimated elasticities, overcome some of these shortcomings. All short-run own-price elasticities are inelastic and all income elasticities are close to unity in this study. According to the short-run time-trend elasticities, consumers' fuel preference in California is electricity. The long-run income elasticities also indicate that residential consumers will consume more electricity and natural gas as their energy budgets increase in the long run. 14 refs., 5 tabs
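
    In the constant-elasticity baseline that the flexible form generalizes, the elasticities are simply the coefficients of a log-log regression. A sketch on synthetic data, generated with an assumed own-price elasticity of -0.4 and income elasticity of 1.0:

      import numpy as np

      rng = np.random.default_rng(7)
      n = 120
      ln_price = rng.normal(0.0, 0.3, n)
      ln_income = rng.normal(0.0, 0.2, n)
      ln_q = 2.0 - 0.4 * ln_price + 1.0 * ln_income + rng.normal(0.0, 0.05, n)

      X = np.column_stack([np.ones(n), ln_price, ln_income])  # design matrix
      beta, *_ = np.linalg.lstsq(X, ln_q, rcond=None)         # OLS in logs
      print("constant, price elasticity, income elasticity:", beta)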

  7. Alternative definitions of the frozen energy in energy decomposition analysis of density functional theory calculations.

    Science.gov (United States)

    Horn, Paul R; Head-Gordon, Martin

    2016-02-28

    In energy decomposition analysis (EDA) of intermolecular interactions calculated via density functional theory, the initial supersystem wavefunction defines the so-called "frozen energy" including contributions such as permanent electrostatics, steric repulsions, and dispersion. This work explores the consequences of the choices that must be made to define the frozen energy. The critical choice is whether the energy should be minimized subject to the constraint of fixed density. Numerical results for Ne2, (H2O)2, BH3-NH3, and ethane dissociation show that there can be a large energy lowering associated with constant density orbital relaxation. By far the most important contribution is constant density inter-fragment relaxation, corresponding to charge transfer (CT). This is unwanted in an EDA that attempts to separate CT effects, but it may be useful in other contexts such as force field development. An algorithm is presented for minimizing single determinant energies at constant density both with and without CT by employing a penalty function that approximately enforces the density constraint.

  8. Probability 1/e

    Science.gov (United States)

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
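
    The article's three problems are not reproduced in this record, but one classic event whose probability tends to 1/e (a random permutation having no fixed point) is easy to check by simulation:

      import math
      import random

      def has_no_fixed_point(n):
          perm = list(range(n))
          random.shuffle(perm)
          return all(p != i for i, p in enumerate(perm))

      trials = 200_000
      freq = sum(has_no_fixed_point(20) for _ in range(trials)) / trials
      print(freq, 1.0 / math.e)  # both close to 0.3679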

  9. Impact of energy efficiency gains on output and energy use with Cobb-Douglas production function

    International Nuclear Information System (INIS)

    Wei Taoyuan

    2007-01-01

    A special issue of Energy Policy (vol. 28, 2000) was devoted to a collection of papers, edited by Dr. Lee Schipper. The collection included a paper entitled 'A view from the macro side: rebound, backfire, and Khazzoom-Brookes', in which it was argued that the impact of fuel efficiency gains on output (roughly, GDP) is likely to be relatively small under a Cobb-Douglas production function. However, an error in the analysis leads to underestimation of the long-term impact. This paper first provides a partial equilibrium analysis of the same case by an alternative method and then proceeds to an analysis of the issue in a two-sector general equilibrium system. In the latter analysis, the energy price is internalized. Both energy use efficiency and energy production efficiency are involved
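
    The partial-equilibrium point can be made concrete with a Cobb-Douglas function in which an efficiency gain tau multiplies the energy input: holding physical inputs fixed, output rises by roughly the energy share times the gain. The exponents and input levels below are assumed for illustration, not taken from the paper.

      # Cobb-Douglas with energy-augmenting efficiency: Y = A * K**a * L**b * (tau*E)**g
      A, K, L, E = 1.0, 100.0, 100.0, 50.0
      a, b, g = 0.3, 0.6, 0.1  # assumed output elasticities (g = energy share)

      def output(tau):
          return A * K**a * L**b * (tau * E)**g

      gain = output(1.10) / output(1.00) - 1.0
      print(f"10% efficiency gain -> output up {gain:.2%} (roughly g * 10% = 1%)")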

  10. Continuous-energy adjoint flux and perturbation calculation using the iterated fission probability method in Monte-Carlo code TRIPOLI-4 and underlying applications

    International Nuclear Information System (INIS)

    Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.

    2013-01-01

    The first goal of this paper is to present an exact method able to precisely evaluate very small reactivity effects with a Monte Carlo code (<10 pcm). It was decided to implement exact perturbation theory in TRIPOLI-4 and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown excellent results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux and consequently does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4 is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained with the deterministic code APOLLO-2. The new implementation can also calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method against the 'direct' estimation of the perturbation. Once again the IFP-based method shows good agreement, at a computational cost far lower than that of the 'direct' method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows very small reactivity perturbations to be calculated with high precision. It also offers the possibility of splitting reactivity contributions over isotopes and reactions. Other applications of this perturbation method, such as the calculation of exact kinetic parameters (βeff, Λeff) or sensitivity parameters, are presented and tested.
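
    The role of the adjoint flux in exact perturbation theory can be illustrated on a toy two-group matrix model, where the adjoint flux solves the transposed eigenproblem and the exact reactivity change due to a perturbation dM is an adjoint-weighted ratio involving the perturbed forward flux. The cross sections below are made up; this linear-algebra sketch illustrates the perturbation formula, not the TRIPOLI-4 IFP algorithm itself.

      import numpy as np

      def k_and_flux(M, F):
          # Largest eigenpair of M^{-1} F: multiplication factor k and flux
          vals, vecs = np.linalg.eig(np.linalg.solve(M, F))
          i = np.argmax(vals.real)
          return vals[i].real, vecs[:, i].real

      M = np.array([[0.60, 0.00], [-0.05, 0.20]])  # removal/downscatter (made up)
      F = np.array([[0.30, 0.90], [0.00, 0.00]])   # fission source (made up)
      dM = np.array([[2e-3, 0.0], [0.0, 1e-3]])    # small absorber perturbation

      k0, _ = k_and_flux(M, F)
      k1, phi1 = k_and_flux(M + dM, F)   # direct recalculation
      _, adj = k_and_flux(M.T, F.T)      # adjoint flux (same k, transposed problem)

      # Exact perturbation theory with dF = 0:
      # drho = 1/k0 - 1/k1 = -<adj, dM phi1> / <adj, F phi1>
      drho_pert = -(adj @ dM @ phi1) / (adj @ F @ phi1)
      drho_direct = 1.0 / k0 - 1.0 / k1
      print(drho_pert, drho_direct)      # the two estimates agree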

  11. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States.

    Science.gov (United States)

    Vargas-Melendez, Leandro; Boada, Beatriz L; Boada, Maria Jesus L; Gauchia, Antonio; Diaz, Vicente

    2017-04-29

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles incorporate roll stability control (RSC) systems to improve their safety. Most RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual-antenna global positioning system (GPS), but this is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors already installed onboard current vehicles. On the other hand, knowledge of the vehicle's parameter values is essential to obtain an accurate vehicle response. Some of the vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle's states and parameters are within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm.
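
    The PDF-based truncation idea, renormalizing a Gaussian state estimate to a physically meaningful interval after each update, can be sketched with scipy.stats.truncnorm. The roll-angle bounds and posterior moments below are illustrative assumptions.

      from scipy import stats

      mu, sigma = 7.0, 4.0   # Kalman posterior for roll angle (deg), assumed
      lo, hi = -10.0, 10.0   # physically meaningful bounds (deg), assumed

      a, b = (lo - mu) / sigma, (hi - mu) / sigma  # standardized bounds
      mean_t, var_t = stats.truncnorm.stats(a, b, loc=mu, scale=sigma, moments="mv")
      print(f"truncated mean {float(mean_t):.2f} deg, "
            f"truncated std {float(var_t) ** 0.5:.2f} deg")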

  12. Time-averaged probability density functions of soot nanoparticles along the centerline of a piloted turbulent diffusion flame using a scanning mobility particle sizer

    KAUST Repository

    Chowdhury, Snehaunshu

    2017-01-23

    In this study, we demonstrate the use of a scanning mobility particle sizer (SMPS) as an effective tool to measure the probability density functions (PDFs) of soot nanoparticles in turbulent flames. Time-averaged soot PDFs necessary for validating existing soot models are reported at intervals of Δx/D = 5 along the centerline of turbulent, non-premixed, C2H4/N2 flames. The jet exit Reynolds numbers of the flames investigated were 10,000 and 20,000. A simplified burner geometry based on a published design was chosen to aid modelers. Soot was sampled directly from the flame using a sampling probe with a 0.5-mm diameter orifice and diluted with N2 by a two-stage dilution process. The overall dilution ratio was not evaluated. An SMPS system was used to analyze soot particle concentrations in the diluted samples. Sampling conditions were optimized over a wide range of dilution ratios to eliminate the effect of agglomeration in the sampling probe. Two differential mobility analyzers (DMAs) with different size ranges were used separately in the SMPS measurements to characterize the entire size range of particles. In both flames, the PDFs were found to be mono-modal in nature near the jet exit. Further downstream, the profiles were flatter with a fall-off at larger particle diameters. The geometric mean of the soot size distributions was less than 10 nm for all cases and increased monotonically with axial distance in both flames.
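
    From binned SMPS counts, the normalized size PDF and the geometric mean diameter quoted above follow directly. A sketch with made-up bins, not the measured data:

      import numpy as np

      d = np.array([3.0, 4.5, 6.8, 10.2, 15.3, 23.0])          # bin diameters (nm)
      n = np.array([120.0, 480.0, 950.0, 610.0, 210.0, 40.0])  # counts (#/cm^3)

      pdf = n / np.trapz(n, d)                         # normalized size PDF (1/nm)
      gmd = np.exp(np.sum(n * np.log(d)) / np.sum(n))  # number-weighted geometric mean
      print(f"PDF integrates to {np.trapz(pdf, d):.3f}; GMD = {gmd:.1f} nm")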

  13. Probability Density Function Method for Observing Reconstructed Attractor Structure

    Institute of Scientific and Technical Information of China (English)

    陆宏伟; 陈亚珠; 卫青

    2004-01-01

The probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, this is the first time that the PDF method has been put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6-6.5 dimensional complex dynamical systems. It is found that the PDF is not symmetrically distributed when the time delay is small, while the PDF satisfies a Gaussian distribution when the time delay is large enough. A cluster effect mechanism is presented to explain this phenomenon. By studying the shape of the PDFs, it is clearly indicated that the role played by the time delay is more important than that of the embedding dimension in the reconstruction. Results demonstrate that the PDF method represents a promising numerical approach for the observation of the reconstructed attractor structure and may provide more information and new diagnostic potential for the analyzed cardiac system.
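
    The delay-embedding step underlying the PDF method can be illustrated with a short sketch; the surrogate series and parameters are invented and merely show how the spread of the phase points grows with the time delay (the cluster effect mentioned above):

    ```python
    import numpy as np

    # Reconstruct phase points x_i = (s_i, s_{i+tau}, ..., s_{i+(m-1)tau}).
    def delay_embed(series, m, tau):
        n = len(series) - (m - 1) * tau
        return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

    rng = np.random.default_rng(1)
    s = np.cumsum(rng.standard_normal(2000))   # surrogate "RR interval" series
    for tau in (1, 5, 20):
        pts = delay_embed(s, m=3, tau=tau)
        d = pts[:, 1] - pts[:, 0]              # small tau -> strongly clustered
        print(tau, d.std())                    # spread grows with time delay
    ```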

  14. Investigation of photon detection probability dependence of SPADnet-I digital photon counter as a function of angle of incidence, wavelength and polarization

    Energy Technology Data Exchange (ETDEWEB)

    Játékos, Balázs, E-mail: jatekosb@eik.bme.hu; Ujhelyi, Ferenc; Lőrincz, Emőke; Erdei, Gábor

    2015-01-01

SPADnet-I is a prototype, fully digital, high spatial and temporal resolution silicon photon counter, based on standard CMOS imaging technology, developed by the SPADnet consortium. Being a novel device, the exact dependence of the photon detection probability (PDP) of SPADnet-I was not known as a function of the angle of incidence, wavelength and polarization of the incident light. The targeted application area of this sensor is next-generation PET detector modules, where it will be used along with LYSO:Ce scintillators. Hence, we performed an extended investigation of the PDP over a wide range of angles of incidence (0° to 80°), concentrating on a 60-nm-wide wavelength interval around the characteristic emission peak (λ = 420 nm) of the scintillator. In the case where the sensor was optically coupled to a scintillator, our experiments showed a notable dependence of the PDP on angle, polarization and wavelength. The sensor has an average PDP of approximately 30% from 0° to 60° angle of incidence, beyond which it starts to drop rapidly. The PDP turned out not to be polarization dependent below 30°. If the sensor is used without a scintillator (i.e. the light source is in air), the polarization dependence is much less pronounced, appearing only above 50°.

  15. Consequences of wave function orthogonality for medium energy nuclear reactions

    International Nuclear Information System (INIS)

    Noble, J.V.

    1978-01-01

In the usual models of high-energy bound-state to continuum transitions no account is taken of the orthogonality of the bound and continuum wave functions. This orthogonality induces considerable cancellations in the overlap integrals expressing the transition amplitudes for reactions such as (e,e'p), (γ,p), and (π,N), which are simply not included in the distorted-wave Born-approximation calculations which to date remain the only computationally feasible hierarchy of approximations. The object of this paper is to present a new formulation of the bound-state to continuum transition problem, based upon flux conservation, in which the orthogonality of wave functions is taken into account ab initio. The new formulation, while exact if exact wave functions are used, offers the possibility of using approximate wave functions for the continuum states without doing violence to the cancellations induced by orthogonality. The method is applied to single-particle states obeying the Schroedinger and Dirac equations, as well as to a coupled-channel model in which absorptive processes can be described in a fully consistent manner. Several types of absorption vertex are considered, and in the (π,N) case the equivalence of pseudoscalar and pseudovector πNN coupling is seen to follow directly from wave function orthogonality.
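
    A toy numerical illustration of the orthogonality issue, with invented radial functions and transition operator rather than the paper's coupled-channel machinery, might look like:

    ```python
    import numpy as np

    # An approximate continuum wave generally has a spurious overlap with the
    # bound state; projecting it out changes the transition integral visibly.
    r = np.linspace(1e-3, 30.0, 3000)
    dr = r[1] - r[0]
    u_b = r * np.exp(-r)                       # toy bound radial function
    u_b /= np.sqrt(np.sum(u_b**2) * dr)
    u_c = np.sin(0.8 * r) * (1 - np.exp(-r))   # toy approximate continuum wave

    overlap = np.sum(u_b * u_c) * dr
    u_c_orth = u_c - overlap * u_b             # enforce <bound|continuum> = 0

    op = r * np.exp(-0.5 * r)                  # toy transition operator kernel
    print(np.sum(u_b * op * u_c) * dr,         # amplitude without orthogonality
          np.sum(u_b * op * u_c_orth) * dr)    # amplitude with it enforced
    ```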

  16. Descriptions of carbon isotopes within the energy density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Atef [Fundamental and Applied Sciences Department, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia and Department of Physics, Al-Azhar University, 71524 Assiut (Egypt); Cheong, Lee Yen; Yahya, Noorhana [Fundamental and Applied Sciences Department, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak (Malaysia); Tammam, M. [Department of Physics, Al-Azhar University, 71524 Assiut (Egypt)

    2014-10-24

Within the energy density functional (EDF) theory, the structure properties of carbon isotopes are systematically studied. Shell model calculations are done for both even-A and odd-A nuclei to study the structure of neutron-rich carbon isotopes. The EDF theory indicates single-neutron halo structures in {sup 15}C, {sup 17}C and {sup 19}C, and two-neutron halo structures in {sup 16}C and {sup 22}C nuclei. It is also found that close to the neutron drip line there is a striking increase in the neutron radii and a decrease in the binding energies (BE), both tightly related to the blocking effect; correspondingly, the blocking effect plays a significant role in the shell model configurations.

  17. Descriptions of carbon isotopes within the energy density functional theory

    International Nuclear Information System (INIS)

    Ismail, Atef; Cheong, Lee Yen; Yahya, Noorhana; Tammam, M.

    2014-01-01

Within the energy density functional (EDF) theory, the structure properties of carbon isotopes are systematically studied. Shell model calculations are done for both even-A and odd-A nuclei to study the structure of neutron-rich carbon isotopes. The EDF theory indicates single-neutron halo structures in 15C, 17C and 19C, and two-neutron halo structures in 16C and 22C nuclei. It is also found that close to the neutron drip line there is a striking increase in the neutron radii and a decrease in the binding energies (BE), both tightly related to the blocking effect; correspondingly, the blocking effect plays a significant role in the shell model configurations.

  18. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    Science.gov (United States)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.
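
    For orientation, a broken power law dN/dS of the kind found compatible with the data can be written down directly; the normalization, break flux and indices below are invented placeholders:

    ```python
    import numpy as np

    # dN/dS with a single break at Sb: index n1 above the break, n2 below.
    def dnds(S, A=1.0, Sb=1e-8, n1=2.5, n2=1.6):
        S = np.asarray(S, dtype=float)
        return np.where(S >= Sb, A * (S / Sb) ** -n1, A * (S / Sb) ** -n2)

    S = np.logspace(-12, -6, 7)   # photon flux (arbitrary units)
    print(dnds(S))
    ```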

  19. Non-empirical energy density functional for the nuclear structure

    International Nuclear Information System (INIS)

Rotival, V.

    2008-09-01

The energy density functional (EDF) formalism is the tool of choice for large-scale low-energy nuclear structure calculations, both for stable experimentally known nuclei whose properties are accurately reproduced and for systems that are only theoretically predicted. We highlight in the present dissertation the capability of EDF methods to tackle exotic phenomena appearing at the very limits of stability, that is, the formation of nuclear halos. We devise a new quantitative and model-independent method that characterizes the existence and properties of halos in medium- to heavy-mass nuclei, and quantifies the impact of pairing correlations and the choice of the energy functional on the formation of such systems. These results are found to be limited by the predictive power of currently used EDFs that rely on fitting to known experimental data. In the second part of this dissertation, we initiate the construction of non-empirical EDFs that make use of the new paradigm for vacuum nucleon-nucleon interactions set by so-called low-momentum interactions generated through the application of renormalization group techniques. These soft-core vacuum potentials are used as a stepping stone of a long-term strategy that connects modern many-body techniques and EDF methods. We provide guidelines for designing several non-empirical models that include in-medium many-body effects at various levels of approximation and can be handled in state-of-the-art nuclear structure codes. In the present work, the first step is taken through the adjustment of an operator representation of low-momentum vacuum interactions using a custom-designed parallel evolutionary algorithm. The first results highlight the possibility to grasp most of the relevant physics for low-energy nuclear structure using this numerically convenient Gaussian vertex. (author)

  20. Rydberg energies using excited state density functional theory

    International Nuclear Information System (INIS)

    Cheng, C.-L.; Wu Qin; Van Voorhis, Troy

    2008-01-01

    We utilize excited state density functional theory (eDFT) to study Rydberg states in atoms. We show both analytically and numerically that semilocal functionals can give quite reasonable Rydberg energies from eDFT, even in cases where time dependent density functional theory (TDDFT) fails catastrophically. We trace these findings to the fact that in eDFT the Kohn-Sham potential for each state is computed using the appropriate excited state density. Unlike the ground state potential, which typically falls off exponentially, the sequence of excited state potentials has a component that falls off polynomially with distance, leading to a Rydberg-type series. We also address the rigorous basis of eDFT for these systems. Perdew and Levy have shown using the constrained search formalism that every stationary density corresponds, in principle, to an exact stationary state of the full many-body Hamiltonian. In the present context, this means that the excited state DFT solutions are rigorous as long as they deliver the minimum noninteracting kinetic energy for the given density. We use optimized effective potential techniques to show that, in some cases, the eDFT Rydberg solutions appear to deliver the minimum kinetic energy because the associated density is not pure state v-representable. We thus find that eDFT plays a complementary role to constrained DFT: The former works only if the excited state density is not the ground state of some potential while the latter applies only when the density is a ground state density.

  1. Probability distribution of dose rates in the body tissue as a function of the rhythm of Sr90 administration and the age of animals

    International Nuclear Information System (INIS)

    Rasin, I.M.; Sarapul'tsev, I.A.

    1975-01-01

The probability distributions of tissue radiation dose rates in the skeleton were studied in experiments on swine and dogs. When Sr-90 was introduced into the organism from the day of birth up to 90 days of age, the dose-rate probability distribution is characterized by one aggregate or, for adult animals, by two independent aggregates. Each of these aggregates corresponds to the normal distribution law.

  2. Energy coupling function and solar wind-magnetosphere dynamo

    International Nuclear Information System (INIS)

    Kan, J.R.; Lee, L.C.

    1979-01-01

The power delivered by the solar wind dynamo to the open magnetosphere is calculated based on the concept of field line reconnection, independent of the MHD steady reconnection theories. By recognizing a previously overlooked geometrical relationship between the reconnection electric field and the magnetic field, the calculated power is shown to be approximately proportional to the Akasofu-Perreault energy coupling function for the magnetospheric substorm. In addition to the polar cap potential, field line reconnection also gives rise to parallel electric fields on open field lines in the high-latitude cusp and the polar cap regions.

  3. Relativistic Energy Density Functionals: Exotic modes of excitation

    International Nuclear Information System (INIS)

    Vretenar, D.; Paar, N.; Marketin, T.

    2008-01-01

    The framework of relativistic energy density functionals has been applied to the description of a variety of nuclear structure phenomena, not only in spherical and deformed nuclei along the valley of β-stability, but also in exotic systems with extreme isospin values and close to the particle drip-lines. Dynamical aspects of exotic nuclear structure have been investigated with the relativistic quasiparticle random-phase approximation. We present results for the evolution of low-lying dipole (pygmy) strength in neutron-rich nuclei, and charged-current neutrino-nucleus cross sections.

  4. Strict calculation of electron energy distribution functions in inhomogeneous plasmas

    International Nuclear Information System (INIS)

    Winkler, R.

    1996-01-01

The objective of this paper is to report on strict calculations of the velocity or energy distribution function and related macroscopic properties of electrons, obtained from appropriate electron kinetic equations under various plasma conditions, and to contribute to a better understanding of the electron behaviour in inhomogeneous plasma regions. In particular, the spatial relaxation of plasma electrons acted upon by uniform electric fields, the response of plasma electrons to spatial disturbances of the electric field, the electron kinetics under the impact of space-charge field confinement in the dc column plasma, and the electron velocity distribution in strong fields, such as occur in the electrode regions of a dc glow discharge, are considered. (author)

  5. Building a universal nuclear energy density functional (UNEDF)

    Energy Technology Data Exchange (ETDEWEB)

    Nazarewicz, Witold [Univ. of Tennessee, Knoxville, TN (United States)

    2012-07-01

    The long-term vision initiated with UNEDF is to arrive at a comprehensive, quantitative, and unified description of nuclei and their reactions, grounded in the fundamental interactions between the constituent nucleons. We seek to replace current phenomenological models of nuclear structure and reactions with a well-founded microscopic theory that delivers maximum predictive power with well-quantified uncertainties. Specifically, the mission of this project has been three-fold: First, to find an optimal energy density functional (EDF) using all our knowledge of the nucleonic Hamiltonian and basic nuclear properties. Second, to apply the EDF theory and its extensions to validate the functional using all the available relevant nuclear structure and reaction data. Third, to apply the validated theory to properties of interest that cannot be measured, in particular the properties needed for reaction theory.

  6. Structures and potential energy functions of Pu3 molecule

    International Nuclear Information System (INIS)

    Meng Daqiao; Jiang Gang; Liu Xiaoya; Luo Deli; Zhu Zhenghe

    2001-01-01

The density functional (B3LYP) method with a relativistic effective core potential (RECP) has been used to optimize the structures of Pu2 and Pu3 molecules. The results show that the ground states of Pu2 and Pu3 are of D∞h and D3h symmetry, with multiplicities of 13 and 19, respectively. The spectral constants of Pu2, ωe = 52.3845 cm-1 and ωexe = 0.0201 cm-1, and the harmonic frequencies of Pu3, ν1 = 56.9007 cm-1, ν2 = 57.1816 cm-1 and ν3 = 64.0785 cm-1, have also been obtained at the B3LYP/RECP level. The potential energy functions of Pu2 and Pu3 have been derived, for the first time as far as is known, from normal equation fitting and the many-body expansion theory.

  7. Energy and enthalpy distribution functions for a few physical systems.

    Science.gov (United States)

    Wu, K L; Wei, J H; Lai, S K; Okabe, Y

    2007-08-02

The present work is devoted to extracting the energy or enthalpy distribution function of a physical system from the moments of the distribution using the maximum entropy method. This distribution theory has the salient trait that it utilizes only experimental thermodynamic data. The calculated distribution functions provide invaluable insight into the state or phase behavior of the physical systems under study. As concrete evidence, we demonstrate the elegance of the distribution theory by studying first a test case of a two-dimensional six-state Potts model, for which simulation results are available for comparison, then the biphasic behavior of the binary alloy Na-K, whose excess heat capacity, experimentally observed to fall in a narrow temperature range, has yet to be clarified theoretically, and finally the thermally induced state behavior of a collection of 16 proteins.
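
    The moment-based maximum entropy construction can be sketched on a discrete energy grid by minimizing the convex dual of the entropy problem; the grid and target moments below are invented, not the thermodynamic data used in the paper:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    # Find p(E) maximizing entropy subject to matching given moments <E>, <E^2>.
    E = np.linspace(0.0, 10.0, 201)
    phi = np.vstack([E, E**2])           # moment functions
    mu = np.array([4.0, 18.0])           # prescribed moments <E>, <E^2>

    def dual(lam):                       # convex dual: log Z(lam) + lam . mu
        return logsumexp(-lam @ phi) + lam @ mu

    res = minimize(dual, x0=np.zeros(2), method="BFGS")
    p = np.exp(-res.x @ phi)
    p /= p.sum()
    print(p @ E, p @ (E**2))             # should reproduce mu at the optimum
    ```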

  8. Toward a generalized probability theory: conditional probabilities

    International Nuclear Information System (INIS)

    Cassinelli, G.

    1979-01-01

    The main mathematical object of interest in the quantum logic approach to the foundations of quantum mechanics is the orthomodular lattice and a set of probability measures, or states, defined by the lattice. This mathematical structure is studied per se, independently from the intuitive or physical motivation of its definition, as a generalized probability theory. It is thought that the building-up of such a probability theory could eventually throw light on the mathematical structure of Hilbert-space quantum mechanics as a particular concrete model of the generalized theory. (Auth.)

  9. Density dependence of the nuclear energy-density functional

    Science.gov (United States)

    Papakonstantinou, Panagiota; Park, Tae-Sun; Lim, Yeunhwan; Hyun, Chang Ho

    2018-01-01

Background: The explicit density dependence in the coupling coefficients entering the nonrelativistic nuclear energy-density functional (EDF) is understood to encode effects of three-nucleon forces and dynamical correlations. The necessity for the density-dependent coupling coefficients to assume the form of a preferably small fractional power of the density ρ is empirical and the power is often chosen arbitrarily. Consequently, precision-oriented parametrizations risk overfitting in the regime of saturation, and extrapolations in dilute or dense matter may lose predictive power. Purpose: Beginning with the observation that the Fermi momentum kF, i.e., the cube root of the density, is a key variable in the description of Fermi systems, we first wish to examine whether a power hierarchy in a kF expansion can be inferred from the properties of homogeneous matter in a domain of densities relevant for nuclear structure and neutron stars. For subsequent applications we want to determine a functional that is of good quality but not overtrained. Method: For the EDF, we systematically fit polynomial and other functions of ρ^(1/3) to existing microscopic, variational calculations of the energy of symmetric and pure neutron matter (pseudodata) and analyze the behavior of the fits. We select a form and a set of parameters, which we found robust, and examine the parameters' naturalness and the quality of the resulting extrapolations. Results: A statistical analysis confirms that low-order terms such as ρ^(1/3) and ρ^(2/3) are the most relevant ones in the nuclear EDF beyond lowest order. It also hints at a different power hierarchy for symmetric vs. pure neutron matter, supporting the need for more than one density-dependent term in nonrelativistic EDFs. The functional we propose easily accommodates known or adopted properties of nuclear matter near saturation. More importantly, upon extrapolation to dilute or asymmetric matter, it reproduces a range of existing microscopic
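
    The fitting strategy can be sketched as a linear least-squares fit of the energy per particle to powers of ρ^(1/3); the pseudodata below come from an invented toy curve, not the microscopic calculations used in the paper:

    ```python
    import numpy as np

    # Toy E/A(rho) in MeV with a minimum at rho = 0.16 fm^-3 (invented).
    rho = np.linspace(0.02, 0.32, 16)              # fm^-3
    e_pseudo = -16.0 + 120.0 * (rho - 0.16) ** 2

    # Expand E/A in terms rho^(k/3), i.e., powers of the Fermi momentum,
    # and fit the coefficients by linear least squares.
    powers = [2, 3, 4, 5]
    X = np.column_stack([rho ** (k / 3.0) for k in powers])
    coef, *_ = np.linalg.lstsq(X, e_pseudo, rcond=None)
    print(dict(zip([f"rho^({k}/3)" for k in powers], coef)))
    ```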

  10. Probability of Failure in Random Vibration

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    1988-01-01

Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out-crossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval and thus for the first-passage probability...
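
    A brute-force Monte Carlo estimate of a first-passage probability, of the kind such integral-equation approximations are checked against, can be sketched for a stationary Ornstein-Uhlenbeck response; all parameters below are invented:

    ```python
    import numpy as np

    # Probability that dx = -x dt + sqrt(2) dW (stationary variance 1)
    # crosses the barrier b at least once within time T = n_steps * dt.
    rng = np.random.default_rng(5)
    n_paths, n_steps, dt, b = 20_000, 1000, 0.01, 2.5

    x = rng.standard_normal(n_paths)            # start from stationary N(0,1)
    crossed = x >= b
    for _ in range(n_steps):
        x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal(n_paths)
        crossed |= x >= b

    print(crossed.mean())    # estimated first-passage probability in [0, T]
    ```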

  11. Measurement of the nucleon structure function using high energy muons

    International Nuclear Information System (INIS)

    Meyers, P.D.

    1983-12-01

We have measured the inclusive deep inelastic scattering of muons on nucleons in iron using beams of 93 and 215 GeV muons. To perform this measurement, we have built and operated the Multimuon Spectrometer (MMS) in the muon beam at Fermilab. The MMS is a magnetized iron target/spectrometer/calorimeter which provides 5.61 kg/cm^2 of target, 9% momentum resolution on scattered muons, and a direct measure of total hadronic energy with resolution σ_ν = 1.4√ν (GeV). In the distributed target, the average beam energies at the interaction are 88.0 and 209 GeV. Using the known form of the radiatively-corrected electromagnetic cross section, we extract the structure function F2(x,Q^2) with a typical precision of 2% over the range 5 < Q^2 < 200 GeV^2/c^2. We compare our measurements to the predictions of lowest order quantum chromodynamics (QCD) and find a best fit value of the QCD scale parameter Λ_LO = 230 ± 40 (stat) ± 80 (syst) MeV/c, assuming R = 0 and without applying Fermi motion corrections. Comparing the cross sections at the two beam energies, we measure R = -0.06 ± 0.06 (stat) ± 0.11 (syst). Our measurements show qualitative agreement with QCD, but quantitative comparison is hampered by phenomenological uncertainties. The experimental situation is quite good, with substantial agreement between our measurements and those of others. 86 references

  12. Response function measurement of plastic scintillator for high energy neutrons

    International Nuclear Information System (INIS)

    Sanami, Toshiya; Ban, Syuichi; Takahashi, Kazutoshi; Takada, Masashi

    2003-01-01

The response functions and detection efficiencies of 2''φ x 2''L plastic (PilotU) and NE213 liquid (2''NE213) scintillators, which were used for the measurement of secondary neutrons from high-energy electron-induced reactions, were measured at the Heavy Ion Medical Accelerator in Chiba (HIMAC). High-energy neutrons were produced via 400 MeV/n C beam bombardment on a thick graphite target. The detectors were placed at 15 deg with respect to the C beam axis, 5 m away from the target. As a standard, a 5''φ x 5''L NE213 liquid scintillator (5''NE213) was also placed at the same position. The neutron energy was determined by the time-of-flight method with the beam pickup scintillator in front of the target. In front of the detectors, veto scintillators were placed to remove charged-particle events. All detector signals were collected in list mode, event by event. We deduced the neutron spectrum for each detector. The efficiency curves for PilotU and 2''NE213 were determined on the basis of the 5''NE213 neutron spectrum and its efficiency calculated by the CECIL code. (author)
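
    The kinematic relation behind the time-of-flight method is simple enough to sketch; the flight path and timing below are illustrative, not the experiment's actual values:

    ```python
    # Relativistic neutron kinetic energy from a measured time of flight.
    C = 299_792_458.0        # speed of light, m/s
    MN = 939.565             # neutron rest mass, MeV

    def tof_energy(t_ns, L=5.0):
        """Kinetic energy (MeV) for a flight time t_ns (ns) over path L (m)."""
        beta = (L / (t_ns * 1e-9)) / C
        gamma = 1.0 / (1.0 - beta**2) ** 0.5
        return MN * (gamma - 1.0)

    print(tof_energy(100.0))  # e.g., a 100 ns flight over 5 m -> ~13 MeV
    ```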

  13. E1 and M1 strength functions at low energy

    Science.gov (United States)

    Schwengner, Ronald; Massarczyk, Ralph; Bemmerer, Daniel; Beyer, Roland; Junghans, Arnd R.; Kögler, Toni; Rusev, Gencho; Tonchev, Anton P.; Tornow, Werner; Wagner, Andreas

    2017-09-01

    We report photon-scattering experiments using bremsstrahlung at the γELBE facility of Helmholtz-Zentrum Dresden-Rossendorf and using quasi-monoenergetic, polarized γ beams at the HIγS facility of the Triangle Universities Nuclear Laboratory in Durham. To deduce the photoabsorption cross sections at high excitation energy and high level density, unresolved strength in the quasicontinuum of nuclear states has been taken into account. In the analysis of the spectra measured by using bremsstrahlung at γELBE, we perform simulations of statistical γ-ray cascades using the code γDEX to estimate intensities of inelastic transitions to low-lying excited states. Simulated average branching ratios are compared with model-independent branching ratios obtained from spectra measured by using monoenergetic γ beams at HIγS. E1 strength in the energy region of the pygmy dipole resonance is discussed in nuclei around mass 90 and in xenon isotopes. M1 strength in the region of the spin-flip resonance is also considered for xenon isotopes. The dipole strength function of 74Ge deduced from γELBE experiments is compared with the one obtained from experiments at the Oslo Cyclotron Laboratory. The low-energy upbend seen in the Oslo data is interpreted as M1 strength on the basis of shell-model calculations.

  14. E1 and M1 strength functions at low energy

    Directory of Open Access Journals (Sweden)

    Schwengner Ronald

    2017-01-01

    Full Text Available We report photon-scattering experiments using bremsstrahlung at the γELBE facility of Helmholtz-Zentrum Dresden-Rossendorf and using quasi-monoenergetic, polarized γ beams at the HIγS facility of the Triangle Universities Nuclear Laboratory in Durham. To deduce the photoabsorption cross sections at high excitation energy and high level density, unresolved strength in the quasicontinuum of nuclear states has been taken into account. In the analysis of the spectra measured by using bremsstrahlung at γELBE, we perform simulations of statistical γ-ray cascades using the code γDEX to estimate intensities of inelastic transitions to low-lying excited states. Simulated average branching ratios are compared with model-independent branching ratios obtained from spectra measured by using monoenergetic γ beams at HIγS. E1 strength in the energy region of the pygmy dipole resonance is discussed in nuclei around mass 90 and in xenon isotopes. M1 strength in the region of the spin-flip resonance is also considered for xenon isotopes. The dipole strength function of 74Ge deduced from γELBE experiments is compared with the one obtained from experiments at the Oslo Cyclotron Laboratory. The low-energy upbend seen in the Oslo data is interpreted as M1 strength on the basis of shell-model calculations.

  15. Use of probability tables for propagating uncertainties in neutronics

    International Nuclear Information System (INIS)

    Coste-Delclaux, M.; Diop, C.M.; Lahaye, S.

    2017-01-01

    Highlights: • Moment-based probability table formalism is described. • Representation by probability tables of any uncertainty distribution is established. • Multiband equations for two kinds of uncertainty propagation problems are solved. • Numerical examples are provided and validated against Monte Carlo simulations. - Abstract: Probability tables are a generic tool that allows representing any random variable whose probability density function is known. In the field of nuclear reactor physics, this tool is currently used to represent the variation of cross-sections versus energy (neutron transport codes TRIPOLI4®, MCNP, APOLLO2, APOLLO3®, ECCO/ERANOS…). In the present article we show how we can propagate uncertainties, thanks to a probability table representation, through two simple physical problems: an eigenvalue problem (neutron multiplication factor) and a depletion problem.
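
    The basic idea of representing an uncertain quantity by a probability table and propagating it through a model can be sketched as follows; the input distribution, band count, and toy response are assumptions for illustration, not the moment-based formalism of the paper:

    ```python
    import numpy as np

    # A probability table is a small set of (probability, value) bands.
    # Propagating an uncertain input then means evaluating the model per band.
    rng = np.random.default_rng(2)
    samples = rng.lognormal(mean=0.0, sigma=0.25, size=100_000)  # uncertain input

    N = 8
    edges = np.quantile(samples, np.linspace(0, 1, N + 1))
    probs = np.full(N, 1.0 / N)                  # equiprobable bands
    values = np.array([samples[(samples >= lo) & (samples <= hi)].mean()
                       for lo, hi in zip(edges[:-1], edges[1:])])

    def model(sigma):            # toy response, e.g., attenuation exp(-sigma*t)
        return np.exp(-sigma * 1.5)

    out_mean = probs @ model(values)             # propagated mean via the table
    print(out_mean, model(samples).mean())       # close to direct Monte Carlo
    ```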

  16. Introduction to probability theory with contemporary applications

    CERN Document Server

    Helms, Lester L

    2010-01-01

This introduction to probability theory transforms a highly abstract subject into a series of coherent concepts. Its extensive discussions and clear examples, written in plain language, expose students to the rules and methods of probability. Suitable for an introductory probability course, this volume requires abstract and conceptual thinking skills and a background in calculus. Topics include classical probability, set theory, axioms, probability functions, random and independent random variables, expected values, and covariance and correlations. Additional subjects include stochastic processes.

  17. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    Science.gov (United States)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.
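
    The modified IEM mixing model mentioned above has a simple particle form; a sketch of the standard IEM relaxation (with invented particle ensemble and time scale) is:

    ```python
    import numpy as np

    # IEM (Interaction-by-Exchange-with-the-Mean): each notional particle
    # relaxes toward the local mean, d(phi)/dt = -(phi - <phi>) / (2*tau).
    rng = np.random.default_rng(4)
    phi = rng.normal(0.0, 1.0, 10_000)   # particle compositions (toy scalar)
    tau, dt = 1.0, 0.01

    for _ in range(200):
        phi += -dt * (phi - phi.mean()) / (2.0 * tau)

    print(phi.mean(), phi.var())  # mean preserved, variance decays ~exp(-t/tau)
    ```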

  18. Converged three-dimensional quantum mechanical reaction probabilities for the F + H2 reaction on a potential energy surface with realistic entrance and exit channels and comparisons to results for three other surfaces

    Science.gov (United States)

    Lynch, Gillian C.; Halvick, Philippe; Zhao, Meishan; Truhlar, Donald G.; Yu, Chin-Hui; Kouri, Donald J.; Schwenke, David W.

    1991-01-01

Accurate three-dimensional quantum mechanical reaction probabilities are presented for the reaction F + H2 → HF + H on the new global potential energy surface 5SEC for total angular momentum J = 0 over a range of translational energies from 0.15 to 4.6 kcal/mol. It is found that the v′ = 3 HF vibrational product state has a threshold as low as that for v′ = 2.

  19. Harris functional and related methods for calculating total energies in density-functional theory

    International Nuclear Information System (INIS)

    Averill, F.W.; Painter, G.S.

    1990-01-01

    The simplified energy functional of Harris has given results of useful accuracy for systems well outside the limits of weakly interacting fragments for which the method was originally proposed. In the present study, we discuss the source of the frequent good agreement of the Harris energy with full Kohn-Sham self-consistent results. A procedure is described for extending the applicability of the scheme to more strongly interacting systems by going beyond the frozen-atom fragment approximation. A gradient-force expression is derived, based on the Harris functional, which accounts for errors in the fragment charge representation. Results are presented for some diatomic molecules, illustrating the points of this study

  20. Building A Universal Nuclear Energy Density Functional (UNEDF)

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, Joe [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Furnstahl, Dick [The Ohio State Univ., Columbus, OH (United States); Horoi, Mihai [Central Michigan Univ., Mount Pleasant, MI (United States); Lusk, Rusty [Argonne National Lab. (ANL), Argonne, IL (United States); Nazarewicz, Witek [Univ. of Tennessee, Knoxville, TN (United States); Ng, Esmond [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Thompson, Ian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vary, James [Iowa State Univ., Ames, IA (United States)

    2012-09-30

    During the period of Dec. 1 2006 - Jun. 30, 2012, the UNEDF collaboration carried out a comprehensive study of all nuclei, based on the most accurate knowledge of the strong nuclear interaction, the most reliable theoretical approaches, the most advanced algorithms, and extensive computational resources, with a view towards scaling to the petaflop platforms and beyond. The long-term vision initiated with UNEDF is to arrive at a comprehensive, quantitative, and unified description of nuclei and their reactions, grounded in the fundamental interactions between the constituent nucleons. We seek to replace current phenomenological models of nuclear structure and reactions with a well-founded microscopic theory that delivers maximum predictive power with well-quantified uncertainties. Specifically, the mission of this project has been three-fold: first, to find an optimal energy density functional (EDF) using all our knowledge of the nucleonic Hamiltonian and basic nuclear properties; second, to apply the EDF theory and its extensions to validate the functional using all the available relevant nuclear structure and reaction data; third, to apply the validated theory to properties of interest that cannot be measured, in particular the properties needed for reaction theory. The main physics areas of UNEDF, defined at the beginning of the project, were: ab initio structure; ab initio functionals; DFT applications; DFT extensions; reactions.

  1. Electron energy distribution function control in gas discharge plasmas

    International Nuclear Information System (INIS)

    Godyak, V. A.

    2013-01-01

The formation of the electron energy distribution function (EEDF) and electron temperature in low-temperature gas discharge plasmas is analyzed in the framework of local and non-local electron kinetics. It is shown that, contrary to the local case typical for plasmas in a uniform electric field, there is the possibility of EEDF modification under conditions of non-local electron kinetics in strongly non-uniform electric fields. Such conditions occur naturally in some self-organized steady-state dc and rf discharge plasmas, and they suggest a variety of artificial methods for EEDF modification. EEDF modification and electron temperature control under non-equilibrium conditions, occurring naturally or stimulated by different kinds of plasma disturbances, are illustrated with numerous experiments. The necessary conditions for EEDF modification in gas discharge plasmas are formulated.

  2. Theoretical characterization of electron energy distribution function in RF plasmas

    International Nuclear Information System (INIS)

    Capitelli, M.; Capriati, G.; Dilonardo, M.; Gorse, C.; Longo, S.

    1993-01-01

Different methods for the modeling of low-temperature plasmas of both technological and fundamental interest are discussed. The main concept of all these models is the electron energy distribution function (eedf), which is necessary to calculate the rate coefficients for any chemical reaction involving electrons. Results of eedf calculations in homogeneous SF6 and SiH4 plasmas are discussed, based on the solution of the time-dependent Boltzmann equation. The space-dependent eedf in an RF discharge in He is calculated taking into account the sheath oscillations by a Monte Carlo model, with the plasma heating mechanism and the electric field determined by using a fluid model. The need to take into account the ambipolar diffusion of electrons in RF discharge modeling is stressed. A self-consistent model based on coupling the equations of the fluid model and the chemical kinetics ones is presented. (orig.)

  3. Measurement of the nucleon structure function using high energy muons

    Energy Technology Data Exchange (ETDEWEB)

    Meyers, P.D.

    1983-12-01

We have measured the inclusive deep inelastic scattering of muons on nucleons in iron using beams of 93 and 215 GeV muons. To perform this measurement, we have built and operated the Multimuon Spectrometer (MMS) in the muon beam at Fermilab. The MMS is a magnetized iron target/spectrometer/calorimeter which provides 5.61 kg/cm^2 of target, 9% momentum resolution on scattered muons, and a direct measure of total hadronic energy with resolution σ_ν = 1.4√ν (GeV). In the distributed target, the average beam energies at the interaction are 88.0 and 209 GeV. Using the known form of the radiatively-corrected electromagnetic cross section, we extract the structure function F2(x,Q^2) with a typical precision of 2% over the range 5 < Q^2 < 200 GeV^2/c^2. We compare our measurements to the predictions of lowest order quantum chromodynamics (QCD) and find a best fit value of the QCD scale parameter Λ_LO = 230 ± 40 (stat) ± 80 (syst) MeV/c, assuming R = 0 and without applying Fermi motion corrections. Comparing the cross sections at the two beam energies, we measure R = -0.06 ± 0.06 (stat) ± 0.11 (syst). Our measurements show qualitative agreement with QCD, but quantitative comparison is hampered by phenomenological uncertainties. The experimental situation is quite good, with substantial agreement between our measurements and those of others. 86 references.

  4. Pressure effects on membrane-based functions and energy ...

    African Journals Online (AJOL)

    1997-11-06

Nov 6, 1997 ... there are few laboratories working with pressure on whole animals, probably .... after four weeks under pressure, nucleotide muscle content, enzyme .... compressed fish show levels which are not significantly different from ...

  5. Probability theory a foundational course

    CERN Document Server

    Pakshirajan, R P

    2013-01-01

    This book shares the dictum of J. L. Doob in treating Probability Theory as a branch of Measure Theory and establishes this relation early. Probability measures in product spaces are introduced right at the start by way of laying the ground work to later claim the existence of stochastic processes with prescribed finite dimensional distributions. Other topics analysed in the book include supports of probability measures, zero-one laws in product measure spaces, Erdos-Kac invariance principle, functional central limit theorem and functional law of the iterated logarithm for independent variables, Skorohod embedding, and the use of analytic functions of a complex variable in the study of geometric ergodicity in Markov chains. This book is offered as a text book for students pursuing graduate programs in Mathematics and or Statistics. The book aims to help the teacher present the theory with ease, and to help the student sustain his interest and joy in learning the subject.

  6. Philosophical theories of probability

    CERN Document Server

    Gillies, Donald

    2000-01-01

    The Twentieth Century has seen a dramatic rise in the use of probability and statistics in almost all fields of research. This has stimulated many new philosophical ideas on probability. Philosophical Theories of Probability is the first book to present a clear, comprehensive and systematic account of these various theories and to explain how they relate to one another. Gillies also offers a distinctive version of the propensity theory of probability, and the intersubjective interpretation, which develops the subjective theory.

  7. Studies on functional polymer films utilizing low energy electron beam

    International Nuclear Information System (INIS)

    Ando, Masayuki

    1992-01-01

In adhesives and tackifiers as well, with the expansion of the fields of application, the required characteristics have become more advanced and complex. One of these is the instantaneous hardening of adhesives. In the field of lamination, low-energy electron beam accelerators with a linear filament and accelerating voltages below 300 kV were developed in the 1970s, and interest in the development of electron beam-hardened adhesives has heightened. The authors have carried out research aimed at enhancing the functions of the polymer films obtained by the electron beam hardening reaction, and have developed such adhesives. In this report, the features of the electron beam hardening reaction, the structure and properties of electron beam-hardened polymer films, and the molecular design of electron beam-hardened monomers and oligomers are described. The feature of the electron beam hardening reaction is cross-linking of a high degree while the structure of the oligomers is maintained. By controlling the structure at the time of electron beam hardening, the functions of electron beam-hardened polymer films can be enhanced. (K.I.)

  8. Non-Archimedean Probability

    NARCIS (Netherlands)

    Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned probability zero.

  9. Interpretations of probability

    CERN Document Server

    Khrennikov, Andrei

    2009-01-01

    This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.

  10. Free probability and random matrices

    CERN Document Server

    Mingo, James A

    2017-01-01

    This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.

  11. Dimensional oscillation. A fast variation of energy embedding gives good results with the AMBER potential energy function.

    Science.gov (United States)

    Snow, M E; Crippen, G M

    1991-08-01

    The structure of the AMBER potential energy surface of the cyclic tetrapeptide cyclotetrasarcosyl is analyzed as a function of the dimensionality of coordinate space. It is found that the number of local energy minima decreases as the dimensionality of the space increases until some limit at which point equipotential subspaces appear. The applicability of energy embedding methods to finding global energy minima in this type of energy-conformation space is explored. Dimensional oscillation, a computationally fast variant of energy embedding is introduced and found to sample conformation space widely and to do a good job of finding global and near-global energy minima.

  12. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    Science.gov (United States)

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single-channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo-)experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
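
    In a drastically simplified two-state setting, the cost-function inversion reduces to tuning one rate; the sketch below uses an invented 'pseudo-experimental' open probability rather than PDFs derived from voltage traces:

    ```python
    from scipy.optimize import minimize_scalar

    # Two-state channel C <-> O with opening rate k_co and closing rate k_oc.
    k_oc = 50.0                      # closing rate (1/s), assumed known
    p_open_data = 0.3                # pseudo-experimental open probability

    def cost(k_co):                  # squared mismatch of the stationary
        p_open_model = k_co / (k_co + k_oc)     # open probability
        return (p_open_model - p_open_data) ** 2

    res = minimize_scalar(cost, bounds=(1e-3, 1e4), method="bounded")
    print(res.x)                     # recovered opening rate, ~21.4 1/s
    ```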

  13. Physics as a function of energy and luminosity

    International Nuclear Information System (INIS)

    Ellis, J.

    1984-01-01

In this paper, new physics in the mass range up to the TeV region is discussed. Most of the discussion concerns hadron-hadron (hh) colliders, but electron-positron colliders are also discussed. The cross-sections for new particle production in hh colliders have the general Drell-Yan form, in which the differential luminosity for the collision of partons is included. The formulas, with the parton distributions scaled up from present energies using the Altarelli-Parisi equations, may be approximately correct within a factor of 2 for the production of particles. Some typical parton-parton luminosity functions for proton-proton and proton-antiproton collisions are presented. From the consideration of luminosity, it can be said that pp colliders are to be preferred. Case studies of some of the possible new physics discussed by Zakharov, mainly on Higgs bosons and supersymmetric particles but also with a few remarks about technicolor, are presented. It seems possible to detect technicolor at a large hh collider. The physics reach of different possible hh colliders is summarized in tables, which show the observable production of Higgses up to 1 TeV in mass, the observable masses for gluinos (squarks), and the technicolor observability. The cleanliness of electron-positron colliders compared to hadron-hadron colliders is argued, a guess is given as to the appropriate conversion factors between the energies in electron-positron and hh collisions, the complementarity of electron-positron and hh colliders is urged, and it is argued that a rational mix of world accelerators would include both. (Kato, T.)

  14. Croatian Energy Policy as Function of Regional Development and Employment

    International Nuclear Information System (INIS)

    Potocnik, V.

    2006-01-01

The Republic of Croatia has modest proven fossil fuel (oil and gas) reserves and a relatively abundant renewable energy potential (wind, solar, biomass, geothermal, hydro), distributed mainly in the less developed regions of Croatia. The Croatian energy system is excessively dependent on expensive oil and natural gas (80% of primary energy), compared to the European Union (61%) and the world average (58%). Approximately 60% of total energy is imported, which considerably contributes to the country's very high foreign trade deficit and foreign debt. Focusing Croatian energy policy on improving energy efficiency and implementing renewable energies would significantly increase the opportunities for mitigating rather wide regional development disparities and high unemployment rates, at the same time reducing energy imports, the foreign trade deficit and foreign debt, and contributing to energy security as a part of national security. (author)

  15. Nash equilibrium with lower probabilities

    DEFF Research Database (Denmark)

    Groes, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

    1998-01-01

    We generalize the concept of Nash equilibrium in mixed strategies for strategic form games to allow for ambiguity in the players' expectations. In contrast to other contributions, we model ambiguity by means of so-called lower probability measures or belief functions, which makes it possible...

  16. Uncertainties propagation and global sensitivity analysis of the frequency response function of piezoelectric energy harvesters

    Science.gov (United States)

    Ruiz, Rafael O.; Meruane, Viviana

    2017-06-01

The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the most relevant parameters for the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool for the robust design and prediction of PEH performance.
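
    Prior Monte Carlo propagation of parameter uncertainty to an FRF can be sketched with a one-degree-of-freedom oscillator standing in for the full electromechanical harvester model; all distributions below are invented:

    ```python
    import numpy as np

    # Sample uncertain parameters, evaluate the FRF for each sample, and
    # summarize the resulting ensemble of FRF magnitudes pointwise.
    rng = np.random.default_rng(3)
    n = 2000
    m = rng.normal(1.0, 0.02, n)          # mass (kg), ~2% variability
    k = rng.normal(1.0e4, 5.0e2, n)       # stiffness (N/m)
    c = rng.normal(2.0, 0.2, n)           # damping (N s/m)

    w = np.linspace(50.0, 150.0, 400)     # angular frequency grid (rad/s)
    H = 1.0 / (k[:, None] - m[:, None] * w**2 + 1j * c[:, None] * w)
    mag = np.abs(H)

    # Pointwise mean and 5-95% band of the FRF magnitude across the ensemble
    print(mag.mean(axis=0).max(), np.percentile(mag, [5, 95], axis=0).shape)
    ```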

  17. The quantum probability calculus

    International Nuclear Information System (INIS)

    Jauch, J.M.

    1976-01-01

    The Wigner anomaly (1932) for the joint distribution of noncompatible observables is an indication that the classical probability calculus is not applicable for quantum probabilities. It should, therefore, be replaced by another, more general calculus, which is specifically adapted to quantal systems. In this article this calculus is exhibited and its mathematical axioms and the definitions of the basic concepts such as probability field, random variable, and expectation values are given. (B.R.H)

  18. Changes in cotton gin energy consumption apportioned by ten functions

    Science.gov (United States)

    The public is concerned about air quality and sustainability. Cotton producers, gin owners and plant managers are concerned about rising energy prices. Both have an interest in cotton gin energy consumption trends. Changes in cotton gins’ energy consumption over the past fifty years, a period of ...

  19. Association between Energy Availability and Menstrual Function in ...

    African Journals Online (AJOL)

Energy intake (EI) minus exercise energy expenditure (EEE), normalized to fat-free mass (FFM), determined EA. EI was determined by weighing all food and liquid consumed over three consecutive days. EEE was determined after isolating and deducting energy expended in exercise or physical activity above lifestyle ...
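
    The EA computation described here is a one-line formula; a sketch with invented example values is:

    ```python
    # Energy availability: EA = (EI - EEE) / FFM, in kcal per kg FFM per day.
    def energy_availability(ei_kcal, eee_kcal, ffm_kg):
        return (ei_kcal - eee_kcal) / ffm_kg

    print(energy_availability(2200.0, 500.0, 45.0))  # ~37.8 kcal/kg FFM/day
    ```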

  20. Recycling Energy to Restore Impaired Ankle Function during Human Walking

    NARCIS (Netherlands)

    Collins, S.H.; Kuo, A.D.

    2010-01-01

Background: Humans normally dissipate significant energy during walking, largely at the transitions between steps. The ankle then acts to restore energy during push-off, which may be the reason that ankle impairment nearly always leads to poorer walking economy. The replacement of lost energy is necessary for steady gait, in which mechanical energy is constant on average, external dissipation is negligible, and no net work is performed over a stride.

  1. Handbook of probability

    CERN Document Server

    Florescu, Ionut

    2013-01-01

THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introduction.

  2. Evidence that multiple genetic variants of MC4R play a functional role in the regulation of energy expenditure and appetite in Hispanic children

    Science.gov (United States)

    Cole, Shelley A; Voruganti, V Saroja; Cai, Guowen; Haack, Karin; Kent, Jack W; Blangero, John; Comuzzie, Anthony G; McPherson, John D; Gibbs, Richard A

    2010-01-01

    Background: Melanocortin-4-receptor (MC4R) haploinsufficiency is the most common form of monogenic obesity; however, the frequency of MC4R variants and their functional effects in general populations remain uncertain. Objective: The aim was to identify and characterize the effects of MC4R variants in Hispanic children. Design: MC4R was resequenced in 376 parents, and the identified single nucleotide polymorphisms (SNPs) were genotyped in 613 parents and 1016 children from the Viva la Familia cohort. Measured genotype analysis (MGA) tested associations between SNPs and phenotypes. Bayesian quantitative trait nucleotide (BQTN) analysis was used to infer the most likely functional polymorphisms influencing obesity-related traits. Results: Seven rare SNPs in coding and 18 SNPs in flanking regions of MC4R were identified. MGA showed suggestive associations between MC4R variants and body size, adiposity, glucose, insulin, leptin, ghrelin, energy expenditure, physical activity, and food intake. BQTN analysis identified SNP 1704 in a predicted micro-RNA target sequence in the downstream flanking region of MC4R as a strong, probable functional variant influencing total, sedentary, and moderate activities with posterior probabilities of 1.0. SNP 2132 was identified as a variant with a high probability (1.0) of exerting a functional effect on total energy expenditure and sleeping metabolic rate. SNP rs34114122 was selected as having likely functional effects on the appetite hormone ghrelin, with a posterior probability of 0.81. Conclusion: This comprehensive investigation provides strong evidence that MC4R genetic variants are likely to play a functional role in the regulation of weight, not only through energy intake but through energy expenditure. PMID:19889825

  3. Recycling energy to restore impaired ankle function during human walking.

    Directory of Open Access Journals (Sweden)

    Steven H Collins

    Full Text Available BACKGROUND: Humans normally dissipate significant energy during walking, largely at the transitions between steps. The ankle then acts to restore energy during push-off, which may be the reason that ankle impairment nearly always leads to poorer walking economy. The replacement of lost energy is necessary for steady gait, in which mechanical energy is constant on average, external dissipation is negligible, and no net work is performed over a stride. However, dissipation and replacement by muscles might not be necessary if energy were instead captured and reused by an assistive device. METHODOLOGY/PRINCIPAL FINDINGS: We developed a microprocessor-controlled artificial foot that captures some of the energy that is normally dissipated by the leg and "recycles" it as positive ankle work. In tests on subjects walking with an artificially-impaired ankle, a conventional prosthesis reduced ankle push-off work and increased net metabolic energy expenditure by 23% compared to normal walking. Energy recycling restored ankle push-off to normal and reduced the net metabolic energy penalty to 14%. CONCLUSIONS/SIGNIFICANCE: These results suggest that reduced ankle push-off contributes to the increased metabolic energy expenditure accompanying ankle impairments, and demonstrate that energy recycling can be used to reduce such cost.

  4. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  5. Probability, Nondeterminism and Concurrency

    DEFF Research Database (Denmark)

    Varacca, Daniele

    Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particular...

  6. Janus-faced probability

    CERN Document Server

    Rocchi, Paolo

    2014-01-01

    The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems in order to prove that the various interpretations of probability do not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered as one of the most important contributions in the analysis of probability interpretation in the last 10-15 years.

  7. Some consideration on the relevance of concepts based on the probability distribution of words reminded by stimulus terms concerning atomic energy and radiation utilization

    International Nuclear Information System (INIS)

    Urabe, Itsumasa

    2002-01-01

    The relevance of concepts brought to mind by stimulus terms concerning atomic energy and radiation utilization has been investigated to learn how people understand the present status of nuclear technology. The relevance of concepts was defined as the frequency distribution of words that came to mind immediately after seeing selected terms needed for present-day life as well as for nuclear engineering. An analysis of knowledge structure shows that a concept of atomic energy has a close relation with that of electric power generation; an understanding of nuclear power utilization may be promoted in relation to an understanding of energy and environmental problems because the concepts of energy, atomic energy, electric power generation, and natural environment have closer relations with one another; a concept of radiation has various relations with harmful radiological health effects, but little relation with industrial, agricultural, and other beneficial uses except for nuclear power generation or medical applications. It also became clear from the investigation that studies on natural radiation may be important to promote an understanding of radiation utilization because a concept of the natural environment does not yet relate to that of natural radiation. (author)

  8. Symmetry Energy as a Function of Density and Mass

    International Nuclear Information System (INIS)

    Danielewicz, Pawel; Lee, Jenny

    2007-01-01

    Energy in nuclear matter is, in practice, completely characterized at different densities and asymmetries, when the density dependencies of symmetry energy and of energy of symmetric matter are specified. The density dependence of the symmetry energy at subnormal densities produces mass dependence of the nuclear symmetry coefficient and, thus, can be constrained by that latter dependence. We deduce values of the mass-dependent symmetry coefficients by using excitation energies to isobaric analog states. The coefficient systematic, for intermediate and high masses, is well described in terms of the symmetry coefficient values of a_a^V = (31.5-33.5) MeV for the volume coefficient and a_a^S = (9-12) MeV for the surface coefficient. These two further correspond to the parameter values describing the density dependence of symmetry energy, of L ∼ 95 MeV and K_sym ∼ 25 MeV
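
    For orientation, the quantities quoted above usually enter a standard expansion of the symmetry energy S(ρ) around the saturation density ρ_0, together with a volume-surface decomposition of the mass-dependent symmetry coefficient. The forms below are conventional ones assumed here as a reading aid, not quoted from the paper:

        S(\rho) \;\approx\; a_a^{V} \;+\; \frac{L}{3}\,\frac{\rho-\rho_0}{\rho_0}
                \;+\; \frac{K_{\mathrm{sym}}}{18}\left(\frac{\rho-\rho_0}{\rho_0}\right)^{2},
        \qquad
        a_a(A) \;\approx\; \frac{a_a^{V}}{1+\left(a_a^{V}/a_a^{S}\right)A^{-1/3}}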

  9. Local functional derivative of the total energy and the shell structure in atoms and molecules

    NARCIS (Netherlands)

    Pino, R.; Markvoort, Albert. J.; Santen, van R.A.; Hilbers, P.A.J.

    2003-01-01

    The full and local Thomas–Fermi–Dirac energy functional derivatives are evaluated at Hartree–Fock densities for several atoms and molecules. These functions are interpreted as local chemical potentials and related mainly to kinetic energy functional derivatives. They are able to reveal the shell structure...

  10. Impact parameter dependence of inner-shell ionization probabilities

    International Nuclear Information System (INIS)

    Cocke, C.L.

    1974-01-01

    The probability for ionization of an inner shell of a target atom by a heavy charged projectile is a sensitive function of the impact parameter characterizing the collision. This probability can be measured experimentally by detecting the x-ray resulting from radiative filling of the inner shell in coincidence with the projectile scattered at a determined angle, and by using the scattering angle to deduce the impact parameter. It is conjectured that the functional dependence of the ionization probability may be a more sensitive probe of the ionization mechanism than is a total cross section measurement. Experimental results for the K-shell ionization of both solid and gas targets by oxygen, carbon and fluorine projectiles in the MeV/amu energy range will be presented, and their use in illuminating the inelastic collision process discussed
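
    The deduction step mentioned above, from scattering angle to impact parameter, has a closed form for a pure Coulomb (Rutherford) trajectory. The sketch below uses that textbook relation; it is an illustration under assumed values, not the analysis of the record:

        import math

        # Rutherford relation between scattering angle and impact parameter:
        #   b = (d/2) * cot(theta/2),   d = Z1*Z2*e^2 / (4*pi*eps0*E),
        # where d is the head-on distance of closest approach and E the
        # centre-of-mass kinetic energy. Values below are illustrative.

        E2_MEV_FM = 1.44  # e^2/(4*pi*eps0) in MeV*fm

        def impact_parameter(z1: int, z2: int, energy_mev: float, theta_deg: float) -> float:
            """Impact parameter (fm) deduced from the scattering angle (degrees)."""
            d = z1 * z2 * E2_MEV_FM / energy_mev              # closest approach, fm
            return 0.5 * d / math.tan(math.radians(theta_deg) / 2.0)

        # Example: 16 MeV oxygen (Z=8) on copper (Z=29), scattered by 20 degrees
        print(impact_parameter(8, 29, 16.0, 20.0))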

  11. Probability elements of the mathematical theory

    CERN Document Server

    Heathcote, C R

    2000-01-01

    Designed for students studying mathematical statistics and probability after completing a course in calculus and real variables, this text deals with basic notions of probability spaces, random variables, distribution functions and generating functions, as well as joint distributions and the convergence properties of sequences of random variables. Includes worked examples and over 250 exercises with solutions.

  12. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  13. Atom probe tomography simulations and density functional theory calculations of bonding energies in Cu3Au

    KAUST Repository

    Boll, Torben; Zhu, Zhiyong; Al-Kassab, Talaat; Schwingenschlö gl, Udo

    2012-01-01

    In this article the Cu-Au binding energy in Cu3Au is determined by comparing experimental atom probe tomography (APT) results to simulations. The resulting bonding energy is supported by density functional theory calculations. The APT simulations

  14. Energy levels, oscillator strengths, line strengths, and transition probabilities in Si-like ions of La XLIII, Er LIV, Tm LV, and Yb LVI

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zhan-Bin, E-mail: chenzb008@qq.com [College of Science, National University of Defense Technology, Changsha, Hunan 410073 (China); Ma, Kun [School of Information Engineering, Huangshan University, Huangshan 245041 (China); Wang, Hong-Jian [Chongqing Key Laboratory for Design and Control of Manufacturing Equipment, Chongqing Technology and Business University, Chongqing 40067 (China); Wang, Kai, E-mail: wangkai@hbu.edu.cn [Hebei Key Lab of Optic-electronic Information and Materials, The College of Physics Science and Technology, Hebei University, Baoding 071002 (China); Liu, Xiao-Bin [Department of Physics, Tianshui Normal University, Tianshui 741001 (China); Zeng, Jiao-Long [College of Science, National University of Defense Technology, Changsha, Hunan 410073 (China)

    2017-01-15

    Detailed calculations using the multi-configuration Dirac–Fock (MCDF) method are carried out for the lowest 64 fine-structure levels of the 3s²3p², 3s²3p3d, 3s3p³, 3s3p²3d, 3s²3d², and 3p⁴ configurations in Si-like ions of La XLIII, Er LIV, Tm LV, and Yb LVI. Energies, oscillator strengths, wavelengths, line strengths, and radiative electric dipole transition rates are given for all ions. A parallel calculation using the many-body perturbation theory (MBPT) method is also carried out to assess the accuracy of the present energy levels. Comparisons are performed between these two sets of energy levels, as well as with other available results, showing that they are in good agreement with each other within 0.5%. These high-accuracy results can be used in the modeling and interpretation of astrophysical objects and fusion plasmas. - Highlights: • Energy levels and E1 transition rates of Si-like ions are presented. • Breit interaction and Quantum Electrodynamics effects are discussed. • Present results should be useful in astrophysical applications and plasma modeling.

  15. Functional Modeling of Perspectives on the Example of Electric Energy Systems

    DEFF Research Database (Denmark)

    Heussen, Kai

    2009-01-01

    The integration of energy systems is a proven approach to gain higher overall energy efficiency. Invariably, this integration will come with increasing technical complexity through the diversification of energy resources and their functionality. With the integration of more fluctuating renewable energies, higher system flexibility will also be necessary. One of the challenges ahead is the design of control architecture to enable the flexibility and to handle the diversity. This paper presents an approach to model heterogeneous energy systems and their control on the basis of purpose and functions, which enables a reflection on system integration requirements independent of particular technologies. The results are illustrated on examples related to electric energy systems.

  16. Probability and Measure

    CERN Document Server

    Billingsley, Patrick

    2012-01-01

    Praise for the Third Edition "It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this

  17. The concept of probability

    International Nuclear Information System (INIS)

    Bitsakis, E.I.; Nicolaides, C.A.

    1989-01-01

    The concept of probability is now, and always has been, central to the debate on the interpretation of quantum mechanics. Furthermore, probability permeates all of science, as well as our everyday life. The papers included in this volume, written by leading proponents of the ideas expressed, embrace a broad spectrum of thought and results: mathematical, physical, epistemological, and experimental, both specific and general. The contributions are arranged in parts under the following headings: Following Schroedinger's thoughts; Probability and quantum mechanics; Aspects of the arguments on nonlocality; Bell's theorem and EPR correlations; Real or Gedanken experiments and their interpretation; Questions about irreversibility and stochasticity; and Epistemology, interpretation and culture. (author). refs.; figs.; tabs

  18. Analysis Of Functional Stability Of The Triphased Asynchronous Generator Used In Conversion Systems Of A Eolian Energy Into Electric Energy

    Directory of Open Access Journals (Sweden)

    Ion VONCILA

    2003-12-01

    Full Text Available This paper presents a study of the influence of the main perturbation agents on the functional stability of the triphased asynchronous generator (in its two alternatives: with wound rotor and with short-circuited rotor), used in systems that convert eolian energy into electric energy.

  19. Utilization Probability Map for Migrating Bald Eagles in Northeastern North America: A Tool for Siting Wind Energy Facilities and Other Flight Hazards.

    Directory of Open Access Journals (Sweden)

    Elizabeth K Mojica

    Full Text Available Collisions with anthropogenic structures are a significant and well documented source of mortality for avian species worldwide. The bald eagle (Haliaeetus leucocephalus) is known to be vulnerable to collision with wind turbines and federal wind energy guidelines include an eagle risk assessment for new projects. To address the need for risk assessment, in this study, we 1) identified areas of northeastern North America utilized by migrating bald eagles, and 2) compared these with high wind-potential areas to identify potential risk of bald eagle collision with wind turbines. We captured and marked 17 resident and migrant bald eagles in the northern Chesapeake Bay between August 2007 and May 2009. We produced utilization distribution (UD) surfaces for 132 individual migration tracks using a dynamic Brownian bridge movement model and combined these to create a population wide UD surface with a 1 km cell size. We found eagle migration movements were concentrated within two main corridors along the Appalachian Mountains and the Atlantic Coast. Of the 3,123 wind turbines ≥100 m in height in the study area, 38% were located in UD 20, and 31% in UD 40. In the United States portion of the study area, commercially viable wind power classes overlapped with only 2% of the UD category 20 (i.e., the areas of highest use by migrating eagles) and 4% of UD category 40. This is encouraging because it suggests that wind energy development can still occur in the study area at sites that are most viable from a wind power perspective and are unlikely to cause significant mortality of migrating eagles. In siting new turbines, wind energy developers should avoid the high-use migration corridors (UD categories 20 & 40) and focus new wind energy projects on lower-risk areas (UD categories 60-100).

  20. Utilization Probability Map for Migrating Bald Eagles in Northeastern North America: A Tool for Siting Wind Energy Facilities and Other Flight Hazards.

    Science.gov (United States)

    Mojica, Elizabeth K; Watts, Bryan D; Turrin, Courtney L

    2016-01-01

    Collisions with anthropogenic structures are a significant and well documented source of mortality for avian species worldwide. The bald eagle (Haliaeetus leucocephalus) is known to be vulnerable to collision with wind turbines and federal wind energy guidelines include an eagle risk assessment for new projects. To address the need for risk assessment, in this study, we 1) identified areas of northeastern North America utilized by migrating bald eagles, and 2) compared these with high wind-potential areas to identify potential risk of bald eagle collision with wind turbines. We captured and marked 17 resident and migrant bald eagles in the northern Chesapeake Bay between August 2007 and May 2009. We produced utilization distribution (UD) surfaces for 132 individual migration tracks using a dynamic Brownian bridge movement model and combined these to create a population wide UD surface with a 1 km cell size. We found eagle migration movements were concentrated within two main corridors along the Appalachian Mountains and the Atlantic Coast. Of the 3,123 wind turbines ≥100 m in height in the study area, 38% were located in UD 20, and 31% in UD 40. In the United States portion of the study area, commercially viable wind power classes overlapped with only 2% of the UD category 20 (i.e., the areas of highest use by migrating eagles) and 4% of UD category 40. This is encouraging because it suggests that wind energy development can still occur in the study area at sites that are most viable from a wind power perspective and are unlikely to cause significant mortality of migrating eagles. In siting new turbines, wind energy developers should avoid the high-use migration corridors (UD categories 20 & 40) and focus new wind energy projects on lower-risk areas (UD categories 60-100).
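
    The siting guidance above amounts to a lookup of each turbine's UD category and a flag on the high-use corridors. A schematic sketch of that screen, with turbine identifiers and category lookups invented for illustration:

        # Flag turbines that fall in the high-use migration corridors
        # (UD categories 20 and 40). Turbine ids and the UD category at
        # each turbine's 1-km cell are hypothetical placeholders.

        turbine_ud_category = {
            "T001": 20, "T002": 40, "T003": 80, "T004": 100, "T005": 20,
        }

        HIGH_RISK = {20, 40}  # areas of highest use by migrating eagles

        flagged = [t for t, ud in turbine_ud_category.items() if ud in HIGH_RISK]
        share = len(flagged) / len(turbine_ud_category)
        print(f"{share:.0%} of turbines sit in high-use corridors: {flagged}")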

  1. Concepts of probability theory

    CERN Document Server

    Pfeiffer, Paul E

    1979-01-01

    Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or math. Includes problems with answers and six appendixes. 1965 edition.

  2. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  3. Probability and Statistical Inference

    OpenAIRE

    Prosper, Harrison B.

    2006-01-01

    These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

  4. Probabilities in physics

    CERN Document Server

    Hartmann, Stephan

    2011-01-01

    Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fields...

  5. Probability an introduction

    CERN Document Server

    Grimmett, Geoffrey

    2014-01-01

    Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit theorem.

  6. Probability in physics

    CERN Document Server

    Hemmo, Meir

    2012-01-01

    What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton's laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world's foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive.

  7. The energy rebound effects across China’s industrial sectors: An output distance function approach

    International Nuclear Information System (INIS)

    Li, Ke; Zhang, Ning; Liu, Yanchu

    2016-01-01

    Highlights: • Output distance function for the energy rebound effect is developed. • The aggregate energy rebound effect of China is 88.42%. • Investment-driven economic growth is not conducive to energy-saving. - Abstract: Sustained improvement of energy efficiency is a target of the Chinese government. However, the effectiveness of energy conservation policy is affected by the energy rebound effect, under which energy efficiency improvement reduces the effective price of energy services, thereby completely or partially offsetting the energy saved by the efficiency improvement. Based on the output distance function, this paper develops an improved estimation model of the energy rebound effect, which is logically consistent with the quantities of energy savings and energy rebounds induced by technological progress. Results show that the aggregate energy rebound effect of 36 industrial sectors in China over 1998–2011 is 88.42%, which implies that most of the expected energy savings are offset. Investment-driven economic growth is not conducive to energy-saving and results in a strong energy rebound effect in the following year. The equipment and high-end manufacturing sectors have low levels of rebound effect, indicating that increasing the proportion of such firms in the total manufacturing sector can improve the performance of energy conservation. The high level and heterogeneity of rebound effects strongly suggest that varied strategies are necessary for energy conservation among China's industrial sectors.
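
    A common definition of the rebound effect, assumed here as a reading aid (the paper derives its own estimator from the output distance function), is the share of expected energy savings that fails to materialize:

        # Rebound effect as the share of expected savings that is offset:
        #   RE = (expected - actual) / expected.
        # A value of 0.8842 means ~88% of expected savings never appear.
        # The numbers below are illustrative, not the paper's data.

        def rebound_effect(expected_savings: float, actual_savings: float) -> float:
            return (expected_savings - actual_savings) / expected_savings

        # Example: efficiency gains should save 100 PJ, but only 11.58 PJ materialize
        print(f"{rebound_effect(100.0, 11.58):.2%}")  # 88.42%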

  8. Probability in quantum mechanics

    Directory of Open Access Journals (Sweden)

    J. G. Gilson

    1982-01-01

    Full Text Available By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.

  9. Quantum computing and probability.

    Science.gov (United States)

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  10. Quantum computing and probability

    International Nuclear Information System (INIS)

    Ferry, David K

    2009-01-01

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction. (viewpoint)

  11. The promotion and control functions of atomic energy law

    International Nuclear Information System (INIS)

    Roser, T.

    1998-01-01

    The question about the purpose of atomic energy law may sound superfluous in Germany, a country where a highly differential legal framework for the peaceful utilization of nuclear power has existed for nearly 40 years in the Basic Law, the Atomic Energy Act, and its ordinances, and a comprehensive body of case laws. Yet, it is justified in view of the declared intention of the German federal government to establish an environmental code into which atomic energy law, hitherto an independent branch of the law, would be integrated, and it is justified also in view of persistent complaints that the present rules and regulations stifled investment activities. A look into some codes of law may help answer the question. Already in 1959, the authors of the Atomic Energy Act outlined the purposes of the legislation in relatively clear terms in Section 1. Besides the two foreign policy aspects of security and loyalty under treaties, which do not concern us in this connection, the key purposes of atomic energy law are stated there as promotion and protection. The protection purpose, which implies the need to protect life, health, and property from the hazards of nuclear energy and harmful effects of ionizing radiation, ranks second in the Act. In accordance with the ruling in 1972 of the Federal Administrative Court, however, it should rank at the top. (orig.) [de

  12. Probable approaches to develop particle beam energy drivers and to calculate wall material ablation with X ray radiation from imploded targets

    International Nuclear Information System (INIS)

    Kasuya, K.; Funatsu, M.; Saitoh, S.

    2001-01-01

    The first subject was the development of future ion beam driver with medium-mass ion specie. This may enable us to develop a compromised driver from the point of view of the micro-divergence angle and the cost. We produced nitrogen ion beams, and measured the micro-divergence angle on the anode surface. The measured value was 5-6mrad for the above beam with 300-400keV energy, 300A peak current and 50ns duration. This value was enough small and tolerable for the future energy driver. The corresponding value for the proton beam with higher peak current was 20-30mrad, which was too large. So that, the scale-up experiment with the above kind of medium-mass ion beam must be realized urgently to clarify the beam characteristics in more details. The reactor wall ablation with the implosion X-ray was also calculated as the second subject in this paper. (author)

  13. A scenario analysis of future energy systems based on an energy flow model represented as functionals of technology options

    International Nuclear Information System (INIS)

    Kikuchi, Yasunori; Kimura, Seiichiro; Okamoto, Yoshitaka; Koyama, Michihisa

    2014-01-01

    Highlights: • Energy flow model was represented as the functionals of technology options. • Relationships among available technologies can be visualized by developed model. • Technology roadmapping can be incorporated into the model as technical scenario. • Combination of technologies can increase their contribution to the environment. - Abstract: The design of energy systems has become an issue all over the world. A single optimal system cannot be suggested because the availability of infrastructure and resources and the acceptability of the system should be discussed locally, involving all related stakeholders in the energy system. In particular, researchers and engineers of technologies related to energy systems should be able to perform the forecasting and roadmapping of future energy systems and indicate quantitative results of scenario analyses. We report an energy flow model developed for analysing scenarios of future Japanese energy systems implementing a variety of feasible technology options. The model was modularized and represented as functionals of appropriate technology options, which enables the aggregation and disaggregation of energy systems by defining functionals for single technologies, packages integrating multi-technologies, and mini-systems such as regions implementing industrial symbiosis. Based on the model, the combinations of technologies on both energy supply and demand sides can be addressed considering not only the societal scenarios such as resource prices, economic growth and population change but also the technical scenarios including the development and penetration of energy-related technologies such as distributed solid oxide fuel cells in residential sectors and new-generation vehicles, and the replacement and shift of current technologies such as heat pumps for air conditioning and centralized power generation. The developed model consists of two main modules; namely, a power generation dispatching module for the
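
    One way to picture the "functionals of technology options" described above is as technology modules that each transform an energy-flow vector, with packages and regional mini-systems obtained by composition. A toy sketch under that reading; all technology names and efficiencies are invented:

        # Toy rendering of "functionals of technology options": each technology
        # is a function transforming an energy-flow dict, and a package is a
        # composition of such functions. Names and efficiencies are invented.

        from functools import reduce

        def heat_pump(flows):             # electricity -> residential heat, COP 4
            elec = flows.pop("electricity_for_heat", 0.0)
            flows["residential_heat"] = flows.get("residential_heat", 0.0) + 4.0 * elec
            return flows

        def fuel_cell(flows):             # gas -> electricity + heat (SOFC-like)
            gas = flows.pop("gas", 0.0)
            flows["electricity"] = flows.get("electricity", 0.0) + 0.5 * gas
            flows["residential_heat"] = flows.get("residential_heat", 0.0) + 0.3 * gas
            return flows

        def package(*technologies):       # aggregate technologies into one functional
            return lambda flows: reduce(lambda f, tech: tech(f), technologies, flows)

        residential = package(fuel_cell, heat_pump)
        print(residential({"gas": 10.0, "electricity_for_heat": 2.0}))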

  14. A turbulent time scale based k–ε model for probability density function modeling of turbulence/chemistry interactions: Application to HCCI combustion

    International Nuclear Information System (INIS)

    Maroteaux, Fadila; Pommier, Pierre-Lin

    2013-01-01

    Highlights: ► Turbulent time evolution is introduced in the stochastic modeling approach. ► The number of particles is optimized through a restricted initial distribution. ► The amplitude of the initial distribution is modeled by the magnitude of the turbulence field. -- Abstract: Homogeneous Charge Compression Ignition (HCCI) engine technology is known as an alternative to reduce NOx and particulate matter (PM) emissions. As shown by several experimental studies published in the literature, the ideally homogeneous mixture charge becomes stratified in composition and temperature, and turbulent mixing is found to play an important role in controlling the combustion progress. In a previous study, an IEM model (Interaction by Exchange with the Mean) was used to describe the micromixing in a stochastic reactor model that simulates the HCCI process. The IEM model is a deterministic model, based on the principle that the scalar value approaches the mean value over the entire volume with a characteristic mixing time. In this previous model, the turbulent time scale was treated as a fixed parameter. The present study focuses on the development of a micromixing time model, in order to take into account the physical phenomena it stands for. For that purpose, a (k–ε) model is used to express this micromixing time. The turbulence model used here is based on a zero-dimensional energy cascade applied during the compression and expansion strokes: mean kinetic energy is converted to turbulent kinetic energy, and turbulent kinetic energy is converted to heat through viscous dissipation. In addition, a relation for calculating the amplitude of the initial heterogeneities is proposed. The comparison of simulation results against experimental data shows overall satisfactory agreement with a variable turbulent time scale
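
    In the IEM closure sketched above, each notional particle's scalar relaxes toward the ensemble mean at a rate set by the turbulent frequency ε/k. A minimal illustration of one such micromixing step; the model constant and all field values are assumptions for the example, not the paper's calibration:

        import numpy as np

        # Minimal IEM micromixing step: every notional particle's scalar phi
        # relaxes toward the ensemble mean at a rate tied to the turbulent
        # frequency eps/k from the k-epsilon model:
        #     dphi/dt = -(C_phi / 2) * (eps / k) * (phi - <phi>)
        # C_phi and all field values below are illustrative assumptions.

        C_PHI = 2.0

        def iem_step(phi: np.ndarray, k: float, eps: float, dt: float) -> np.ndarray:
            return phi - 0.5 * C_PHI * (eps / k) * (phi - phi.mean()) * dt

        rng = np.random.default_rng(0)
        phi = rng.normal(1000.0, 50.0, size=10_000)  # e.g. particle temperatures, K
        for _ in range(100):
            phi = iem_step(phi, k=5.0, eps=50.0, dt=1e-4)
        print(phi.std())  # scalar variance decays as micromixing proceeds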

  15. Irreversibility and conditional probability

    International Nuclear Information System (INIS)

    Stuart, C.I.J.M.

    1989-01-01

    The mathematical entropy - unlike physical entropy - is simply a measure of uniformity for probability distributions in general. So understood, conditional entropies have the same logical structure as conditional probabilities. If, as is sometimes supposed, conditional probabilities are time-reversible, then so are conditional entropies and, paradoxically, both then share this symmetry with physical equations of motion. The paradox is, of course, that probabilities yield a direction to time both in statistical mechanics and quantum mechanics, while the equations of motion do not. The supposed time-reversibility of both conditionals seems also to involve a form of retrocausality that is related to, but possibly not the same as, that described by Costa de Beauregard. The retrocausality is paradoxically at odds with the generally presumed irreversibility of the quantum mechanical measurement process. Further paradox emerges if the supposed time-reversibility of the conditionals is linked with the idea that the thermodynamic entropy is the same thing as 'missing information', since this confounds the thermodynamic and mathematical entropies. However, it is shown that irreversibility is a formal consequence of conditional entropies and, hence, of conditional probabilities also. 8 refs. (Author)

  16. The pleasures of probability

    CERN Document Server

    Isaac, Richard

    1995-01-01

    The ideas of probability are all around us. Lotteries, casino gambling, the almost non-stop polling which seems to mold public policy more and more: these are a few of the areas where principles of probability impinge in a direct way on the lives and fortunes of the general public. At a more removed level there is modern science which uses probability and its offshoots like statistics and the theory of random processes to build mathematical descriptions of the real world. In fact, twentieth-century physics, in embracing quantum mechanics, has a world view that is at its core probabilistic in nature, contrary to the deterministic one of classical physics. In addition to all this muscular evidence of the importance of probability ideas it should also be said that probability can be lots of fun. It is a subject where you can start thinking about amusing, interesting, and often difficult problems with very little mathematical background. In this book, I wanted to introduce a reader with at least a fairly...

  17. Experimental Probability in Elementary School

    Science.gov (United States)

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  18. Department of Energy Emergency Management Functional Requirements Study

    International Nuclear Information System (INIS)

    1987-05-01

    This Study, the Emergency Management Functional Requirements Study (EMFRS), identifies the physical environment, information resources, and equipment required in the DOE Headquarters Emergency Operations Center (EOC) to support the DOE staff in managing an emergency. It is the first step toward converting the present Forrestal EOC into a practical facility that will function well in each of the highly diverse types of emergencies in which the Department could be involved. 2 figs

  19. Improving Ranking Using Quantum Probability

    OpenAIRE

    Melucci, Massimo

    2011-01-01

    The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability, provided a given probability of ...

  20. Genetic modulation of energy metabolism in birds through mitochondrial function

    NARCIS (Netherlands)

    Tieleman, B. Irene; Versteegh, Maaike A.; Fries, Anthony; Helm, Barbara; Dingemanse, Niels J.; Gibbs, H. Lisle; Williams, Joseph B.

    2009-01-01

    Despite their central importance for the evolution of physiological variation, the genetic mechanisms that determine energy expenditure in animals have largely remained unstudied. We used quantitative genetics to confirm that both mass-specific and whole-organism basal metabolic rate (BMR) were

  1. Ground state energy and wave function of an off-centre donor in spherical core/shell nanostructures: Dielectric mismatch and impurity position effects

    Energy Technology Data Exchange (ETDEWEB)

    Ibral, Asmaa [Equipe d’Optique et Electronique du Solide, Département de Physique, Faculté des Sciences, Université Chouaïb Doukkali, B.P. 20 El Jadida Principale, El Jadida 24000 (Morocco); Laboratoire d’Instrumentation, Mesure et Contrôle, Département de Physique, Université Chouaïb Doukkali, B.P. 20 El Jadida Principale, El Jadida (Morocco); Zouitine, Asmae [Département de Physique, Ecole Nationale Supérieure d’Enseignement Technique, Université Mohammed V Souissi, B.P. 6207 Rabat-Instituts, Rabat (Morocco); Assaid, El Mahdi, E-mail: eassaid@yahoo.fr [Equipe d’Optique et Electronique du Solide, Département de Physique, Faculté des Sciences, Université Chouaïb Doukkali, B.P. 20 El Jadida Principale, El Jadida 24000 (Morocco); Laboratoire d’Instrumentation, Mesure et Contrôle, Département de Physique, Université Chouaïb Doukkali, B.P. 20 El Jadida Principale, El Jadida (Morocco); Feddi, El Mustapha [Département de Physique, Ecole Nationale Supérieure d’Enseignement Technique, Université Mohammed V Souissi, B.P. 6207 Rabat-Instituts, Rabat (Morocco); and others

    2014-09-15

    Ground state energy and wave function of a hydrogen-like off-centre donor impurity, confined anywhere in a ZnS/CdSe spherical core/shell nanostructure, are determined in the framework of the envelope function approximation. Conduction band-edge alignment between core and shell of the nanostructure is described by a finite height barrier. Dielectric constant mismatch at the surface where core and shell materials meet is taken into account. Electron effective mass mismatch at the inner surface between core and shell is considered. A trial wave function that accounts for the Coulomb attraction between the electron and the off-centre ionized donor is used to calculate the ground state energy via the Ritz variational principle. The numerical approach developed enables access to the dependence of binding energy, Coulomb correlation parameter, spatial extension and radial probability density with respect to core radius, shell radius and impurity position inside the ZnS/CdSe core/shell nanostructure.

  2. Ground state energy and wave function of an off-centre donor in spherical core/shell nanostructures: Dielectric mismatch and impurity position effects

    International Nuclear Information System (INIS)

    Ibral, Asmaa; Zouitine, Asmae; Assaid, El Mahdi; Feddi, El Mustapha

    2014-01-01

    Ground state energy and wave function of a hydrogen-like off-centre donor impurity, confined anywhere in a ZnS/CdSe spherical core/shell nanostructure, are determined in the framework of the envelope function approximation. Conduction band-edge alignment between core and shell of the nanostructure is described by a finite height barrier. Dielectric constant mismatch at the surface where core and shell materials meet is taken into account. Electron effective mass mismatch at the inner surface between core and shell is considered. A trial wave function that accounts for the Coulomb attraction between the electron and the off-centre ionized donor is used to calculate the ground state energy via the Ritz variational principle. The numerical approach developed enables access to the dependence of binding energy, Coulomb correlation parameter, spatial extension and radial probability density with respect to core radius, shell radius and impurity position inside the ZnS/CdSe core/shell nanostructure

  3. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.

    2014-01-01

    either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...

  4. Introduction to imprecise probabilities

    CERN Document Server

    Augustin, Thomas; de Cooman, Gert; Troffaes, Matthias C M

    2014-01-01

    In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, including...

  5. Classic Problems of Probability

    CERN Document Server

    Gorroochurn, Prakash

    2012-01-01

    "A great book, one that I will certainly add to my personal library."—Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexin

  6. The probability representation as a new formulation of quantum mechanics

    International Nuclear Information System (INIS)

    Man'ko, Margarita A; Man'ko, Vladimir I

    2012-01-01

    We present a new formulation of conventional quantum mechanics, in which the notion of a quantum state is identified via a fair probability distribution of the position measured in a reference frame of the phase space with rotated axes. In this formulation, the quantum evolution equation as well as the equation for finding energy levels are expressed as linear equations for the probability distributions that determine the quantum states. We also give the integral transforms relating the probability distribution (called the tomographic-probability distribution or the state tomogram) to the density matrix and the Wigner function and discuss their connection with the Radon transform. Qudit states are considered and the invertible map of the state density operators onto the probability vectors is discussed. The tomographic entropies and entropic uncertainty relations are reviewed. We demonstrate the uncertainty relations for the position and momentum and the entropic uncertainty relations in the tomographic-probability representation, which is suitable for an experimental check of the uncertainty relations.

  7. Energy and energy gradient matrix elements with N-particle explicitly correlated complex Gaussian basis functions with L =1

    Science.gov (United States)

    Bubin, Sergiy; Adamowicz, Ludwik

    2008-03-01

    In this work we consider explicitly correlated complex Gaussian basis functions for expanding the wave function of an N-particle system with the L =1 total orbital angular momentum. We derive analytical expressions for various matrix elements with these basis functions including the overlap, kinetic energy, and potential energy (Coulomb interaction) matrix elements, as well as matrix elements of other quantities. The derivatives of the overlap, kinetic, and potential energy integrals with respect to the Gaussian exponential parameters are also derived and used to calculate the energy gradient. All the derivations are performed using the formalism of the matrix differential calculus that facilitates a way of expressing the integrals in an elegant matrix form, which is convenient for the theoretical analysis and the computer implementation. The new method is tested in calculations of two systems: the lowest P state of the beryllium atom and the bound P state of the positronium molecule (with the negative parity). Both calculations yielded new, lowest-to-date, variational upper bounds, while the number of basis functions used was significantly smaller than in previous studies. It was possible to accomplish this due to the use of the analytic energy gradient in the minimization of the variational energy.

  8. Energy and energy gradient matrix elements with N-particle explicitly correlated complex Gaussian basis functions with L=1.

    Science.gov (United States)

    Bubin, Sergiy; Adamowicz, Ludwik

    2008-03-21

    In this work we consider explicitly correlated complex Gaussian basis functions for expanding the wave function of an N-particle system with the L=1 total orbital angular momentum. We derive analytical expressions for various matrix elements with these basis functions including the overlap, kinetic energy, and potential energy (Coulomb interaction) matrix elements, as well as matrix elements of other quantities. The derivatives of the overlap, kinetic, and potential energy integrals with respect to the Gaussian exponential parameters are also derived and used to calculate the energy gradient. All the derivations are performed using the formalism of the matrix differential calculus that facilitates a way of expressing the integrals in an elegant matrix form, which is convenient for the theoretical analysis and the computer implementation. The new method is tested in calculations of two systems: the lowest P state of the beryllium atom and the bound P state of the positronium molecule (with the negative parity). Both calculations yielded new, lowest-to-date, variational upper bounds, while the number of basis functions used was significantly smaller than in previous studies. It was possible to accomplish this due to the use of the analytic energy gradient in the minimization of the variational energy.
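
    As a reading aid for the variational machinery in the two records above, here is the simplest possible analogue: a one-parameter Gaussian trial function for the hydrogen atom (atomic units), minimized by gradient descent using the analytic energy gradient. The formulas are the standard textbook ones; the step size and iteration count are arbitrary choices:

        import math

        # One-parameter Ritz minimization with an analytic gradient, for the
        # simplest possible case: a single Gaussian exp(-a r^2) as trial
        # ground state of the hydrogen atom (atomic units).
        #   E(a)  = 3a/2 - 2*sqrt(2a/pi)
        #   E'(a) = 3/2  -   sqrt(2/(pi*a))

        def energy(a: float) -> float:
            return 1.5 * a - 2.0 * math.sqrt(2.0 * a / math.pi)

        def gradient(a: float) -> float:
            return 1.5 - math.sqrt(2.0 / (math.pi * a))

        a, step = 1.0, 0.1                 # initial exponent and learning rate
        for _ in range(200):               # simple gradient descent
            a -= step * gradient(a)

        print(a, energy(a))  # a -> 8/(9*pi) ~ 0.283, E -> -4/(3*pi) ~ -0.4244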

  9. Functional zinc oxide nanostructures for electronic and energy applications

    Science.gov (United States)

    Prasad, Abhishek

    ZnO has proven to be a multifunctional material with important nanotechnological applications. ZnO nanostructures can be grown in various forms such as nanowires, nanorods, nanobelts, nanocombs, etc. In this work, ZnO nanostructures are grown in a double quartz tube configuration thermal Chemical Vapor Deposition (CVD) system. We focus on functionalized ZnO nanostructures, controlling their structures and tuning their properties for various applications. The following topics have been investigated: (1) We have fabricated various ZnO nanostructures using a thermal CVD technique. The growth parameters were optimized and studied for different nanostructures. (2) We have studied the application of ZnO nanowires (ZnO NWs) for field effect transistors (FETs). Unintentional n-type conductivity was observed in our FETs based on as-grown ZnO NWs. We have then shown for the first time that controlled incorporation of hydrogen into ZnO NWs can introduce p-type character to the nanowires. We further found that the n-type behaviors remained, leading to ambipolar behavior of hydrogen-incorporated ZnO NWs. Importantly, the detected p- and n-type behaviors remained stable for longer than two years when devices were kept in ambient conditions. All this can be explained by an ab initio model of Zn vacancy-hydrogen complexes, which can serve as donors, acceptors, or green photoluminescence quenchers, depending on the number of hydrogen atoms involved. (3) Next, ZnO NWs were tested for electron field emission. We focus on reducing the threshold field (Eth) of field emission from non-aligned ZnO NWs. Encouraged by our results on enhancing the conductivity of ZnO NWs by hydrogen annealing described in Chapter 3, we have studied the effect of hydrogen annealing on the field emission behavior of our ZnO NWs. We found that optimally annealed ZnO NWs offered a much lower threshold electric field and improved emission stability. We also studied field emission from ZnO NWs at moderate

  10. Energy Availability and Reproductive Function in Female Endurance Athletes

    DEFF Research Database (Denmark)

    Melin, Anna Katarina

    and reduced EA, as well as those with oligomenorrhea/FHA, had lower RMR compared to those with either current optimal EA or eumenorrheic athletes. Furthermore, athletes with secondary FHA had increased work efficiency compared to eumenorrheic subjects, indicating a more profound metabolic adaptation in female athletes with clinical menstrual dysfunction. All three Triad conditions were common in this group of athletes, despite a normal BMI range and body composition. Furthermore, issues and physiological symptoms related to current low and reduced EA and oligomenorrhea/FHA were not limited to impaired bone health, but also included hypoglycaemia, hypercholesterolemia, and hypotension. The results indicated that diets lower in energy density, fat content, compact carbohydrate-rich foods and energy-containing drinks, together with higher fibre content, were associated with current low and reduced EA...

  11. The role of dual-energy computed tomography in the assessment of pulmonary function

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Hye Jeon [Department of Radiology, Hallym University College of Medicine, Hallym University Sacred Heart Hospital, 22, Gwanpyeong-ro 170beon-gil, Dongan-gu, Anyang-si, Gyeonggi-do 431-796 (Korea, Republic of); Hoffman, Eric A. [Departments of Radiology, Medicine, and Biomedical Engineering, University of Iowa, 200 Hawkins Dr, CC 701 GH, Iowa City, IA 52241 (United States); Lee, Chang Hyun; Goo, Jin Mo [Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Levin, David L. [Department of Radiology, Mayo Clinic College of Medicine, 200 First Street, SW, Rochester, MN 55905 (United States); Kauczor, Hans-Ulrich [Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (DZL), Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Seo, Joon Beom, E-mail: seojb@amc.seoul.kr [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 388-1, Pungnap 2-dong, Songpa-ku, Seoul, 05505 (Korea, Republic of)

    2017-01-15

    Highlights: • The dual-energy CT technique enables the differentiation of contrast materials with a material decomposition algorithm. • Pulmonary functional information can be evaluated using dual-energy CT together with anatomic CT information, simultaneously. • Pulmonary functional information from dual-energy CT can improve the diagnosis and severity assessment of diseases. - Abstract: The assessment of pulmonary function, including ventilation and perfusion status, is important in addition to the evaluation of structural changes of the lung parenchyma in various pulmonary diseases. The dual-energy computed tomography (DECT) technique can provide pulmonary functional information and high-resolution anatomic information simultaneously. The application of DECT for the evaluation of pulmonary function has been investigated in various pulmonary diseases, such as pulmonary embolism, asthma and chronic obstructive lung disease. In this review article, we present the principles and technical aspects of DECT, along with clinical applications for the assessment of pulmonary function in various lung diseases.
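
    At its simplest, the material decomposition that DECT relies on is a per-voxel 2×2 linear solve: attenuation measured at the two tube energies is modeled as a linear mix of two basis materials. A schematic sketch; the coefficients are invented for illustration, not calibrated values:

        import numpy as np

        # Two-material decomposition per voxel: attenuation measured at low
        # and high tube energy is modeled as a linear mix of two basis
        # materials (e.g. soft tissue and iodinated contrast). Coefficients
        # are invented placeholders, not calibrated values.

        A = np.array([[0.20, 4.90],      # mu at low  kVp: [tissue, iodine]
                      [0.18, 2.30]])     # mu at high kVp: [tissue, iodine]

        measured = np.array([0.45, 0.29])   # voxel attenuation at (low, high)
        tissue, iodine = np.linalg.solve(A, measured)
        print(f"tissue fraction ~ {tissue:.2f}, iodine content ~ {iodine:.3f}")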

  12. Formation energies of rutile metal dioxides using density functional theory

    DEFF Research Database (Denmark)

    Martinez, Jose Ignacio; Hansen, Heine Anton; Rossmeisl, Jan

    2009-01-01

    We apply standard density functional theory at the generalized gradient approximation (GGA) level to study the stability of rutile metal oxides. It is well known that standard GGA exchange and correlation in some cases is not sufficient to address reduction and oxidation reactions. Especially ... and due to a more accurate description of exchange for this particular GGA functional compared to PBE. Furthermore, we would expect the self-interaction problem to be largest for the most localized d orbitals; that means the late 3d metals, and since Co, Fe, Ni, and Cu do not form rutile oxides...

  13. Orthogonal Algorithm of Logic Probability and Syndrome-Testable Analysis

    Institute of Scientific and Technical Information of China (English)

    1990-01-01

    A new method, the orthogonal algorithm, is presented to compute logic probabilities (i.e., signal probabilities) accurately. The transfer properties of logic probabilities are studied first, which are useful for calculating the logic probability of a circuit with random independent inputs. The orthogonal algorithm is then described to compute the logic probability of a Boolean function realized by a combinational circuit. This algorithm makes the Boolean function "orthogonal", so that the logic probabilities can be easily calculated by summing up the logic probabilities of all orthogonal terms of the Boolean function.
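
    The property the abstract relies on: once a Boolean function is rewritten as a sum of orthogonal (pairwise-disjoint) product terms, its signal probability is the plain sum of the term probabilities, each term being a product over independent inputs. A minimal sketch under those assumptions, with an invented example function:

        # Signal probability of a Boolean function given as orthogonal
        # (mutually disjoint) product terms. Each term maps input name ->
        # required value; with independent inputs, P(term) is a product
        # and P(f) a plain sum. The example function is illustrative.

        def term_probability(term: dict[str, bool], p: dict[str, float]) -> float:
            prob = 1.0
            for var, positive in term.items():
                prob *= p[var] if positive else (1.0 - p[var])
            return prob

        def signal_probability(orthogonal_terms, p):
            return sum(term_probability(t, p) for t in orthogonal_terms)

        # f = a*b + a'*c  (disjoint terms: one requires a, the other a')
        terms = [{"a": True, "b": True}, {"a": False, "c": True}]
        p = {"a": 0.5, "b": 0.5, "c": 0.5}
        print(signal_probability(terms, p))  # 0.25 + 0.25 = 0.5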

  14. Counterexamples in probability

    CERN Document Server

    Stoyanov, Jordan M

    2013-01-01

    While most mathematical examples illustrate the truth of a statement, counterexamples demonstrate a statement's falsity. Enjoyable topics of study, counterexamples are valuable tools for teaching and learning. The definitive book on the subject with regard to probability, this third edition features the author's revisions and corrections plus a substantial new appendix.

  15. Epistemology and Probability

    CERN Document Server

    Plotnitsky, Arkady

    2010-01-01

    Offers an exploration of the relationships between epistemology and probability in the work of Niels Bohr, Werner Heisenberg, and Erwin Schrödinger; in quantum mechanics; and in modern physics. This book considers the implications of these relationships, and of quantum theory itself, for our understanding of the nature of thinking and knowledge in general.

  16. Transition probabilities for atoms

    International Nuclear Information System (INIS)

    Kim, Y.K.

    1980-01-01

    Current status of advanced theoretical methods for transition probabilities for atoms and ions is discussed. An experiment on the f values of the resonance transitions of the Kr and Xe isoelectronic sequences is suggested as a test for the theoretical methods

  17. Expanded explorations into the optimization of an energy function for protein design

    Science.gov (United States)

    Huang, Yao-ming; Bystroff, Christopher

    2014-01-01

    Nature possesses a secret formula for the energy as a function of the structure of a protein. In protein design, approximations are made to both the structural representation of the molecule and to the form of the energy equation, such that the existence of a general energy function for proteins is by no means guaranteed. Here we present new insights towards the application of machine learning to the problem of finding a general energy function for protein design. Machine learning requires the definition of an objective function, which carries with it the implied definition of success in protein design. We explored four objective functions, consisting of two functional forms, each with two criteria for success. Optimization was carried out by a Monte Carlo search through the space of all variable parameters. Cross-validation of the optimized energy function against a test set gave significantly different results depending on the choice of objective function, pointing to the relative correctness of the built-in assumptions. Novel energy cross-terms correct for the observed non-additivity of energy terms and an imbalance in the distribution of predicted amino acids. This paper expands on the work presented at ACM-BCB, Orlando, FL, October 2012. PMID:24384706
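
    The optimization described, a Monte Carlo search through the space of variable parameters, can be sketched as a random-walk Metropolis search over the weight vector of the energy function; the objective below is a stand-in for the paper's cross-validated design criteria:

```python
import math
import random

def monte_carlo_search(objective, n_params, steps=10_000, step_size=0.1, temperature=1.0):
    """Random-walk Monte Carlo minimization of objective(params).

    Improvements are always accepted; worse moves are accepted with the
    Metropolis probability exp(-increase / temperature), so the search can
    escape local minima of the objective.
    """
    params = [random.uniform(-1.0, 1.0) for _ in range(n_params)]
    score = objective(params)
    best, best_score = params[:], score
    for _ in range(steps):
        trial = [p + random.gauss(0.0, step_size) for p in params]
        trial_score = objective(trial)
        if trial_score <= score or random.random() < math.exp((score - trial_score) / temperature):
            params, score = trial, trial_score
            if score < best_score:
                best, best_score = params[:], score
    return best, best_score

# Toy check: recover weights minimizing a quadratic bowl centred at (0.3, -0.7).
weights, value = monte_carlo_search(lambda p: (p[0] - 0.3)**2 + (p[1] + 0.7)**2, n_params=2)
```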

  18. Study on energy demand function of korea considering replacement among energy sources and the structural changes of demand behavior

    Energy Technology Data Exchange (ETDEWEB)

    Moon, C.K. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1997-08-01

    In stressing the need for a careful study of the energy demand function, it should be noted that energy investment not only requires a long gestation period but also becomes a bottleneck in the productive capacity of an economy when investment is insufficient, so the adverse effects of an energy supply shortage are severe. In particular, the substitution/complementarity relationship between energy and capital, which corresponds to movement along an isoquant, bears directly on whether long-term economic development is possible under an energy crisis and on the selection of technology. Likewise, technological progress, which corresponds to a shift of the isoquant, bears on the same question, depending on whether its direction is energy-saving or energy-using. This study addresses the main issues and outlines of quantitative approaches to modeling energy demand, based on the accounting approach, the econometric approach, and production models; see the sketch below for one standard production-model specification. To model the energy demand of the Korean manufacturing industry, the relevant data were assembled and a positive analytical model is constructed and presented on that basis. 122 refs., 10 tabs.
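
    One standard way to make the energy-capital substitution question concrete is a constant-elasticity-of-substitution (CES) production function; this is an illustrative specification, not necessarily the one estimated in the study:

```latex
Y \;=\; A\left[\delta K^{-\rho} + (1-\delta)\,E^{-\rho}\right]^{-1/\rho},
\qquad \sigma \;=\; \frac{1}{1+\rho},
```

    where K is capital, E is energy, and the elasticity of substitution σ governs how easily capital substitutes for energy. Movements along an isoquant trace this substitution at fixed technology, while changes in A (or in δ, if biased toward one input) shift the isoquant itself, which is exactly the distinction between substitution and technological progress drawn above.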

  19. Magnetic field effects on the quantum wire energy spectrum and Green's function

    International Nuclear Information System (INIS)

    Morgenstern Horing, Norman J.

    2010-01-01

    We analyze the energy spectrum and propagation of electrons in a quantum wire on a 2D host medium in a normal magnetic field, representing the wire by a 1D Dirac delta function potential which would support just a single subband state in the absence of the magnetic field. The associated Schrödinger Green's function for the quantum wire is derived in closed form in terms of known functions, and the Landau-quantized subband energy spectrum is examined.
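
    For orientation, the zero-field limit referred to is the textbook one-dimensional attractive delta well, which binds exactly one state; this is a standard result, not taken from the paper itself:

```latex
V(y) = -\alpha\,\delta(y)
\quad\Longrightarrow\quad
\psi_0(y) = \sqrt{\kappa}\,e^{-\kappa|y|},\qquad
\kappa = \frac{m\alpha}{\hbar^2},\qquad
E_0 = -\frac{m\alpha^2}{2\hbar^2},
```

    which is the single subband state the wire supports before the magnetic field introduces Landau quantization.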

  20. Energy substrates to support glutamatergic and GABAergic synaptic function

    DEFF Research Database (Denmark)

    Schousboe, Arne; Bak, Lasse K; Sickmann, Helle M

    2007-01-01

    The main energy substrate for the brain under normal conditions is glucose, but at the cellular level, i.e., in neurons and astrocytes, lactate may play an important role as well. In addition, the possibility exists that glycogen, which functions as a glucose storage molecule and which is present only in astrocytes, could play a role...

  1. Energy vs. density on paths toward exact density functionals

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta

    2018-01-01

    Recently, the progression toward more exact density functional theory has been questioned, implying a need for more formal ways to systematically measure progress, i.e. a “path”. Here I use the Hohenberg-Kohn theorems and the definition of normality by Burke et al. to define a path toward exactness...

  2. An introduction to probability and stochastic processes

    CERN Document Server

    Melsa, James L

    2013-01-01

    Geared toward college seniors and first-year graduate students, this text is designed for a one-semester course in probability and stochastic processes. Topics covered in detail include probability theory, random variables and their functions, stochastic processes, linear system response to stochastic processes, Gaussian and Markov processes, and stochastic differential equations. 1973 edition.

  3. The probability of the false vacuum decay

    International Nuclear Information System (INIS)

    Kiselev, V.; Selivanov, K.

    1983-01-01

    A closed expression for the probability of false vacuum decay in (1+1) dimensions is given. The probability of false vacuum decay is expressed as the product of an exponential quasiclassical factor and a functional determinant of the given form. A method for calculating this determinant is developed, and a complete answer for (1+1) dimensions is given
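
    The structure described, an exponential quasiclassical factor times a functional determinant, is the familiar Callan-Coleman form, shown here schematically (the paper's explicit (1+1)-dimensional determinant is not reproduced):

```latex
\frac{\Gamma}{V} \;=\; A\,e^{-S_E[\phi_b]},\qquad
A \;\propto\; \left|\frac{{\det}'\left[-\partial^2 + U''(\phi_b)\right]}
{\det\left[-\partial^2 + U''(\phi_{\mathrm{fv}})\right]}\right|^{-1/2},
```

    where φ_b is the bounce solution, φ_fv the false vacuum, S_E the Euclidean action, and the prime denotes omission of zero modes.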

  4. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    André C. R. Martins

    2006-11-01

    In this article, I show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors under the normative view, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability in the way that has been observed.
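
    A concrete example of "probabilities with uncertainties associated with them" is a Beta belief over an unknown probability, updated by conjugate Bayesian inference; a heuristic that tracks something like the posterior mean would mimic this computation. A minimal sketch, illustrative only and not one of the models analyzed in the article:

```python
def beta_update(alpha, beta, successes, failures):
    """Conjugate Bayesian update of a Beta(alpha, beta) belief about an uncertain probability."""
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

# Uniform Beta(1, 1) prior, then 7 successes and 3 failures are observed.
a, b = beta_update(1.0, 1.0, successes=7, failures=3)
print(posterior_mean(a, b))  # 8/12 ~ 0.667, pulled toward the prior mean of 0.5
```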

  5. Negative probability in the framework of combined probability

    OpenAIRE

    Burgin, Mark

    2013-01-01

    Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...

  6. Of energy and the economy. Theory and evidence of their functional relationships

    Energy Technology Data Exchange (ETDEWEB)

    Chang, V.

    2007-07-01

    The author of the contribution under consideration offers a set of explicit functional relationships that link energy and the economy. Despite the reliance on energy permeating the whole economy, no such complete relationships had been presented before. The relevant questions are: (a) How related are energy and the economy? (b) What role does energy play in economic growth? To address them, the author theorizes the role of energy and then tests it with economic models, using data from 16 OECD countries from 1980 to 2001. The main results are the following: (a) Energy is a cross-country representative good whose prices are equalized when converted to a reference currency; thus, energy prices satisfy purchasing power parity. For all but one country, the half-life of the real exchange rate is less than a year, and as low as six months, shorter than those derived from other real exchange rate measures. (b) Considering energy a cross-time representative good, a country's utility function is inversely proportional to both its income share of energy and its energy price. The author obtains an explicit, unified two-dimensional (cross-country and time) production function with energy and non-energy as the two inputs. (c) The author derives a cross-country parity relationship for income shares of energy, similar to that for energy prices. Furthermore, the author provides an intertemporal connection between the trajectory of the income share of energy and the productivity growth of the economy. (d) The author demonstrates the trade-offs between energy efficiency and economic well-being, with the energy price being the medium of those trade-offs.
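
    The half-life figures quoted for the real exchange rate follow the standard AR(1) convention: if deviations decay as q_t = ρ·q_{t-1} + ε_t, the half-life is ln(1/2)/ln(ρ). A quick check with an illustrative persistence coefficient, not the paper's estimate:

```python
import math

def half_life(rho: float) -> float:
    """Periods for a deviation to halve under an AR(1) with persistence 0 < rho < 1."""
    return math.log(0.5) / math.log(rho)

print(half_life(0.90))  # ~6.6 periods; with monthly data, roughly half a year
```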

  7. Four-point correlation function of stress-energy tensors in N=4 superconformal theories

    CERN Document Server

    Korchemsky, G P

    2015-01-01

    We derive the explicit expression for the four-point correlation function of stress-energy tensors in four-dimensional N=4 superconformal theory. We show that it has a remarkably simple and suggestive form allowing us to predict a large class of four-point correlation functions involving the stress-energy tensor and other conserved currents. We then apply the obtained results on the correlation functions to computing the energy-energy correlations, which measure the flow of energy in the final states created from the vacuum by a source. We demonstrate that they are given by a universal function independent of the choice of the source. Our analysis relies only on N=4 superconformal symmetry and does not use the dynamics of the theory.
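
    The energy-energy correlations mentioned are built from the energy flow operator, a calorimeter measurement at large distance in the direction n; the standard definition is sketched below (normalization conventions vary):

```latex
\mathcal{E}(\vec n) \;=\; \lim_{r\to\infty} r^2 \int_{-\infty}^{\infty} dt\;
n^i\,T_{0i}(t, r\vec n),
\qquad
\mathrm{EEC}(\vec n_1,\vec n_2) \;\sim\;
\frac{\langle \mathcal{O}^\dagger\,\mathcal{E}(\vec n_1)\,\mathcal{E}(\vec n_2)\,\mathcal{O}\rangle}
{\langle \mathcal{O}^\dagger\mathcal{O}\rangle},
```

    where O is the source creating the final state; the universality result above says this correlation is independent of the choice of O.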

  8. Nonlocal exchange and kinetic-energy density functionals for electronic systems

    International Nuclear Information System (INIS)

    Glossman, M.D.; Rubio, A.; Balbas, L.C.; Alonso, J.A.

    1992-01-01

    The nonlocal weighted density approximation (WDA) to the exchange and kinetic-energy functionals of many-electron systems proposed several years ago by Alonso and Girifalco is used to compute, within the framework of density functional theory, the ground-state electronic density and total energy of noble-gas atoms and of neutral jellium-like sodium clusters containing up to 500 atoms. These results are compared with analogous calculations using the well-known Thomas-Fermi-Weizsäcker-Dirac (TFWD) approximations for the kinetic (TFW) and exchange (D) energy density functionals. An outstanding improvement of the total and exchange energies, of the density at the nucleus, and of the expectation values is obtained for atoms within the WDA scheme. For sodium clusters, the authors note a sizeable contribution of the nonlocal effects to the total energy and to the density profiles. In the limit of very large clusters, these effects should affect the surface energy of the bulk metal.
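
    For reference, the local TFWD functionals against which the WDA is benchmarked have the familiar forms (atomic units; λ is the von Weizsäcker coefficient, commonly taken between 1/9 and 1):

```latex
T_{\mathrm{TFW}}[n] = C_F\int n^{5/3}\,d\vec r + \frac{\lambda}{8}\int \frac{|\nabla n|^2}{n}\,d\vec r,
\qquad C_F = \tfrac{3}{10}(3\pi^2)^{2/3},
\\[4pt]
E_x^{\mathrm{D}}[n] = -C_x\int n^{4/3}\,d\vec r,
\qquad C_x = \tfrac{3}{4}\left(\tfrac{3}{\pi}\right)^{1/3}.
```

    Roughly speaking, the WDA replaces such strictly local expressions with weighted integrals of the density over a model pair-correlation function, which is the source of the nonlocal corrections reported above.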

  9. Analytical potential energy function for the Br + H₂ system

    Energy Technology Data Exchange (ETDEWEB)

    Kurosaki, Yuzuru [Japan Atomic Energy Research Inst., Kizu, Kyoto (Japan). Kansai Research Establishment

    2001-10-01

    Analytical functions with a many-body expansion for the ground and first-excited-state potential energy surfaces for the Br + H₂ system are newly presented in this work. These functions describe the abstraction and exchange reactions qualitatively well, although it has been found that the function for the ground-state potential surface is still quantitatively unsatisfactory. (author)
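
    The many-body expansion underlying such analytical surfaces writes the triatomic potential as pairwise terms plus an irreducible three-body correction; it is shown generically here, since the paper's fitted functional forms are not reproduced:

```latex
V(R_{\mathrm{BrH}}, R_{\mathrm{BrH'}}, R_{\mathrm{HH'}})
= \sum_{AB} V^{(2)}_{AB}(R_{AB})
+ V^{(3)}(R_{\mathrm{BrH}}, R_{\mathrm{BrH'}}, R_{\mathrm{HH'}}),
```

    where each two-body term vanishes as its pair separates and the three-body term vanishes whenever any one atom is removed to infinity, so the expansion dissociates correctly by construction.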

  10. Equation satisfied by electron-electron mutual Coulomb repulsion energy density functional

    OpenAIRE

    Joubert, Daniel P.

    2011-01-01

    The electron-electron mutual Coulomb repulsion energy density functional satisfies an equation that links functionals and functional derivatives at N-electron and (N-1)-electron densities for densities determined from the same adiabatic scaled external potential for the N-electron system.

  11. The Changes of Energy Interactions between Nucleus Function and Mitochondria Functions Causing Transmutation of Chronic Inflammation into Cancer Metabolism.

    Science.gov (United States)

    Ponizovskiy, Michail R

    2016-01-01

    Interactions between nucleus and mitochondria functions underlie the mechanism that maintains the stability of cellular internal energy, in accordance with the first law of thermodynamics, in able-bodied cells. Changes to this mechanism shift the stationary state of able-bodied cells into the quasi-stationary pathological state of acute inflammation, which passes into chronic inflammation and then transmutes into cancer metabolism. The influence of intruding etiologic pathogenic agents (microbes, viruses, etc.) causes these changes in the energy interactions between nucleus and mitochondria functions, producing general acute inflammation that passes into local chronic inflammation and then reverses into the transmutation to cancer metabolism. Interactions between biochemical processes and the biophysical operation of cellular capacitors create a supplementary mechanism for maintaining the stability of cellular internal energy, both in the norm and in pathology. A discussion of several related scientific works addresses the doubts raised by their authors.

  12. Evaluation of the Effects of Different Energy Drinks and Coffee on Endothelial Function.

    Science.gov (United States)

    Molnar, Janos; Somberg, John C

    2015-11-01

    Endothelial function plays an important role in circulatory physiology. There have been differing reports on the effect of energy drinks on endothelial function. We set out to evaluate the effect of three energy drinks and coffee on endothelial function. Endothelial function was evaluated in healthy volunteers using a device that performs digital peripheral arterial tonometry, measuring endothelial function as the reactive hyperemia index (RHI). Six volunteers (25 ± 7 years) received the energy drinks in random order at least 2 days apart. The drinks studied were 250 ml of "Red Bull" containing 80 mg caffeine, 57 ml of "5-hour Energy" containing 230 mg caffeine, and a 355 ml can of "NOS" energy drink containing 120 mg caffeine. Sixteen volunteers (25 ± 5 years) received a 473 ml cup of coffee containing 240 mg caffeine. Studies were performed before the drink (baseline) and at 1.5 and 4 hours after the drink. Two of the energy drinks (Red Bull and 5-hour Energy) significantly improved endothelial function at 4 hours after the drink, whereas one energy drink (NOS) and coffee did not change endothelial function significantly. RHI increased by 82 ± 129% (p = 0.028) and 63 ± 37% (p = 0.027) after 5-hour Energy and Red Bull, respectively. The RHI changed by 2 ± 30% (p = 1.000) after NOS and by 7 ± 30% (p = 1.000) after coffee. In conclusion, some energy drinks appear to significantly improve endothelial function. Caffeine does not appear to be the component responsible for these differences. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Contributions to quantum probability

    International Nuclear Information System (INIS)

    Fritz, Tobias

    2010-01-01

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do the outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that, in general, none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a finite set can occur as the outcome

  15. Waste Package Misload Probability

    International Nuclear Information System (INIS)

    Knudsen, J.K.

    2001-01-01

    The objective of this calculation is to calculate the probability of occurrence of fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and of FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in each event. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a
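
    The final step described, converting categorized event counts into a probability of occurrence per FA movement, is in essence a rate estimate. A minimal sketch with hypothetical counts, not the calculation's actual data, including the "rule of three" bound for the zero-event case:

```python
def misload_probability(events: int, movements: int) -> float:
    """Per-movement probability estimate from historical event counts.

    With zero observed events, return the 'rule of three' value 3/N,
    an approximate 95% upper bound rather than a point estimate.
    """
    if events == 0:
        return 3.0 / movements
    return events / movements

# Hypothetical: 4 misload events observed over 200,000 FA movements.
print(misload_probability(4, 200_000))  # 2e-05 per movement
```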

  16. Probability theory and applications

    CERN Document Server

    Hsu, Elton P

    1999-01-01

    This volume, with contributions by leading experts in the field, is a collection of lecture notes of the six minicourses given at the IAS/Park City Summer Mathematics Institute. It introduces advanced graduates and researchers in probability theory to several of the currently active research areas in the field. Each course is self-contained with references and contains basic materials and recent results. Topics include interacting particle systems, percolation theory, analysis on path and loop spaces, and mathematical finance. The volume gives a balanced overview of the current status of probability theory. An extensive bibliography for further study and research is included. This unique collection presents several important areas of current research and a valuable survey reflecting the diversity of the field.

  17. Paradoxes in probability theory

    CERN Document Server

    Eckhardt, William

    2013-01-01

    Paradoxes provide a vehicle for exposing misinterpretations and misapplications of accepted principles. This book discusses seven paradoxes surrounding probability theory. Some remain the focus of controversy; others have allegedly been solved, but the accepted solutions are demonstrably incorrect. Each paradox is shown to rest on one or more fallacies. Instead of the esoteric, idiosyncratic, and untested methods that have been brought to bear on these problems, the book invokes uncontroversial probability principles, acceptable both to frequentists and subjectivists. The philosophical disputation inspired by these paradoxes is shown to be misguided and unnecessary; for instance, startling claims concerning human destiny and the nature of reality are directly related to fallacious reasoning in a betting paradox, and a problem analyzed in philosophy journals is resolved by means of a computer program.

  18. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  19. Model uncertainty and probability

    International Nuclear Information System (INIS)

    Parry, G.W.

    1994-01-01

    This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses represent different types of uncertainty with respect to the modeling of a system. The importance of maintaining the distinction between the two types is illustrated with a simple example.

  20. Retrocausality and conditional probability

    International Nuclear Information System (INIS)

    Stuart, C.I.J.M.

    1989-01-01

    Costa de Beauregard has proposed that physical causality be identified with conditional probability. The proposal is shown to be vulnerable on two counts. The first, though mathematically trivial, seems to be decisive so far as the current formulation of the proposal is concerned. The second lies in a physical inconsistency which seems to have its source in a Copenhagen-like disavowal of realism in quantum mechanics. 6 refs. (Author)