Probability distributions for magnetotellurics
Stodt, John A.
1982-11-01
Estimates of the magnetotelluric transfer functions can be viewed as ratios of two complex random variables. It is assumed that the numerator and denominator are governed approximately by a joint complex normal distribution. Under this assumption, probability distributions are obtained for the magnitude, squared magnitude, logarithm of the squared magnitude, and the phase of the estimates. Normal approximations to the distributions are obtained by calculating mean values and variances from error propagation, and the distributions are plotted with their normal approximations for different percentage errors in the numerator and denominator of the estimates, ranging from 10% to 75%. The distribution of the phase is approximated well by a normal distribution for the range of errors considered, while the distribution of the logarithm of the squared magnitude is approximated by a normal distribution for a much larger range of errors than is the distribution of the squared magnitude. The distribution of the squared magnitude is most sensitive to the presence of noise in the denominator of the estimate, in which case the true distribution deviates significantly from normal behavior as the percentage errors exceed 10%. In contrast, the normal approximation to the distribution of the logarithm of the magnitude is useful for errors as large as 75%.
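The qualitative claim above, that the log of the squared magnitude stays near-normal while the squared magnitude itself does not, can be checked with a small Monte Carlo sketch. The parameters here (unit true transfer function, 50% circular complex noise in the numerator, 25% in the denominator) are illustrative assumptions, not the paper's actual computation:

```python
import math, random

random.seed(1)

def sample_ratio(n=20000, err_num=0.50, err_den=0.25):
    """Draw |N/D|^2 where N, D = 1 + circular complex Gaussian noise
    (assumed noise model; percentages set the noise std. deviation)."""
    sq = []
    for _ in range(n):
        num = 1 + complex(random.gauss(0, err_num / math.sqrt(2)),
                          random.gauss(0, err_num / math.sqrt(2)))
        den = 1 + complex(random.gauss(0, err_den / math.sqrt(2)),
                          random.gauss(0, err_den / math.sqrt(2)))
        sq.append(abs(num / den) ** 2)
    return sq

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

sq = sample_ratio()
log_sq = [math.log(x) for x in sq]
# The squared magnitude is strongly right-skewed; its log is nearly symmetric.
print(skewness(sq), skewness(log_sq))
```

The large positive skew of the squared magnitude, against the near-zero skew of its logarithm, is the deviation from normality the abstract describes.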
Superpositions of probability distributions
Jizba, Petr; Kleinert, Hagen
2008-09-01
Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v = σ² play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much more easily than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
Probability distribution relationships
Yousry Abdelkader
2013-05-01
In this paper, we are interested in showing the most famous distributions and their relations to other distributions in collected diagrams. Four diagrams are sketched as networks. The first concerns the continuous distributions and their relations. The second presents the discrete distributions. The third diagram depicts the famous limiting distributions. Finally, the Balakrishnan skew-normal density and its relationship with the other distributions are shown in the fourth diagram.
Diurnal distribution of sunshine probability
Aydinli, S.
1982-01-01
The diurnal distribution of the sunshine probability is essential for the predetermination of average irradiances and illuminances by solar radiation on sloping surfaces. Most meteorological stations have only monthly average values of the sunshine duration available. It is, therefore, necessary to compute the diurnal distribution of sunshine probability starting from the average monthly values. It is shown how the symmetric component of the distribution of the sunshine probability, which is a consequence of a ''sidescene effect'' of the clouds, can be calculated. The asymmetric components of the sunshine probability, depending on the location and the seasons, and their influence on the predetermination of the global radiation are investigated and discussed.
Exact Probability Distribution versus Entropy
Kerstin Andersson
2014-10-01
The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
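For small alphabets and short words, the guessing strategy described above can be evaluated exactly rather than approximated. The sketch below uses a zeroth-order model (letters drawn independently) with a made-up two-letter alphabet; the alphabet and probabilities are illustrative assumptions, not the article's data:

```python
from itertools import product

def expected_guesses(letter_probs, word_len):
    """Average number of guesses when words are guessed in decreasing
    order of probability, letters drawn independently from letter_probs."""
    probs = []
    for letters in product(letter_probs, repeat=word_len):
        p = 1.0
        for ch in letters:
            p *= letter_probs[ch]
        probs.append(p)
    probs.sort(reverse=True)  # guess the most probable words first
    # the word at rank k (1-based) costs k guesses, weighted by its probability
    return sum(k * p for k, p in enumerate(probs, start=1))

# two-letter alphabet, words of length 2:
# sorted word probabilities 0.49, 0.21, 0.21, 0.09
print(expected_guesses({"a": 0.7, "b": 0.3}, 2))  # ≈ 1.9
```

The exhaustive enumeration grows as (alphabet size)^(word length), which is exactly why the article turns to approximations for realistic sizes.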
Probability distributions for multimeric systems.
Albert, Jaroslav; Rooman, Marianne
2016-01-01
We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous; and the probability density functions (pdf) are well-approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising a sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
Asymptotic quantization of probability distributions
Klaus Pötzelberger
2003-01-01
We give a brief introduction to results on the asymptotics of quantization errors. The topics discussed include the quantization dimension, asymptotic distributions of sets of prototypes, asymptotically optimal quantizations, approximations and random quantizations.
The Multivariate Gaussian Probability Distribution
Ahrendt, Peter
2005-01-01
This technical report intends to gather information about the multivariate Gaussian distribution that was previously not (at least to my knowledge) to be found in one place and written as a reference manual. Additionally, some useful tips and tricks are collected that may be useful in practical...
Pre-aggregation for Probability Distributions
Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach
Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of probabilistic OLAP queries that operate on aggregate values that are probability distributions...... and the techniques to process these queries. The paper also presents the methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate...... multidimensional data analysis that is considered in this paper (i.e., approximate processing of probabilistic OLAP queries over probability distributions)....
Pre-Aggregation with Probability Distributions
Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach
2006-01-01
Motivated by the increasing need to analyze complex, uncertain multidimensional data this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how...... the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work...... is motivated with a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions....
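The core idea of aggregating distributions rather than atomic values can be sketched minimally: a SUM aggregate over two independent uncertain measures is the convolution of their discrete distributions, and the (possibly truncated) result can itself be stored as a pre-aggregate. This is an illustrative reconstruction of the concept, not the paper's algorithm:

```python
from collections import defaultdict

def convolve(p, q):
    """Distribution of X + Y for independent discrete X ~ p and Y ~ q,
    each given as a {value: probability} dict."""
    out = defaultdict(float)
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] += px * qy
    return dict(out)

def truncate(p, k):
    """Approximate a distribution by keeping its k most probable values
    and renormalizing -- the kind of size/accuracy trade-off needed when
    pre-aggregated distributions grow large."""
    top = sorted(p.items(), key=lambda kv: kv[1], reverse=True)[:k]
    z = sum(prob for _, prob in top)
    return {v: prob / z for v, prob in top}

a = {1: 0.5, 2: 0.5}
b = {1: 0.5, 2: 0.5}
print(convolve(a, b))  # {2: 0.25, 3: 0.5, 4: 0.25}
```

Storing `convolve(a, b)` once and reusing it for higher-level aggregates is the pre-aggregation idea; `truncate` shows one way the approximation step could control distribution size.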
Probability: A Matter of Life and Death
Hassani, Mehdi; Kippen, Rebecca; Mills, Terence
2016-01-01
Life tables are mathematical tables that document probabilities of dying and life expectancies at different ages in a society. Thus, the life table contains some essential features of the health of a population. Probability is often regarded as a difficult branch of mathematics. Life tables provide an interesting approach to introducing concepts…
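The life-table computation itself is simple enough to sketch: given one-year death probabilities q_x, the survivorship column and the curtate life expectancy follow directly. The q_x values below are a toy example, not actual demographic data:

```python
def curtate_life_expectancy(q):
    """Expected number of whole years lived from age 0, given one-year
    death probabilities q[0], q[1], ... (a life-table column).
    e_0 = sum over k >= 1 of P(survive to age k)."""
    surv = 1.0
    e0 = 0.0
    for qx in q:
        surv *= (1.0 - qx)   # probability of reaching the next birthday
        e0 += surv
    return e0

# Sanity check: constant q = 0.5 gives a geometric lifetime with
# expectation 0.5 + 0.25 + 0.125 + ... = 1 whole year.
print(curtate_life_expectancy([0.5] * 50))  # ≈ 1.0
```

Lowering every q_x raises the expectancy, which is the kind of concrete, checkable statement that makes life tables a friendly entry point to probability.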
Exact probability distribution functions for Parrondo's games
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital-dependent and history-dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution, with two limiting distributions for odd and even numbers of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but we have now found this phenomenon in model systems and obtained a theoretical understanding of it. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
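The paradox behind the capital-dependent game can be verified exactly, with no Monte Carlo, by computing the stationary distribution of the capital modulo 3. The sketch below uses the standard textbook parameters (ε = 0.005; game B win probabilities 1/10 − ε and 3/4 − ε), which are an assumption here rather than values taken from the paper:

```python
def drift(p):
    """Long-run gain per round of a game whose win probability depends
    on capital mod 3: p = (p0, p1, p2). The stationary distribution of
    the mod-3 Markov chain is found by power iteration."""
    pi = [1 / 3] * 3
    for _ in range(10000):
        pi = [p[(i - 1) % 3] * pi[(i - 1) % 3]          # arrived by winning from i-1
              + (1 - p[(i + 1) % 3]) * pi[(i + 1) % 3]  # arrived by losing from i+1
              for i in range(3)]
    return sum(pi[i] * (2 * p[i] - 1) for i in range(3))

eps = 0.005
game_a = (0.5 - eps,) * 3                        # slightly losing coin flip
game_b = (0.1 - eps, 0.75 - eps, 0.75 - eps)     # losing capital-dependent game
mix = tuple((a + b) / 2 for a, b in zip(game_a, game_b))  # play A or B at random

print(drift(game_a), drift(game_b), drift(mix))
# A and B each lose on average, yet the random mixture wins.
```

Randomly mixing the two games changes the stationary occupation of the "bad" capital states, which is what flips the sign of the drift.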
Proposal for Modified Damage Probability Distribution Functions
Pedersen, Preben Terndrup; Hansen, Peter Friis
1996-01-01
Immediately following the Estonia disaster, the Nordic countries established a project entitled "Safety of Passenger/RoRo Vessels". As part of this project, the present proposal for modified damage stability probability distribution functions has been developed and submitted to "Sub-committee on st......
Flavor-Universal Form of Neutrino Oscillation Probabilities in Matter
Minakata, Hisakazu
2015-01-01
We construct a new perturbative framework to describe neutrino oscillation in matter with the unique expansion parameter \\epsilon, which is defined as \\Delta m^2_{21} / \\Delta m^2_{ren} with the renormalized atmospheric \\Delta m^2_{ren} \\equiv \\Delta m^2_{31} - s^2_{12} \\Delta m^2_{21}. It allows us to derive the maximally compact expressions of the oscillation probabilities in matter to order \\epsilon in the form akin to those in vacuum. This feature allows immediate physical interpretation of the formulas, and facilitates understanding of physics of neutrino oscillations in matter. Moreover, quite recently, we have shown that our three-flavor oscillation probabilities P(\
Calculating Cumulative Binomial-Distribution Probabilities
Scheuer, Ernest M.; Bowerman, Paul N.
1989-01-01
Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
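The computation CUMBIN performs, a cumulative binomial probability as used in k-out-of-n reliability analysis, can be sketched with a numerically stable term-ratio recurrence. This is an independent illustration in Python, not the CUMBIN program itself:

```python
def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), built from the term ratio
    P(X=i)/P(X=i-1) to avoid overflowing factorials. Assumes 0 < p < 1."""
    if k >= n:
        return 1.0
    term = (1.0 - p) ** n          # P(X = 0)
    total = term
    for i in range(1, k + 1):
        term *= (n - i + 1) / i * p / (1.0 - p)   # P(X=i) from P(X=i-1)
        total += term
    return total

def k_out_of_n_reliability(k, n, r):
    """System works if at least k of n components (each with
    reliability r) work: P(X >= k) = 1 - P(X <= k-1)."""
    return 1.0 - binom_cdf(k - 1, n, r)

print(binom_cdf(2, 5, 0.5))              # 0.5, i.e. (1 + 5 + 10) / 32
print(k_out_of_n_reliability(2, 3, 0.9)) # 0.972
```

The recurrence keeps every intermediate quantity a probability, which is what makes such programs usable for the large n arising in reliability work.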
Generating pseudo-random discrete probability distributions
Maziero, Jonas, E-mail: jonasmaziero@gmail.com [Universidade Federal de Santa Maria (UFSM), RS (Brazil). Departamento de Fisica
2015-08-15
The generation of pseudo-random discrete probability distributions is of paramount importance for a wide range of stochastic simulations spanning from Monte Carlo methods to the random sampling of quantum states for investigations in quantum information science. In spite of its significance, a thorough exposition of such a procedure is lacking in the literature. In this article, we present relevant details concerning the numerical implementation and applicability of what we call the iid, normalization, and trigonometric methods for generating an unbiased probability vector p = (p_1, ..., p_d). An immediate application of these results regarding the generation of pseudo-random pure quantum states is also described.
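One way to obtain an unbiased probability vector, plausibly in the spirit of the normalization method named above, is to normalize iid exponential variates, which yields a vector uniform on the probability simplex (a flat Dirichlet). The choice of exponentials and the variable names are this sketch's assumptions, not necessarily the authors' exact formulation:

```python
import random

random.seed(7)

def random_prob_vector(d):
    """Probability vector p = (p_1, ..., p_d) uniform on the simplex:
    normalize iid Exp(1) variates (equivalent to Dirichlet(1, ..., 1))."""
    e = [random.expovariate(1.0) for _ in range(d)]
    s = sum(e)
    return [x / s for x in e]

p = random_prob_vector(4)
print(p, sum(p))  # nonnegative entries summing to 1
```

Note that naively normalizing iid uniforms instead would bias the vector away from the simplex corners, which is exactly the kind of pitfall a careful exposition of these methods has to address.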
Simple and Compact Expressions for Neutrino Oscillation Probabilities in Matter
Minakata, Hisakazu
2015-01-01
We reformulate perturbation theory for neutrino oscillations in matter with an expansion parameter related to the ratio of the solar to the atmospheric Delta m^2 scales. Unlike previous works, we use a renormalized basis in which certain first-order effects are taken into account in the zeroth-order Hamiltonian. Using this perturbation theory we derive extremely compact expressions for the neutrino oscillation probabilities in matter. We find, for example, that the $\
Probability distributions with summary graph structure
Wermuth, Nanny
2010-01-01
A set of independence statements may define the independence structure of interest in a family of joint probability distributions. This structure is often captured by a graph that consists of nodes representing the random variables and of edges that couple node pairs. One important class are multivariate regression chain graphs. They describe the independences of stepwise processes, in which at each step single or joint responses are generated given the relevant explanatory variables in their past. For joint densities that then result after possible marginalising or conditioning, we use summary graphs. These graphs reflect the independence structure implied by the generating process for the reduced set of variables and they preserve the implied independences after additional marginalising and conditioning. They can identify generating dependences which remain unchanged and alert to possibly severe distortions due to direct and indirect confounding. Operators for matrix representations of graphs are used to de...
The Probability Distribution for a Biased Spinner
Foster, Colin
2012-01-01
This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
The Probability Distribution for a Biased Spinner
Foster, Colin
2012-01-01
This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
Scoring Rules for Subjective Probability Distributions
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd
report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this one can “calibrate” inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk averse agents have...
Polarization Mode Dispersion Probability Distribution for Arbitrary Mode Coupling
Anonymous
2003-01-01
The probability distribution of the differential group delay for arbitrary mode coupling is simulated with the Monte Carlo method. Fitting the simulation results, we obtain the probability distribution function for arbitrary mode coupling.
Matrix-exponential distributions in applied probability
Bladt, Mogens
2017-01-01
This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...
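The defining construction is concrete enough to sketch: a PH/ME density is f(x) = α e^{Sx} s with exit vector s = −S·1. Below, an Erlang-2 distribution written in this form is checked against its classical density x e^{-x}; the Taylor-series matrix exponential is a naive illustration, adequate only for the tiny, well-scaled matrix used here:

```python
import math

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def expm_vec(S, x, v, terms=40):
    """e^{Sx} v via the Taylor series -- fine for small well-behaved
    matrices, not for production use."""
    out = v[:]
    term = v[:]
    for k in range(1, terms):
        term = [x / k * t for t in mat_vec(S, term)]
        out = [o + t for o, t in zip(out, term)]
    return out

def me_density(alpha, S, x):
    """Matrix-exponential density f(x) = alpha . e^{Sx} . s, s = -S 1."""
    ones = [1.0] * len(S)
    s = [-t for t in mat_vec(S, ones)]
    return sum(a * vi for a, vi in zip(alpha, expm_vec(S, x, s)))

# Erlang-2 with rate 1 as a phase-type distribution:
alpha = [1.0, 0.0]
S = [[-1.0, 1.0], [0.0, -1.0]]
print(me_density(alpha, S, 1.0), 1.0 * math.exp(-1.0))  # both ≈ 0.36788
```

Replacing the scalar intensity by the sub-intensity matrix S is exactly the "matrix in place of a parameter" idea the book builds on.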
Some New Approaches to Multivariate Probability Distributions.
1986-12-01
...characterizations of distributions, such as the Marshall-Olkin bivariate distribution or Fréchet's multivariate distribution with continuous marginals, or a...problem mentioned in Remark 8. He has given in this context a uniqueness theorem in the bivariate case under certain assumptions. The following
Foundations of quantization for probability distributions
Graf, Siegfried
2000-01-01
Due to the rapidly increasing need for methods of data compression, quantization has become a flourishing field in signal and image processing and information theory. The same techniques are also used in statistics (cluster analysis), pattern recognition, and operations research (optimal location of service centers). The book gives the first mathematically rigorous account of the fundamental theory underlying these applications. The emphasis is on the asymptotics of quantization errors for absolutely continuous and special classes of singular probabilities (surface measures, self-similar measures) presenting some new results for the first time. Written for researchers and graduate students in probability theory the monograph is of potential interest to all people working in the disciplines mentioned above.
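In one dimension, the optimal quantizers whose error asymptotics the book analyzes can be approximated on a sample with Lloyd's algorithm: alternate nearest-prototype assignment and centroid updates, which never increases the quantization error. A self-contained sketch (the sample and parameters are arbitrary choices):

```python
import random

random.seed(3)

def quant_error(sample, protos):
    """Mean squared quantization error of a codebook on a sample."""
    return sum(min((x - p) ** 2 for p in protos) for x in sample) / len(sample)

def lloyd(sample, prototypes, iters=50):
    """One-dimensional Lloyd algorithm: returns improved prototypes."""
    protos = sorted(prototypes)
    for _ in range(iters):
        cells = [[] for _ in protos]
        for x in sample:  # assign each point to its nearest prototype
            j = min(range(len(protos)), key=lambda i: (x - protos[i]) ** 2)
            cells[j].append(x)
        # move each prototype to the centroid of its cell
        protos = [sum(c) / len(c) if c else p for c, p in zip(cells, protos)]
    return protos

sample = [random.gauss(0, 1) for _ in range(5000)]
init = [-3.0, -1.0, 1.0, 3.0]
protos = lloyd(sample, init)
print(quant_error(sample, init), quant_error(sample, protos))
```

Tracking how this error decays as the number of prototypes grows is the empirical counterpart of the asymptotic quantization-error results the book develops.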
Eliciting Subjective Probability Distributions with Binary Lotteries
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd
2015-01-01
We test in a laboratory experiment the theoretical prediction that risk attitudes have a surprisingly small role in distorting reports from true belief distributions. We find evidence consistent with theory in our experiment.
Probability distribution fitting of schedule overruns in construction projects
Love, P.E.D.; Sing, C-P.; Wang, X.; Edwards, D.J.; Odeyinka, H.
2013-01-01
The probability of schedule overruns for construction and engineering projects can be ascertained using a ‘best fit’ probability distribution from an empirical distribution. The statistical characteristics of schedule overruns occurring in 276 Australian construction and engineering projects were analysed. Skewness and kurtosis values revealed that schedule overruns are non-Gaussian. Theoretical probability distributions were then fitted to the schedule overrun data; including the Kolmogorov–...
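The non-Gaussianity screening step described above is easy to reproduce: sample skewness and excess kurtosis far from zero argue against a normal model before any distribution fitting begins. A generic sketch on made-up overrun figures, not the study's 276-project dataset:

```python
def shape_stats(xs):
    """Sample skewness and excess kurtosis (population-style moments)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3.0
    return skew, excess_kurtosis

# made-up schedule overruns (% of planned duration) with a long right tail
overruns = [2, 3, 3, 4, 5, 5, 6, 8, 10, 15, 25, 60]
skew, kurt = shape_stats(overruns)
print(skew, kurt)  # both clearly positive -> a Gaussian model is inappropriate
```

A clearly positive skew and heavy tail are what motivate fitting candidate theoretical distributions and testing them with goodness-of-fit statistics, as the study does.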
Scoring Rules for Subjective Probability Distributions
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd
2017-01-01
Subjective beliefs are elicited routinely in economics experiments. However, such elicitation often suffers from two possible disadvantages. First, beliefs are recovered in the form of a summary statistic, usually the mean, of the underlying latent distribution. Second, recovered beliefs are bias...
NONPARAMETRIC ESTIMATION OF CHARACTERISTICS OF PROBABILITY DISTRIBUTIONS
Orlov A. I.
2015-10-01
The article is devoted to nonparametric point and interval estimation of characteristics of a probability distribution (the expectation, median, variance, standard deviation, coefficient of variation) from sample results. Sample values are regarded as realizations of independent and identically distributed random variables with an arbitrary distribution function possessing the required number of moments. Nonparametric analysis procedures are compared with parametric procedures based on the assumption that the sample values have a normal distribution. Point estimators are constructed in the obvious way, using sample analogs of the theoretical characteristics. Interval estimators are based on the asymptotic normality of sample moments and of functions of them. Nonparametric asymptotic confidence intervals are obtained through a special technology for deriving asymptotic relations in applied statistics. In the first step, this technology applies the multidimensional central limit theorem to sums of vectors whose coordinates are powers of the initial random variables. The second step transforms the limiting multivariate normal vector to obtain the vector of interest to the researcher; here linearization is used and infinitesimal quantities are discarded. The third step is a rigorous justification of the results at the standard level of mathematical-statistical reasoning, which usually requires the necessary and sufficient conditions for the inheritance of convergence. The article contains 10 numerical examples. The initial data are observations of the operating time to the limit state for 50 cutting tools. Methods developed under the assumption of a normal distribution can lead to noticeably distorted conclusions in situations where the normality hypothesis fails. The practical recommendation is: for the analysis of real data we should use nonparametric confidence limits
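The first-step construction, an asymptotic confidence interval built only from sample moments with no normality assumption on the data, can be sketched for the expectation, the simplest of the listed characteristics. The operating times below are made-up numbers, not the article's cutting-tool data:

```python
import math

def mean_confidence_interval(xs, z=1.96):
    """Asymptotic (CLT-based) nonparametric confidence interval for the
    expectation: xbar +/- z * s / sqrt(n). Valid for any underlying
    distribution with finite variance; z = 1.96 gives ~95% coverage."""
    n = len(xs)
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)   # unbiased sample variance
    half = z * math.sqrt(s2 / n)
    return xbar - half, xbar + half

times = [12.0, 15.5, 9.8, 20.1, 14.2, 11.7, 17.3, 13.9, 16.0, 10.5]
lo, hi = mean_confidence_interval(times)
print(lo, hi)
```

The interval's validity rests on the central limit theorem rather than on normality of the data, which is the article's central point; the analogous intervals for the variance or coefficient of variation require the higher-moment machinery it describes.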
Stable Probability Distributions and their Domains of Attraction
J.L. Geluk (Jaap); L.F.M. de Haan (Laurens)
1997-01-01
The theory of stable probability distributions and their domains of attraction is derived in a direct way (avoiding the usual route via infinitely divisible distributions) using Fourier transforms. Regularly varying functions play an important role in the exposition.
Semi-stable distributions in free probability theory
Anonymous
2006-01-01
Semi-stable distributions, in classical probability theory, are characterized as limiting distributions of subsequences of normalized partial sums of independent and identically distributed random variables. We establish the noncommutative counterpart of semi-stable distributions. We study the characterization of noncommutative semi-stability through the free cumulant transform and develop free semi-stability and the domain of semi-stable attraction in free probability theory.
Incorporating Skew into RMS Surface Roughness Probability Distribution
Stahl, Mark T.; Stahl, H. Philip.
2013-01-01
The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
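The claimed effect, that a symmetric Gaussian fit overestimates the mode of a right-skewed roughness distribution, is easy to reproduce on synthetic data. Here lognormal values stand in for RMS roughness measurements; this is an assumption for illustration, and neither the paper's data nor its proposed asymmetric distribution is reproduced:

```python
import random

random.seed(11)

# synthetic right-skewed "roughness" sample
sample = [random.lognormvariate(0.0, 1.0) for _ in range(20000)]

# a Gaussian fit is symmetric, so its mode coincides with the sample mean
gaussian_mode = sum(sample) / len(sample)

# crude empirical mode: peak of a fixed-width histogram
width = 0.1
counts = {}
for x in sample:
    b = int(x / width)
    counts[b] = counts.get(b, 0) + 1
empirical_mode = (max(counts, key=counts.get) + 0.5) * width

print(gaussian_mode, empirical_mode)
# the Gaussian fit places the most probable roughness far too high
```

For a lognormal the true mode sits well below the mean, so a polishing spec driven by the Gaussian "most probable" value would be needlessly conservative, mirroring the paper's argument.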
Probability distributions in risk management operations
Artikis, Constantinos
2015-01-01
This book is about the formulations, theoretical investigations, and practical applications of new stochastic models for fundamental concepts and operations of the discipline of risk management. It also examines how these models can be useful in the descriptions, measurements, evaluations, and treatments of risks threatening various modern organizations. Moreover, the book makes clear that such stochastic models constitute very strong analytical tools which substantially facilitate strategic thinking and strategic decision making in many significant areas of risk management. In particular the incorporation of fundamental probabilistic concepts such as the sum, minimum, and maximum of a random number of continuous, positive, independent, and identically distributed random variables in the mathematical structure of stochastic models significantly supports the suitability of these models in the developments, investigations, selections, and implementations of proactive and reactive risk management operations. The...
Some explicit expressions for the probability distribution of force magnitude
Saralees Nadarajah
2008-08-01
Recently, empirical investigations have suggested that the components of contact forces follow the exponential distribution. However, explicit expressions for the probability distribution of the corresponding force magnitude have not been known and only approximations have been used in the literature. In this note, for the first time, I provide explicit expressions for the probability distribution of the force magnitude. Both two-dimensional and three-dimensional cases are considered.
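The setting can be illustrated numerically: with iid exponential Cartesian components, the mean force magnitude E[√(F_x² + F_y²)] obtained by Monte Carlo agrees with direct two-dimensional numerical integration. This checks the two-dimensional setup only; the note's closed-form expressions are not reproduced here:

```python
import math, random

random.seed(5)

def mc_mean_magnitude(n=40000):
    """Monte Carlo estimate of E[sqrt(X^2 + Y^2)] for X, Y iid Exp(1)."""
    total = 0.0
    for _ in range(n):
        total += math.hypot(random.expovariate(1.0), random.expovariate(1.0))
    return total / n

def quad_mean_magnitude(h=0.02, cutoff=15.0):
    """Same expectation by a midpoint rule on the double integral of
    sqrt(x^2 + y^2) * exp(-x - y); the tail beyond `cutoff` is negligible."""
    total = 0.0
    steps = int(cutoff / h)
    for i in range(steps):
        x = (i + 0.5) * h
        for j in range(steps):
            y = (j + 0.5) * h
            total += math.hypot(x, y) * math.exp(-x - y) * h * h
    return total

print(mc_mean_magnitude(), quad_mean_magnitude())  # the two agree closely
```

Agreement between the two estimates confirms that the exponential-component model is being integrated correctly, which is the starting point for deriving the magnitude distribution itself.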
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
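The estimator can be sketched for rooted binary trees written as nested tuples: each tree decomposes into (clade, split) events, and a tree's posterior probability is estimated as the product of conditional split frequencies rather than its raw sample frequency. This is a simplified reconstruction of the idea, not Larget's software:

```python
from collections import Counter

def splits(tree):
    """Return (clade, list of (clade, split) pairs) for a rooted binary
    tree given as nested 2-tuples with string leaf names."""
    if isinstance(tree, str):
        return frozenset([tree]), []
    left, right = tree
    lc, ls = splits(left)
    rc, rs = splits(right)
    clade = lc | rc
    return clade, [(clade, frozenset([lc, rc]))] + ls + rs

def ccd_probability(tree, sample):
    """Estimate P(tree) as the product over its clades of conditional
    split frequencies observed in the posterior sample."""
    clade_counts, split_counts = Counter(), Counter()
    for t in sample:
        for clade, split in splits(t)[1]:
            clade_counts[clade] += 1
            split_counts[(clade, split)] += 1
    p = 1.0
    for clade, split in splits(tree)[1]:
        if split_counts[(clade, split)] == 0:
            return 0.0     # tree contains a split never sampled
        p *= split_counts[(clade, split)] / clade_counts[clade]
    return p

t1 = (("A", "B"), ("C", "D"))
t2 = (("A", "C"), ("B", "D"))
sample = [t1, t1, t2]
print(ccd_probability(t1, sample))  # 2/3, matching its sample frequency here
```

On this toy sample the estimate coincides with the relative frequency, but when sampled trees share clades across different topologies the product form pools information, which is what makes it more accurate for low-probability and unsampled trees.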
How Can Histograms Be Useful for Introducing Continuous Probability Distributions?
Derouet, Charlotte; Parzysz, Bernard
2016-01-01
The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are: fuzzy transformation via a ranking function, and stochastic transformation when the α-cut technique and linguistic hedges are used in the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
Application-dependent Probability Distributions for Offshore Wind Speeds
Morgan, E. C.; Lackner, M.; Vogel, R. M.; Baise, L. G.
2010-12-01
The higher wind speeds of the offshore environment make it an attractive setting for future wind farms. With sparser field measurements, the theoretical probability distribution of short-term wind speeds becomes more important in estimating values such as average power output and fatigue load. While previous studies typically compare the accuracy of probability distributions using R2, we show that validation based on this metric is not consistent with validation based on engineering parameters of interest, namely turbine power output and extreme wind speed. Thus, in order to make the most accurate estimates possible, the probability distribution that an engineer picks to characterize wind speeds should depend on the design parameter of interest. We introduce the Kappa and Wakeby probability distribution functions to wind speed modeling, and show that these two distributions, along with the Biweibull distribution, fit wind speed samples better than the more widely accepted Weibull and Rayleigh distributions based on R2. Additionally, out of the 14 probability distributions we examine, the Kappa and Wakeby give the most accurate and least biased estimates of turbine power output. The fact that the 2-parameter Lognormal distribution estimates extreme wind speeds (i.e. fits the upper tail of wind speed distributions) with least error indicates that no single distribution performs satisfactorily for all applications. Our use of a large dataset composed of 178 buoys (totaling ~72 million 10-minute wind speed observations) makes these findings highly significant, both in terms of large sample size and broad geographical distribution across various wind regimes. [Figure: boxplots of R2 from the fit of each of the 14 distributions to the 178 buoy wind speed samples; distributions are ranked from left to right by ascending median R2, with the Biweibull having the median closest to 1.]
Most probable degree distribution at fixed structural entropy
Ginestra Bianconi
2008-06-01
The structural entropy is the entropy of the ensemble of uncorrelated networks with a given degree sequence. Here we derive the most probable degree distribution emerging when we distribute stubs (or half-edges) randomly through the nodes of the network while keeping the structural entropy fixed. This degree distribution is found to decay as a Poisson distribution when the entropy is maximized and to have a power-law tail with an exponent approaching 2 when the entropy is minimized.
PROBABILITY DISTRIBUTION FUNCTION OF NEAR-WALL TURBULENT VELOCITY FLUCTUATIONS
[Anonymous]
2005-01-01
By large eddy simulation (LES), turbulent databases of channel flows at different Reynolds numbers were established. Then, the probability distribution functions of the streamwise and wall-normal velocity fluctuations were obtained and compared with the corresponding normal distributions. By hypothesis testing, the deviation from the normal distribution was analyzed quantitatively. The skewness and flatness factors were also calculated, and the variations of these two factors in the viscous sublayer, buffer layer and log-law layer were discussed. Also illustrated are the relations between the probability distribution functions and the burst events (sweeps of high-speed fluid and ejections of low-speed fluid) in the viscous sublayer, buffer layer and log-law layer. Finally, the variations of the probability distribution functions with Reynolds number were examined.
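The skewness and flatness factors discussed above are the normalized third and fourth central moments; for a Gaussian distribution they equal 0 and 3, which is the baseline the deviations are measured against. A minimal sketch of that check on synthetic Gaussian samples (standing in for the LES velocity data, which is not reproduced here):

```python
import math
import random

random.seed(3)
# synthetic Gaussian "velocity fluctuations" in place of the LES database
u = [random.gauss(0.0, 1.0) for _ in range(200000)]

mean = sum(u) / len(u)
central_moment = lambda p: sum((x - mean) ** p for x in u) / len(u)

sigma = math.sqrt(central_moment(2))
skewness = central_moment(3) / sigma ** 3   # 0 for a Gaussian
flatness = central_moment(4) / sigma ** 4   # 3 for a Gaussian
print(skewness, flatness)
```

Departures of these two factors from 0 and 3 in the buffer layer are what quantify the non-Gaussian behavior reported in the abstract.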
Generating Probability Distributions using Multivalued Stochastic Relay Circuits
Lee, David
2011-01-01
The problem of random number generation dates back to von Neumann's work in 1951. Since then, many algorithms have been developed for generating unbiased bits from complex correlated sources as well as for generating arbitrary distributions from unbiased bits. An equally interesting, but less studied aspect is the structural component of random number generation as opposed to the algorithmic aspect. That is, given a network structure imposed by nature or physical devices, how can we build networks that generate arbitrary probability distributions in an optimal way? In this paper, we study the generation of arbitrary probability distributions in multivalued relay circuits, a generalization in which relays can take on any of N states and the logical 'and' and 'or' are replaced with 'min' and 'max' respectively. Previous work was done on two-state relays. We generalize these results, describing a duality property and networks that generate arbitrary rational probability distributions. We prove that these network...
Evidence for Truncated Exponential Probability Distribution of Earthquake Slip
Thingbaijam, Kiran K. S.
2016-07-13
Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
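As a sketch, the truncated exponential law referred to here has density λe^(−λs)/(1 − e^(−λ s_max)) on [0, s_max], i.e. an exponential renormalized so that no mass lies beyond the maximum possible slip. The snippet below (with made-up parameter values, not values fitted to SRCMOD) simply verifies the normalization numerically:

```python
import math

def trunc_exp_pdf(s, lam, s_max):
    """Exponential density truncated to slip values in [0, s_max]."""
    if not 0.0 <= s <= s_max:
        return 0.0
    return lam * math.exp(-lam * s) / (1.0 - math.exp(-lam * s_max))

# illustrative parameters only; the paper fits lam and s_max to slip models
lam, s_max = 0.8, 10.0
n = 100_000
h = s_max / n
# midpoint-rule integral of the density over its support
total = h * sum(trunc_exp_pdf((i + 0.5) * h, lam, s_max) for i in range(n))
print(round(total, 6))  # → 1.0
```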
NORMALLY DISTRIBUTED PROBABILITY MEASURE ON THE METRIC SPACE OF NORMS
Á.G. HORVÁTH
2013-01-01
In this paper we propose a method to construct probability measures on the space of convex bodies. For this purpose, first, we introduce the notion of thinness of a body. Then we show the existence of a measure with the property that its pushforward by the thinness function is a probability measure of truncated normal distribution. Finally, we improve this method to find a measure satisfying some important properties in geometric measure theory.
Probability distributions for Poisson processes with pile-up
Sevilla, Diego J R
2013-01-01
In this paper, two parametric probability distributions capable of describing the statistics of X-ray photon detection by a CCD are presented. They are formulated from simple models that account for the pile-up phenomenon, in which two or more photons are counted as one. These models are based on the Poisson process, but they have an extra parameter, which subsumes all the detailed mechanisms of the pile-up process and must be fitted to the data statistics simultaneously with the rate parameter. The new probability distributions, one for the number of counts per time bin (Poisson-like) and the other for waiting times (exponential-like), are tested by fitting them to real data statistics and against each other through numerical simulations, and the results are analyzed and compared. The probability distributions presented here can be used as background statistical models to derive likelihood functions for statistical methods in signal analysis.
Probability distribution functions in the finite density lattice QCD
Ejiri, S; Aoki, S; Kanaya, K; Saito, H; Hatsuda, T; Ohno, H; Umeda, T
2012-01-01
We study the phase structure of QCD at high temperature and density by lattice QCD simulations adopting a histogram method. We try to solve the problems that arise in the numerical study of finite-density QCD, focusing on the probability distribution function (histogram). As a first step, we investigate the quark mass dependence and the chemical potential dependence of the probability distribution function as a function of the Polyakov loop when all quark masses are sufficiently large, and study the properties of the distribution function. The effect from the complex phase of the quark determinant is estimated explicitly. The shape of the distribution function changes with the quark mass and the chemical potential. Through the shape of the distribution, the critical surface which separates the first-order transition and crossover regions in the heavy quark region is determined for the 2+1-flavor case.
Assigning probability distributions to input parameters of performance assessment models
Mishra, Srikanta [INTERA Inc., Austin, TX (United States)
2002-02-01
This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, the method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of Bayes' theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
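Of the fitting techniques listed, the method of moments is the simplest to illustrate: equate the sample moments to the model's theoretical moments and solve for the parameters. A hedged sketch for the gamma distribution (mean = kθ, variance = kθ²), run on synthetic data rather than the Yucca Mountain dataset:

```python
import random
import statistics

random.seed(0)
# synthetic data from a gamma distribution with shape k=2 and scale theta=3
data = [random.gammavariate(2.0, 3.0) for _ in range(50_000)]

m = statistics.fmean(data)
v = statistics.variance(data)

# method of moments: solve mean = k*theta and var = k*theta**2
theta_hat = v / m
k_hat = m / theta_hat
print(k_hat, theta_hat)  # estimates close to 2.0 and 3.0
```

Maximum likelihood for the gamma shape parameter has no closed form, which is exactly why the simpler moment estimates above are often used as starting values for an MLE solver.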
Probability distribution of arrival times in quantum mechanics
Delgado, V
1998-01-01
In a previous paper [Phys. Rev. A, in press] we introduced a self-adjoint operator $\hat{\mathcal{T}}(X)$ whose eigenstates can be used to define consistently a probability distribution of the time of arrival at a given spatial point. In the present work we show that the probability distribution previously proposed can be well understood on classical grounds, in the sense that it is given by the expectation value of a certain positive definite operator $\hat{J}^{(+)}(X)$, which is nothing but a straightforward quantum version of the modulus of the classical current. For quantum states highly localized in momentum space about a certain momentum $p_0$…
Parametric Probability Distribution Functions for Axon Diameters of Corpus Callosum
Farshid eSepehrband
2016-05-01
Axon diameter is an important neuroanatomical characteristic of the nervous system that alters in the course of neurological disorders such as multiple sclerosis. Axon diameters vary, even within a fiber bundle, and are not normally distributed. An accurate distribution function is therefore beneficial, either to describe axon diameters that are obtained from a direct measurement technique (e.g., microscopy) or to infer them indirectly (e.g., using diffusion-weighted MRI). The gamma distribution is a common choice for this purpose (particularly for the inferential approach) because it resembles the distribution profile of measured axon diameters, which has been consistently shown to be non-negative and right-skewed. In this study we compared a wide range of parametric probability distribution functions against empirical data obtained from electron microscopy images. We observed that the gamma distribution fails to accurately describe the main characteristics of the axon diameter distribution, such as the location and scale of the mode and the profile of the distribution tails. We also found that the generalized extreme value distribution consistently fitted the measured distribution better than other distribution functions. This suggests that there may be distinct subpopulations of axons in the corpus callosum, each with their own distribution profiles. In addition, we observed that several other distributions outperformed the gamma distribution, yet had the same number of unknown parameters; these were the inverse Gaussian, log-normal, log-logistic and Birnbaum-Saunders distributions.
Do Biases in Probability Judgment Matter in Markets? Experimental Evidence
1987-01-01
Microeconomic theory typically concerns exchange between individuals or firms in a market setting. To make predictions precise, individuals are usually assumed to use the laws of probability in structuring and revising beliefs about uncertainties. Recent evidence, mostly gathered by psychologists, suggests probability theories might be inadequate descriptive models of individual choice. (See the books edited by Daniel Kahneman et al., 1982a, and by Hal Arkes and ...
Probability distributions of the electroencephalogram envelope of preterm infants.
Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro
2015-06-01
To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and investigate the intrinsic characteristics of early brain development in preterm infants. Twenty neurologically normal sets of EEGs recorded in infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. Hilbert transform was applied to extract the envelope. We determined the suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the statistics of mode showed significant linear relationships with PCA, and, therefore, it was considered a useful index in PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating stationary nature in developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Augmenting momentum resolution with well tuned probability distributions
Landi, Gregorio
2016-01-01
The realistic probability distributions of a previous article are applied to the reconstruction of tracks in a constant magnetic field. The complete forms and their schematic approximations produce excellent momentum estimations, drastically better than standard fits. A simplified derivation of one of our probability distributions is illustrated. The momentum reconstructions are compared with standard (least-squares) fits using two different position algorithms: the eta-algorithm and the two-strip center of gravity. The quality of our results is expressed as the increase in magnetic field and signal-to-noise ratio required for the standard fit reconstructions to overlap with those from our best distributions. The data and the simulations are tuned on the tracker of a running experiment and its double-sided microstrip detectors; here each detector side is simulated to measure the magnetic bending. To overlap with our best distributions, the magnetic field must be increased by a factor 1.5 for the least squares based on the eta-a...
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
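Modern standard libraries cover much of what these Fortran routines provided. For instance, the Gaussian distribution function can be built from the error function, which the report lists among its mathematical functions (a Python sketch, not the USGS code itself):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Gaussian (normal) cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# the familiar 97.5% point of the standard normal
print(round(normal_cdf(1.96), 3))  # → 0.975
```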
Probability distribution of extreme share returns in Malaysia
Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin
2014-09-01
The objective of this study is to investigate the suitable probability distribution to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share price data obtained from Bursa Malaysia over the period of 2000 to 2012. The study starts with summary statistics of the data, which provide a clue about likely candidates for the best-fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson (PE3) distributions, is evaluated. The method of L-moments is used in parameter estimation. Based on several goodness-of-fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best-fitting distributions to represent the weekly and monthly maximum share returns, respectively, in the Malaysian stock market during the studied period.
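The weekly and monthly maxima used here are classic block maxima. A minimal sketch of that preprocessing step on simulated daily returns (the study itself uses Bursa Malaysia prices and fits the candidate distributions by L-moments afterwards):

```python
import random

random.seed(1)
# simulated daily returns standing in for Bursa Malaysia data
daily_returns = [random.gauss(0.0, 0.01) for _ in range(260)]  # ~1 trading year

# block maxima: the maximum daily return per week, 5 trading days per block
weekly_max = [max(daily_returns[i:i + 5])
              for i in range(0, len(daily_returns), 5)]
print(len(weekly_max))  # → 52
```

Extreme value theory motivates this reduction: under broad conditions the distribution of block maxima approaches the GEV family, which is why the candidate list above is dominated by extreme value distributions.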
Log-concave Probability Distributions: Theory and Statistical Testing
An, Mark Yuing
1996-01-01
This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and mul...
Probability Measure of Navigation pattern predition using Poisson Distribution Analysis
Dr.V.Valli Mayil
2012-06-01
The World Wide Web has become one of the most important media to store, share and distribute information. The rapid expansion of the web has provided a great opportunity to study user and system behavior by exploring web access logs. Web usage mining is the application of data mining techniques to large web data repositories in order to extract usage patterns. Every web server keeps a log of all transactions between the server and the clients. The log data collected by web servers contain information about every click a user makes on the web documents of the site. This log information needs to be analyzed and interpreted in order to obtain knowledge about actual user preferences in accessing web pages. In recent years several methods have been proposed for mining web log data. This paper addresses the statistical method of Poisson distribution analysis to find the higher-probability session sequences, which are then used to test web application performance. The analysis of large volumes of clickstream data demands the employment of data mining methods. Conducting data mining on web server logs involves determining frequently occurring access sequences. The Poisson distribution gives the frequency probability of specific events when the average rate of occurrence is known; it is a discrete distribution, used in this paper to find the probability that a particular page is visited by the user.
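The Poisson frequency referred to above is the standard pmf P(X = k) = e^(−λ) λ^k / k!. A small sketch with a made-up average visit rate (not figures from actual server logs):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# e.g. a page averaging 3 visits per session: probability of exactly 5 visits
print(round(poisson_pmf(5, 3.0), 4))  # → 0.1008
```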
On Probability Distributions for Trees: Representations, Inference and Learning
Denis, François; Gilleron, Rémi; Tommasi, Marc; Gilbert, Édouard
2008-01-01
We study probability distributions over free algebras of trees. Probability distributions can be seen as particular (formal power) tree series [Berstel et al 82, Esik et al 03], i.e. mappings from trees to a semiring K. A widely studied class of tree series is the class of rational (or recognizable) tree series, which can be defined either in an algebraic way or by means of multiplicity tree automata. We argue that the algebraic representation is very convenient for modeling probability distributions over a free algebra of trees. First, as in the string case, the algebraic representation makes it possible to design learning algorithms for the whole class of probability distributions defined by rational tree series. Note that learning algorithms for rational tree series correspond to learning algorithms for weighted tree automata where both the structure and the weights are learned. Second, the algebraic representation can be easily extended to deal with unranked trees (like XML trees, where a symbol may have an unbounded num...
Probability distributions of continuous measurement results for conditioned quantum evolution
Franquet, A.; Nazarov, Yuli V.
2017-02-01
We address the statistics of continuous weak linear measurement on a few-state quantum system that is subject to a conditioned quantum evolution. For a conditioned evolution, both the initial and final states of the system are fixed: the latter is achieved by postselection at the end of the evolution. The statistics may drastically differ from the nonconditioned case, and the interference between initial and final states can be observed in the probability distributions of measurement outcomes as well as in the average values exceeding the conventional range of nonconditioned averages. We develop a proper formalism to compute the distributions of measurement outcomes, and evaluate and discuss the distributions in experimentally relevant setups. We demonstrate the manifestations of the interference between initial and final states in various regimes. We consider analytically simple examples of nontrivial probability distributions. We reveal peaks (or dips) at half-quantized values of the measurement outputs. We discuss in detail the case of zero overlap between initial and final states, demonstrating anomalously big average outputs and a sudden jump in the time-integrated output. We present and discuss the numerical evaluation of the probability distribution, aiming at extending the analytical results and describing a realistic experimental situation of a qubit in the regime of resonant fluorescence.
Convolutions Induced Discrete Probability Distributions and a New Fibonacci Constant
Rajan, Arulalan; Rao, Vittal; Rao, Ashok
2010-01-01
This paper proposes another constant that can be associated with the Fibonacci sequence. In this work, we look at the probability distributions generated by the linear convolution of the Fibonacci sequence with itself, and by the linear convolution of the symmetrized Fibonacci sequence with itself. We observe that for a distribution generated by the linear convolution of the standard Fibonacci sequence with itself, the variance converges to 8.4721359... . Also, for a distribution generated by the linear convolution of symmetrized Fibonacci sequences, the variance converges in an average sense to 17.1942..., which is approximately twice the value obtained with the common Fibonacci sequence.
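The 8.4721359... figure is easy to reproduce numerically: convolve the first n Fibonacci numbers with themselves, normalize the result into a probability distribution over the convolution index, and take its variance. (Numerically the limit matches 4 + 2√5 ≈ 8.4721, though that closed form is our observation, not a claim from the paper.)

```python
# first n Fibonacci numbers, exact integer arithmetic
n = 100
f = [1, 1]
while len(f) < n:
    f.append(f[-1] + f[-2])

# linear self-convolution: c[k] = sum_j f[j] * f[k-j]
c = [0] * (2 * n - 1)
for i, fi in enumerate(f):
    for j, fj in enumerate(f):
        c[i + j] += fi * fj

# normalize to a probability distribution over the index k; take its variance
total = sum(c)
mean = sum(k * ck for k, ck in enumerate(c)) / total
var = sum((k - mean) ** 2 * ck for k, ck in enumerate(c)) / total
print(round(var, 4))  # → 8.4721
```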
Measuring Robustness of Timetables at Stations using a Probability Distribution
Jensen, Lars Wittrup; Landex, Alex
Stations are often the limiting capacity factor in a railway network. This induces interdependencies, especially at at-grade junctions, causing network effects. This paper presents three traditional methods that can be used to measure the complexity of a station, indicating the robustness [...] infrastructure layouts given a timetable. These two methods provide different precision at the expense of a more complex calculation process. The advanced and more precise method is based on a probability distribution that can describe the expected delay between two trains as a function of the buffer time. [...] This paper proposes to use the exponential distribution, only taking non-negative delays into account, but any probability distribution can be used. Furthermore, the paper proposes that the calculation parameters be estimated from existing delay data at a station to achieve a higher precision. As delay...
Probability Distribution Function of Passive Scalars in Shell Models
LIU Chun-Ping; ZHANG Xiao-Qiang; LIU Yu-Rong; WANG Guang-Rui; HE Da-Ren; CHEN Shi-Gang; ZHU Lu-Jin
2008-01-01
A shell-model version of the passive scalar problem is introduced, inspired by the model of K. Ohkitani and M. Yakhot [Phys. Rev. Lett. 60 (1988) 983; Prog. Theor. Phys. 81 (1988) 329]. As in the original problem, the prescribed random velocity field is Gaussian and δ-correlated in time. The deterministic differential equations are regarded as a nonlinear Langevin equation. Then, the Fokker-Planck equations for the PDF of passive scalars are obtained and solved numerically. In the energy input range (n < 5, where n is the shell number), the probability distribution function (PDF) of passive scalars is near the Gaussian distribution. In the inertial range (5 < n < 16) and the dissipation range (n ≥ 17), the PDF of passive scalars shows obvious intermittency, and the scaling power of the passive scalar is anomalous. The results of numerical simulations are compared with experimental measurements.
Distribution probability of large-scale landslides in central Nepal
Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi
2014-12-01
Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are (1) difficulties in their identification and delineation, (2) their role as sources of small-scale failures, and (3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation of large-scale landslide distribution is also derived. The equation is validated by applying it to another area. For the new area, the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability value could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
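A landslide-distribution equation obtained by logistic regression maps the geomorphological predictors to a probability through the logistic function. A sketch with invented coefficients and predictor names (the paper's fitted values are not reproduced here):

```python
import math

def landslide_probability(coeffs, intercept, features):
    """Logistic-regression probability from a linear predictor z."""
    z = intercept + sum(c * x for c, x in zip(coeffs, features))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical coefficients for slope (deg) and local relief (m) predictors;
# these are illustrative only, not the paper's fitted regression
p = landslide_probability([0.05, 0.002], -3.0, [30.0, 600.0])
print(round(p, 3))  # → 0.426
```

Thresholding such probabilities over a map grid, and comparing against mapped landslides, is what produces the receiver operating characteristic curve (AUC = 0.699) cited above.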
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
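A stripped-down version of the idea on [0, 1], assuming NumPy is available: matching a degree-N polynomial p(x) = Σ aᵢxⁱ to target moments mⱼ = ∫ xʲ p(x) dx yields a linear system whose matrix has Hilbert entries 1/(i+j+1). This toy sketch (not the paper's full algorithm, which adds checks for rigor) recovers the uniform density from its first moments:

```python
import numpy as np

# target moments m_j = ∫_0^1 x^j p(x) dx of the uniform density p(x) = 1
m = np.array([1.0, 1.0 / 2.0, 1.0 / 3.0])

# ∫_0^1 x^i * x^j dx = 1/(i+j+1): the Hilbert matrix
H = np.array([[1.0 / (i + j + 1) for j in range(3)] for i in range(3)])

a = np.linalg.solve(H, m)   # polynomial coefficients a_0, a_1, a_2
print(np.round(a, 6))       # recovers p(x) = 1, i.e. a ≈ [1, 0, 0]
```

Note that Hilbert matrices become badly conditioned as the degree grows, which is one reason the convenience of polynomial convolutions mentioned above comes with numerical caveats.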
The probability distribution model of air pollution index and its dominants in Kuala Lumpur
AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah
2016-11-01
This paper focuses on statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely the log-normal, exponential, Gamma and Weibull, in the search for the best-fit distribution for the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict in criterion results for selecting the best distribution was overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of the air pollutant data in Kuala Lumpur.
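For the log-normal candidate above, the maximum-likelihood fit is simply the mean and standard deviation of the log-data, which makes it a convenient sketch of the fitting step (run on synthetic concentrations, not the Kuala Lumpur API records):

```python
import math
import random

random.seed(2)
# synthetic pollutant concentrations drawn from a lognormal(mu=1.0, sigma=0.5)
data = [random.lognormvariate(1.0, 0.5) for _ in range(10_000)]

# lognormal MLE: mean and standard deviation of the log-transformed data
logs = [math.log(x) for x in data]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / len(logs))
print(round(mu, 1), round(sigma, 1))  # → 1.0 0.5
```

Each candidate family (gamma, Weibull, exponential) would be fitted analogously, after which goodness-of-fit criteria such as Kolmogorov-Smirnov statistics or information criteria rank the fits, as the abstract describes.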
Unitary equilibrations: probability distribution of the Loschmidt echo
Venuti, Lorenzo Campos
2009-01-01
Closed quantum systems evolve unitarily and therefore cannot converge in a strong sense to an equilibrium state starting out from a generic pure state. Nevertheless, for large system size one observes temporal typicality: for the overwhelming majority of time instants, the statistics of observables is practically indistinguishable from an effective equilibrium one. In this paper we consider the Loschmidt echo (LE) to study this sort of unitary equilibration after a quench. We draw several conclusions on general grounds and on the basis of an exactly solvable example of a quasi-free system. In particular we focus on the whole probability distribution of observing a given value of the LE after waiting a long time. Depending on the interplay between the initial state and the quench Hamiltonian, we find different regimes reflecting different equilibration dynamics. When the perturbation is small and the system is away from criticality, the probability distribution is Gaussian. However, close to criticali...
Measurement of probability distributions for internal stresses in dislocated crystals
Wilkinson, Angus J.; Tarleton, Edmund; Vilalta-Clemente, Arantxa; Collins, David M. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Jiang, Jun; Britton, T. Benjamin [Department of Materials, Imperial College London, Royal School of Mines, Exhibition Road, London SW7 2AZ (United Kingdom)
2014-11-03
Here, we analyse residual stress distributions obtained from various crystal systems using high resolution electron backscatter diffraction (EBSD) measurements. Histograms showing stress probability distributions exhibit tails extending to very high stress levels. We demonstrate that these extreme stress values are consistent with the functional form that should be expected for dislocated crystals. Analysis initially developed by Groma and co-workers for X-ray line profile analysis and based on the so-called “restricted second moment of the probability distribution” can be used to estimate the total dislocation density. The generality of the results is illustrated by application to three quite different systems, namely, face centred cubic Cu deformed in uniaxial tension, a body centred cubic steel deformed to larger strain by cold rolling, and hexagonal InAlN layers grown on misfitting sapphire and silicon carbide substrates.
Fibonacci Sequence, Recurrence Relations, Discrete Probability Distributions and Linear Convolution
Rajan, Arulalan; Rao, Ashok; Jamadagni, H S
2012-01-01
The classical Fibonacci sequence is known to exhibit many fascinating properties. In this paper, we explore the Fibonacci sequence and integer sequences generated by second-order linear recurrence relations with positive integer coefficients from the point of view of the probability distributions that they induce. We obtain generalizations of some of the known limiting properties of these probability distributions and present certain optimal properties of the classical Fibonacci sequence in this context. In addition, we also look at the self linear convolution of linear recurrence relations with positive integer coefficients. Analysis of the self linear convolution is focused on locating the maximum in the resulting sequence. This analysis also highlights the influence that the largest positive real root of the "characteristic equation" of the linear recurrence relations with positive integer coefficients has on the location of the maximum. In particular, when the largest positive real root is 2, the locatio...
Outage probability of distributed beamforming with co-channel interference
Yang, Liang
2012-03-01
In this letter, we consider a distributed beamforming scheme (DBF) in the presence of equal-power co-channel interferers for both amplify-and-forward and decode-and-forward relaying protocols over Rayleigh fading channels. We first derive outage probability expressions for the DBF systems. We then present a performance analysis for a scheme relying on source selection. Numerical results are finally presented to verify our analysis. © 2011 IEEE.
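The letter's closed-form expressions for the interference-limited DBF system are not reproduced here. As a minimal related illustration, the outage probability of a single Rayleigh-faded link without interference has the well-known form P_out = 1 − exp(−γ_th/γ̄), which a Monte Carlo run can confirm; the threshold and mean SNR values below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

gamma_bar = 10.0   # mean SNR, linear scale (assumed value)
gamma_th = 5.0     # outage threshold (assumed value)

# Rayleigh fading -> instantaneous SNR is exponentially distributed with mean gamma_bar
snr = rng.exponential(gamma_bar, size=200_000)
p_mc = np.mean(snr < gamma_th)

# closed form: P_out = 1 - exp(-gamma_th / gamma_bar)
p_exact = 1.0 - np.exp(-gamma_th / gamma_bar)
print(p_mc, p_exact)
```

The full analysis in the letter generalizes this single-link calculation to multiple relays and equal-power co-channel interferers.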
Testing for the maximum cell probabilities in multinomial distributions
XIONG Shifeng; LI Guoying
2005-01-01
This paper investigates one-sided hypothesis testing for p(1), the largest cell probability of a multinomial distribution. The small-sample test of Ethier (1982) is extended to general cases. Based on an estimator of p(1), a class of large-sample tests is proposed. The asymptotic power of the above tests under local alternatives is derived. An example is presented at the end of the paper.
Estimating probable flaw distributions in PWR steam generator tubes
Gorman, J.A.; Turner, A.P.L. [Dominion Engineering, Inc., McLean, VA (United States)
1997-02-01
This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.
Steady-state distributions of probability fluxes on complex networks
Chełminiak, Przemysław; Kurzyński, Michał
2017-02-01
We consider a simple model of Markovian stochastic dynamics on complex networks to examine the statistical properties of the probability fluxes. An additional transition, called hereafter a gate, powered by an external constant force, breaks detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emergent under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to as well as far from the equilibrium state. Other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate, and a change in the network size, are studied by means of computer simulations and discussed in terms of the rigorous theoretical predictions.
The Probability Distribution Model of Wind Speed over East Malaysia
Nurulkamal Masseran
2013-07-01
Many studies have found that wind speed is the most significant parameter of wind power. Thus, an accurate determination of the probability distribution of wind speed is an important step before estimating the wind energy potential over a particular region. Utilizing an accurate distribution will minimize the uncertainty in wind resource estimates and improve the site assessment phase of planning. In general, different regions have different wind regimes; hence, it is reasonable to expect that different wind distributions will be found for different regions. Because wind regimes vary according to the region of a particular country, nine different statistical distributions were fitted to the mean hourly wind speed data from 20 wind stations in East Malaysia for the period from 2000 to 2009. The values of the Kolmogorov-Smirnov statistic, Akaike's Information Criterion, Bayesian Information Criterion and the R2 correlation coefficient were compared across the distributions to determine the best fit for describing the observed data. A good fit for most of the stations in East Malaysia was found using the Gamma and Burr distributions, though no clear pattern was observed across all regions of East Malaysia. However, the Gamma distribution was a clear fit to the data from all stations in southern Sabah.
Electromagnetic force distribution inside matter
Mansuripur, Masud; Wright, Ewan M
2013-01-01
Using the Finite Difference Time Domain method, we solve Maxwell's equations numerically and compute the distribution of electromagnetic fields and forces inside material media. The media are generally specified by their dielectric permittivity epsilon(w) and magnetic permeability mu(w), representing small, transparent dielectric and magnetic objects such as platelets and micro-beads. Using two formulations of the electromagnetic force-density, one due to H. A. Lorentz [Collected Papers 2, 164 (1892)], the other due to A. Einstein and J. Laub [Ann. Phys. 331, 541 (1908)], we show that the force-density distribution inside a given object can differ substantially between the two formulations. This is remarkable, considering that the total force experienced by the object is always the same, irrespective of whether the Lorentz or the Einstein-Laub formula is employed. The differences between the two formulations should be accessible to measurement in deformable objects.
Research on probability distribution of port cargo throughput
SUN Liang; TAN De-rong
2008-01-01
In order to more accurately examine developing trends in gross cargo throughput, we have modeled the probability distribution of cargo throughput. Gross cargo throughput is determined by the time spent by cargo ships in the port and the operating efficiency of handling equipment. Gross cargo throughput is the sum of all compound variables determining each aspect of cargo throughput for every cargo ship arriving at the port. Probability distribution was determined using the Wald equation. The results show that the variability of gross cargo throughput primarily depends on the different times required by different cargo ships arriving at the port. This model overcomes the shortcoming of previous models: inability to accurately determine the probability of a specific value of future gross cargo throughput. Our proposed model of cargo throughput depends on the relationship between time required by a cargo ship arriving at the port and the operational capacity of handling equipment at the port. At the same time, key factors affecting gross cargo throughput are analyzed. In order to test the efficiency of the model, the cargo volume of a port in Shandong Province was used as an example. In the case study the actual results matched our theoretical analysis.
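The Wald-equation step can be illustrated with a toy compound-sum model: if N ships arrive per period and each handles an independent cargo amount X, Wald's identity gives E[S] = E[N]·E[X] for the period total S. The arrival and handling parameters below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

lam = 8.0      # mean number of ships arriving per period (assumed)
mu = 1200.0    # mean cargo handled per ship, in tonnes (assumed)

periods = 50_000
totals = np.empty(periods)
for d in range(periods):
    n = rng.poisson(lam)                            # ships arriving this period
    totals[d] = rng.exponential(mu, size=n).sum()   # total throughput this period

# Wald's identity: E[S] = E[N] * E[X]
print(totals.mean(), lam * mu)
```

The identity holds here because the number of arrivals is independent of the per-ship handling amounts; the variability of the total is then driven mainly by the variability in N, as the abstract notes.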
Methods for fitting a parametric probability distribution to most probable number data.
Williams, Michael S; Ebel, Eric D
2012-07-01
Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganism per milliliter or the data are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. The performance of the two fitting methods is compared for two
Phase diagram of epidemic spreading - unimodal vs. bimodal probability distributions
Lancic, Alen; Sikic, Mile; Stefancic, Hrvoje
2009-01-01
Disease spreading on complex networks is studied in the SIR model. Simulations on empirical complex networks reveal two specific regimes of disease spreading: local containment and epidemic outbreak. The variables measuring the extent of disease spreading are in general characterized by a bimodal probability distribution. Phase diagrams of disease spreading for empirical complex networks are introduced. A theoretical model of disease spreading on an m-ary tree is investigated both analytically and in simulations. It is shown that the model reproduces qualitative features of the phase diagrams of disease spreading observed in empirical complex networks. The role of the tree-like structure of complex networks in disease spreading is discussed.
Log-concave Probability Distributions: Theory and Statistical Testing
An, Mark Yuing
1996-01-01
This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...
Net-proton probability distribution in heavy ion collisions
Braun-Munzinger, P; Karsch, F; Redlich, K; Skokov, V
2011-01-01
We compute net-proton probability distributions in heavy ion collisions within the hadron resonance gas model. The model results are compared with data taken by the STAR Collaboration in Au-Au collisions at sqrt(s_{NN})= 200 GeV for different centralities. We show that in peripheral Au-Au collisions the measured distributions, and the resulting first four moments of net-proton fluctuations, are consistent with results obtained from the hadron resonance gas model. However, data taken in central Au-Au collisions differ from the predictions of the model. The observed deviations can not be attributed to uncertainties in model parameters. We discuss possible interpretations of the observed deviations.
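In the independent-Poisson picture underlying the hadron resonance gas baseline, the net-proton number (protons minus antiprotons) follows a Skellam distribution, whose mean is μ_p − μ_p̄ and variance μ_p + μ_p̄. A sketch with assumed multiplicities (not the STAR values):

```python
import numpy as np
from scipy.stats import skellam

mu_p, mu_pbar = 4.1, 2.9   # assumed mean proton / antiproton multiplicities

# Skellam distribution: difference of two independent Poisson variables
mean = skellam.mean(mu_p, mu_pbar)   # = mu_p - mu_pbar
var = skellam.var(mu_p, mu_pbar)     # = mu_p + mu_pbar

# cross-check by directly differencing Poisson counts
rng = np.random.default_rng(3)
diff = rng.poisson(mu_p, 200_000) - rng.poisson(mu_pbar, 200_000)
print(mean, var, diff.mean(), diff.var())
```

Deviations of the measured cumulants from this Skellam baseline are exactly the kind of signal the comparison with STAR data probes.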
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and an unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real-valued) continuous random variables and for integer-valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed-form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer-valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer-valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding of this kind is useful in evaluating the performance of data compression schemes.
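For the unconstrained continuous case with p = 1, the maximum-entropy density is the Laplace density. A quick numeric check compares its differential entropy against that of a zero-mean Gaussian with the same first absolute moment (the L1 norm value below is an arbitrary choice for the sketch):

```python
import numpy as np

m1 = 1.0  # fixed L1 norm E|X| (arbitrary choice)

# Laplace with scale b has E|X| = b and differential entropy 1 + ln(2b)
b = m1
h_laplace = 1.0 + np.log(2.0 * b)

# zero-mean Gaussian with E|X| = sigma * sqrt(2/pi), entropy 0.5*ln(2*pi*e*sigma^2)
sigma = m1 * np.sqrt(np.pi / 2.0)
h_gauss = 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)

print(h_laplace, h_gauss)  # the Laplace density attains the larger entropy
```

Any other density with the same E|X| (the Gaussian here is just one competitor) has strictly smaller differential entropy, in line with the tabulated maximum-entropy results.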
Landslide Probability Assessment by the Derived Distributions Technique
Muñoz, E.; Ochoa, A.; Martínez, H.
2012-12-01
Landslides are potentially disastrous events that bring human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions, and anthropic intervention. This paper studies landslides triggered by rain, commonly known as "soil-slips", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a humid front enters the soil, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model
Soil architecture and distribution of organic matter
Kooistra, M.J.; Noordwijk, van M.
1996-01-01
The biological component of soil structure varies greatly in quality and quantity, occurs on different scales, and varies throughout the year. It is far less predictable than the physical part and human impact. The occurrence and distribution of organic matter depends on several processes, related t
Probability Distribution and Projected Trends of Daily Precipitation in China
CAO Li-Ge; ZHONG Jun; SU Bu-Da; ZHAI Jian-Qing; Marco GEMMER
2013-01-01
Based on observed daily precipitation data from 540 stations and 3,839 gridded data points from the high-resolution regional climate model COSMO-Climate Limited-area Modeling (CCLM) for 1961–2000, the ability of CCLM to simulate daily precipitation in China is examined, and the variation of the daily precipitation distribution pattern is revealed. By applying probability distribution and extreme value theory to the projected daily precipitation (2011–2050) under the SRES A1B scenario with CCLM, trends of daily precipitation series and daily precipitation extremes are analyzed. Results show that, except for the western Qinghai-Tibetan Plateau and South China, the distribution patterns of the kurtosis and skewness calculated from the simulated and observed series are consistent with each other; their spatial correlation coefficients are above 0.75. The CCLM captures the distribution characteristics of daily precipitation over China well. It is projected that in some parts of the Jianghuai region, central-eastern Northeast China and Inner Mongolia, the kurtosis and skewness will increase significantly, and precipitation extremes will increase during 2011–2050. The projected increases of the maximum daily rainfall and of the longest non-precipitation period during the flood season in the aforementioned regions also indicate increasing trends of droughts and floods in the next 40 years.
Baer, P.; Mastrandrea, M.
2006-12-01
Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question: what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly
Non-Gaussian probability distributions of solar wind fluctuations
E. Marsch
The probability distributions of field differences ∆x(τ) = x(t+τ) − x(t), where the variable x(t) may denote any solar wind scalar field or vector-field component at time t, have been calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU. It is found that for comparatively long time lags τ, ranging from a few hours to 1 day, the differences are normally distributed according to a Gaussian. For shorter time lags, of less than ten minutes, significant changes in shape are observed. The distributions are often spikier and narrower than the equivalent Gaussian distribution with the same standard deviation, and they are enhanced for large, reduced for intermediate and enhanced for very small values of ∆x. This result is in accordance with fluid observations and numerical simulations. Hence statistical properties are dominated at small scales τ by large fluctuation amplitudes that are sparsely distributed, which is direct evidence for spatial intermittency of the fluctuations. This is in agreement with results from earlier analyses of the structure functions of ∆x. The non-Gaussian features are differently developed for the various types of fluctuations. The relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD) turbulence is pointed out, and contact is made with existing theoretical concepts of intermittency in fluid turbulence.
Subspace Learning via Local Probability Distribution for Hyperspectral Image Classification
Huiwu Luo
2015-01-01
The computational procedure for hyperspectral imagery (HSI) is extremely complex, not only due to the high-dimensional information, but also due to the highly correlated data structure. The need for effective processing and analysis of HSI has met many difficulties. Dimensionality reduction has proven to be a powerful tool for high-dimensional data analysis, and local Fisher's linear discriminant analysis (LFDA) is an effective method for HSI processing. In this paper, a novel approach, called PD-LFDA, is proposed to overcome the weakness of LFDA. PD-LFDA emphasizes the probability distribution (PD) in LFDA, where the maximum distance is replaced with local variance for the construction of the weight matrix, and the class prior probability is applied to compute the affinity matrix. The proposed approach increases the discriminant ability of the transformed features in low-dimensional space. Experimental results on the Indian Pines 1992 data indicate that the proposed approach significantly outperforms the traditional alternatives.
Some Useful Distributions and Probabilities for Cellular Networks
Yu, Seung Min
2011-01-01
The cellular network is one of the most useful networks for wireless communications and now universally used. There have been a lot of analytic results about the performance of the mobile user at a specific location such as the cell center or edge. On the other hand, there have been few analytic results about the performance of the mobile user at an arbitrary location. Moreover, to the best of our knowledge, there is no analytic result on the performance of the mobile user at an arbitrary location considering the mobile user density. In this paper, we use the stochastic geometry approach and derive useful distributions and probabilities for cellular networks. Using those, we analyze the performance of the mobile user, e.g., outage probability at an arbitrary location considering the mobile user density. Under some assumptions, those can be expressed by closed form formulas. Our analytic results will provide a fundamental framework for the performance analysis of cellular networks, which will significantly red...
Characterizing the Lyman-$\alpha$ forest flux probability distribution function using Legendre polynomials
Cieplak, Agnieszka M
2016-01-01
The Lyman-$\alpha$ forest is a highly non-linear field with a lot of information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the $n$-th coefficient can be expressed as a linear combination of the first $n$ moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over the mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured, with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities.
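The projection step can be sketched as follows: on [-1, 1], the coefficient of P_n is c_n = (2n+1)/2 ∫ P_n(x) p(x) dx, which is indeed a linear combination of the moments of p up to order n. The toy PDF below is an assumption chosen so that the coefficients are known exactly.

```python
import numpy as np
from numpy.polynomial import legendre

# toy "flux PDF" supported on [-1, 1] (an assumed stand-in for the measured PDF)
def pdf(x):
    return 0.5 * (1.0 + x)   # integrates to 1 on [-1, 1]

# Gauss-Legendre quadrature nodes and weights (exact for this polynomial integrand)
x, w = legendre.leggauss(50)

coeffs = []
for n in range(4):
    c = np.zeros(n + 1)
    c[n] = 1.0                          # coefficient vector selecting P_n
    Pn = legendre.legval(x, c)
    cn = (2 * n + 1) / 2.0 * np.sum(w * Pn * pdf(x))
    coeffs.append(cn)

print(coeffs)  # a linear PDF is carried entirely by P_0 and P_1
```

For this linear toy density, c_0 = c_1 = 0.5 and every higher coefficient vanishes, illustrating the compression the abstract describes: a smooth PDF needs only a few coefficients.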
Cosmological constraints from the convergence 1-point probability distribution
Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus; Huff, Eric; Melchior, Peter; Ross, Ashley J.; Suchyta, Eric
2016-01-01
We examine the cosmological information available from the 1-point probability distribution (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the $\\Omega_m$-$\\sigma_8$ plane from the convergence PDF with $188\\ arcmin^2$ pixels compared to the cosmic shear power spectrum with an equivalent number of modes ($\\ell < 886$). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of $2-3$, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.
A probability distribution approach to synthetic turbulence time series
Sinhuber, Michael; Bodenschatz, Eberhard; Wilczek, Michael
2016-11-01
The statistical features of turbulence can be described in terms of multi-point probability density functions (PDFs). The complexity of these statistical objects increases rapidly with the number of points. This raises the question of how much information has to be incorporated into statistical models of turbulence to capture essential features such as inertial-range scaling and intermittency. Using high Reynolds number hot-wire data obtained at the Variable Density Turbulence Tunnel at the Max Planck Institute for Dynamics and Self-Organization, we establish a PDF-based approach on generating synthetic time series that reproduce those features. To do this, we measure three-point conditional PDFs from the experimental data and use an adaption-rejection method to draw random velocities from this distribution to produce synthetic time series. Analyzing these synthetic time series, we find that time series based on even low-dimensional conditional PDFs already capture some essential features of real turbulent flows.
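The accept-reject step of the approach can be sketched with an analytic toy density standing in for a measured conditional PDF; the Epanechnikov form and its bound are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy target density on [-1, 1], a stand-in for a measured conditional velocity PDF
def target(x):
    return 0.75 * (1.0 - x**2)   # Epanechnikov kernel, integrates to 1

M = 0.75  # upper bound of the target on [-1, 1]

def sample(n):
    """Accept-reject sampling with a uniform proposal on [-1, 1]."""
    out = []
    while len(out) < n:
        x = rng.uniform(-1.0, 1.0)           # proposal draw
        if rng.uniform(0.0, M) < target(x):  # accept with prob target(x)/M
            out.append(x)
    return np.array(out)

s = sample(100_000)
print(s.mean(), s.var())  # target has mean 0 and variance 1/5
```

In the paper's setting the target would be the empirically measured three-point conditional PDF rather than a closed-form kernel, but the accept-reject mechanics are the same.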
Probability distributions for one component equations with multiplicative noise
Deutsch, J M
1993-01-01
Systems described by equations involving both multiplicative and additive noise are common in nature. Examples include convection of a passive scalar field, polymers in turbulent flow, and noise in dye lasers. In this paper the one-component version of this problem is studied. The steady-state probability distribution is classified into two different types of behavior. One class has power-law tails and the other is of the form of an exponential to a power law. The value of the power-law exponent is determined analytically for models having colored Gaussian noise. It is found to depend only on the power spectrum of the noise at zero frequency. When non-Gaussian noise is considered it is shown that stretched exponential tails are possible. An intuitive understanding of the results is given, making use of the Lyapunov exponents for these systems.
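A discrete-time caricature of such systems is the Kesten-type recursion x_{t+1} = a_t x_t + b_t, which is stationary but develops power-law tails when E[ln a] < 0 while E[a^μ] = 1 for some μ > 0. The noise parameters below are assumptions chosen to make the tails pronounced, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

steps = 200_000
# multiplicative noise with E[ln a] = -0.1 < 0, so the process is stationary,
# but E[a^mu] = 1 at mu = 0.8, giving a very heavy power-law tail
a = np.exp(rng.normal(-0.1, 0.5, size=steps))
b = rng.normal(0.0, 1.0, size=steps)   # additive noise

x = np.empty(steps)
xt = 0.0
for t in range(steps):
    xt = a[t] * xt + b[t]
    x[t] = xt

# heavy tails show up as enormous excess kurtosis (a Gaussian has 0)
m = x.mean()
kurt = np.mean((x - m)**4) / np.var(x)**2 - 3.0
print(kurt)
```

The power-law exponent here is fixed by the statistics of the multiplicative noise alone, echoing the paper's result that only the noise spectrum at zero frequency matters for the tail.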
Gesture Recognition Based on the Probability Distribution of Arm Trajectories
Wan, Khairunizam; Sawada, Hideyuki
The use of human motions for interaction between humans and computers is becoming an attractive alternative to verbal media, especially through the visual interpretation of human body motion. In particular, hand gestures are used as non-verbal media for humans to communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition, based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories given by a signer, motion features are selected and classified by using a fuzzy technique. Experimental results show that the use of features extracted from arm trajectories works effectively for the recognition of dynamic human gestures, and gives good performance in classifying various gesture patterns.
Seismic pulse propagation with constant Q and stable probability distributions
M. Tomirotti
1997-06-01
The one-dimensional propagation of seismic waves with constant Q is shown to be governed by an evolution equation of fractional order in time, which interpolates the heat equation and the wave equation. The fundamental solutions for the Cauchy and Signalling problems are expressed in terms of entire functions (of Wright type) in the similarity variable and their behaviours turn out to be intermediate between those for the limiting cases of a perfectly viscous fluid and a perfectly elastic solid. In view of the small dissipation exhibited by the seismic pulses, the nearly elastic limit is considered. Furthermore, the fundamental solutions for the Cauchy and Signalling problems are shown to be related to stable probability distributions with an index of stability determined by the order of the fractional time derivative in the evolution equation.
Seismic pulse propagation with constant Q and stable probability distributions
Mainardi, Francesco
2010-01-01
The one-dimensional propagation of seismic waves with constant Q is shown to be governed by an evolution equation of fractional order in time, which interpolates the heat equation and the wave equation. The fundamental solutions for the Cauchy and Signalling problems are expressed in terms of entire functions (of Wright type) in the similarity variable and their behaviours turn out to be intermediate between those for the limiting cases of a perfectly viscous fluid and a perfectly elastic solid. In view of the small dissipation exhibited by the seismic pulses, the nearly elastic limit is considered. Furthermore, the fundamental solutions for the Cauchy and Signalling problems are shown to be related to stable probability distributions with index of stability determined by the order of the fractional time derivative in the evolution equation.
Anonymous
2009-01-01
[Usage] 1. He can probably tell us the truth. 2. Will it rain this afternoon? Probably. [Explanation] Used as an adverb meaning "probably, perhaps"; it expresses a high degree of likelihood, usually a positive inference or judgment based on the present situation.
Insights from probability distribution functions of intensity maps
Breysse, Patrick C; Behroozi, Peter S; Dai, Liang; Kamionkowski, Marc
2016-01-01
In the next few years, intensity-mapping surveys that target lines such as CO, Ly$\alpha$, and CII stand to provide powerful probes of high-redshift astrophysics. However, these line emissions are highly non-Gaussian, and so the typical power-spectrum methods used to study these maps will leave out a significant amount of information. We propose a new statistic, the probability distribution of voxel intensities, which can access this extra information. Using a model of a CO intensity map at $z\sim3$ as an example, we demonstrate that this voxel intensity distribution (VID) provides substantial constraining power beyond what is obtainable from the power spectrum alone. We find that a future survey similar to the planned COMAP Full experiment could constrain the CO luminosity function to order $\sim10\%$. We also explore the effects of contamination from continuum emission, interloper lines, and gravitational lensing on our constraints and find that the VID statistic retains significant constraining power even ...
Mixed Dark Matter from Axino Distribution
Bonometto, S A; Masiero, A
1994-01-01
We study the possibility of mixed dark matter obtained through the phase space distribution of a single particle. An example is offered in the context of SUSY models with a Peccei-Quinn symmetry. Axinos in the 100 keV range can naturally have both thermal and non-thermal components. The latter arises from the lightest neutralino decays and derelativizes at z ~ 10^4.
Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.
Ashby, Neil; Patla, Bijunath
2016-04-01
Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
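A toy calculation (not the paper's matrix method) illustrating why the Hadamard variance is insensitive to frequency drift: for fractional-frequency data consisting of white noise plus a linear drift, the Allan variance is built from first differences and picks up the drift, while the Hadamard variance is built from second differences, which cancel it. All parameter values are illustrative, with an exaggerated drift so its effect is visible.

```python
import math
import random

random.seed(42)

N, tau0 = 100_000, 1.0
sigma, drift = 1.0, 2.0   # white-FM noise level; exaggerated drift per sample
y = [random.gauss(0.0, sigma) + drift * k * tau0 for k in range(N)]

# Non-overlapping Allan variance at tau = tau0: mean squared first difference / 2.
avar = sum((y[k + 1] - y[k]) ** 2 for k in range(N - 1)) / (2 * (N - 1))

# Hadamard variance at tau = tau0: mean squared second difference / 6.
hvar = sum((y[k + 2] - 2 * y[k + 1] + y[k]) ** 2 for k in range(N - 2)) / (6 * (N - 2))

# For pure white FM noise both estimate sigma^2 = 1. The drift adds
# (drift*tau0)^2 / 2 = 2 to the Allan variance, but the second difference of a
# linear ramp is zero, so the Hadamard variance is unaffected.
print(avar, hvar)
```

Here `avar` comes out near 3.0 (noise plus drift) while `hvar` stays near 1.0 (noise only).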
Energy probability distribution zeros: A route to study phase transitions
Costa, B. V.; Mól, L. A. S.; Rocha, J. C. S.
2017-07-01
In the study of phase transitions very few models are accessible to exact solution. In most cases analytical simplifications have to be made, or numerical techniques have to be used, to gain insight into their critical properties. Numerically, the most common approaches are those based on Monte Carlo simulations together with finite-size scaling analysis. The use of Monte Carlo techniques requires the estimation of quantities like the specific heat or susceptibilities over a wide range of temperatures, or the construction of the density of states over large intervals of energy. Although many of these techniques are well developed, they may be very time consuming when the system size becomes large. It would therefore be desirable to have a method that overcomes these difficulties. In this work we present an iterative method to study the critical behavior of a system based on partial knowledge of the complex Fisher zeros set of the partition function. The method is general, with advantages over most conventional techniques, since it does not need to identify any order parameter a priori. The critical temperature and exponents can be obtained with great precision even in the most intractable cases, such as the two-dimensional XY model. To test the method and to show how it works we applied it to some selected models where the transitions are well known: the 2D Ising, Potts and XY models, and a homopolymer system. Our choices cover systems with first-order, continuous, and Berezinskii-Kosterlitz-Thouless transitions, as well as the homopolymer, which has two pseudo-transitions. The strategy can easily be adapted to any model, classical or quantum, once we are able to build the corresponding energy probability distribution.
Performance Probability Distributions for Sediment Control Best Management Practices
Ferrell, L.; Beighley, R.; Walsh, K.
2007-12-01
Controlling soil erosion and sediment transport can be a significant challenge during the construction process due to the extent and condition of bare, disturbed soils. Best Management Practices (BMPs) are used as the framework for the design of sediment discharge prevention systems in stormwater pollution prevention plans, which are typically required for construction sites. This research focuses on commonly used BMP systems for perimeter control of sediment export: silt fences and fiber rolls. Although these systems are widely used, the physical and engineering parameters describing their performance are not well understood. Performance expectations are based on manufacturer results, but due to the dynamic conditions that exist on a construction site, performance expectations are not always achievable in the field. Based on experimental results, product performance is shown to be highly variable. Experiments using the same installation procedures show inconsistent sediment removal performances ranging from greater than 85 percent to zero. The goal of this research is to improve the determination of off-site sediment yield based on probabilistic performance results of perimeter control BMPs. BMPs are evaluated in the Soil Erosion Research Laboratory (SERL) in the Civil and Environmental Engineering department at San Diego State University. SERL experiments are performed on a 3-m by 10-m tilting soil bed with a soil depth of 0.5 meters and a slope of 33 percent. The simulated storm event consists of 17 mm/hr for 20 minutes followed by 51 mm/hr for 30 minutes. The storm event is based on an ASTM design storm intended to simulate BMP failures. BMP performance is assessed based on experiments where BMPs are installed per manufacturer specifications, with less-than-optimal installations, and under no-treatment conditions. Preliminary results from 30 experiments are presented and used to develop probability distributions for BMP sediment removal efficiencies. The results are then combined with ...
The Probability Distribution of Inter-car Spacings
Xian, Jin Guo; Han, Dong
In this paper, a cellular automaton model with a Fukui-Ishibashi-type acceleration rule is used to study the inter-car spacing distribution for traffic flow. A method used in complex network analysis is applied to study the spacing distribution. By theoretical analysis, we obtain the result that the distribution of inter-car spacings follows a power law when the vehicle density is low and the spacing is not large, while, when the vehicle density is high or the spacing is large, the distribution can be described by an exponential distribution. Moreover, numerical simulations support the theoretical result.
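A minimal sketch of a Fukui-Ishibashi-type cellular automaton on a ring (ring length, car count, maximum speed, and step count are illustrative): every car advances by min(gap, v_max) in parallel, after which the inter-car spacing distribution can be tallied.

```python
import random
from collections import Counter

random.seed(0)

L, N, VMAX, STEPS = 1000, 100, 5, 2000   # ring length, cars, max speed, steps
cars = sorted(random.sample(range(L), N))  # distinct positions on the ring

def step(cars):
    # Gap in empty cells ahead of each car (wrapping around the ring).
    gaps = [(cars[(i + 1) % N] - cars[i] - 1) % L for i in range(N)]
    # Fukui-Ishibashi rule: every car moves min(gap, VMAX) simultaneously,
    # so no car can overtake or collide with the car ahead.
    return sorted((c + min(g, VMAX)) % L for c, g in zip(cars, gaps))

for _ in range(STEPS):
    cars = step(cars)

gaps = [(cars[(i + 1) % N] - cars[i] - 1) % L for i in range(N)]
spacing_hist = Counter(gaps)
# On a ring the gaps always sum to L - N, whatever the dynamics do.
print(sum(gaps), L - N)
```

Plotting `spacing_hist` for low versus high density would show the power-law versus exponential regimes discussed in the abstract.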
Conant, Darcy Lynn
2013-01-01
Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…
Measuring Robustness of Timetables in Stations using a Probability Distribution
Jensen, Lars Wittrup; Landex, Alex
Currently, three methods to calculate the complexity of a station exist: 1. Complexity of a station based on the track layout 2. Complexity of a station based on the probability of a conflict using a plan of operation 3. Complexity of a station based on the plan of operation and the minimum headway times. However, none of the above methods take a given timetable into account when the complexity of the station is calculated. E.g., given two timetable candidates following the same plan of operation in a station, one will be more vulnerable to delays (less robust) while the other will be less vulnerable (more robust), but this cannot be measured by the above methods. In the light of this, the article describes a new method where the complexity of a given station with a given timetable can be calculated based on a probability distribution. This makes it possible to reduce delays caused by interdependencies and results in a more robust operation.
Baldi, Marco; Maccio, Andrea V
2011-01-01
We investigate the effects of a coupled Dark Energy (cDE) scalar field on the alignment between satellites and matter distributions in galaxy clusters. Using high-resolution N-body simulations for LCDM and cDE cosmological models, we compute the probability density distribution for the alignment angle between the satellite galaxies and the underlying matter distributions, finding a difference between the two scenarios. With respect to LCDM, in cDE cosmologies the satellite galaxies are less preferentially located along the major axis of the matter distribution, possibly reducing the tension with observational data. A physical explanation is that the coupling between dark matter and dark energy acts as an additional tidal force on the satellite galaxies, diminishing the alignment between their distribution and that of the matter. Through a likelihood ratio test based on the generalized chi-square statistic, the null hypothesis that the two probability distributions come from the same parent population is rejected at...
Interacting discrete Markov processes with power-law probability distributions
Ridley, Kevin D.; Jakeman, Eric
2017-09-01
During recent years there has been growing interest in the occurrence of long-tailed distributions, also known as heavy-tailed or fat-tailed distributions, which can exhibit power-law behaviour and often characterise physical systems that undergo very large fluctuations. In this paper we show that the interaction between two discrete Markov processes naturally generates a time-series characterised by such a distribution. This possibility is first demonstrated by numerical simulation and then confirmed by a mathematical analysis that enables the parameter range over which the power-law occurs to be quantified. The results are supported by comparison of numerical results with theoretical predictions and general conclusions are drawn regarding mechanisms that can cause this behaviour.
Martingale Couplings and Bounds on the Tails of Probability Distributions
Luh, Kyle J
2011-01-01
Hoeffding has shown that tail bounds on the distribution for sampling from a finite population with replacement also apply to the corresponding cases of sampling without replacement. (A special case of this result is that binomial tail bounds apply to the corresponding hypergeometric tails.) We give a new proof of Hoeffding's result by constructing a martingale coupling between the sampling distributions. This construction is given by an explicit combinatorial procedure involving balls and urns. We then apply this construction to create martingale couplings between other pairs of sampling distributions, both without replacement and with "surreplacement" (that is, sampling in which not only is the sampled individual replaced, but some number of "copies" of that individual are added to the population).
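Hoeffding's result can be checked numerically in small cases with exact stdlib combinatorics (a sketch, not the paper's coupling construction): sampling without replacement is dominated in convex order by sampling with replacement, so the hypergeometric moment generating function never exceeds the binomial one, and every Chernoff-style tail bound carries over.

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def hyper_pmf(k, N, K, n):
    # k successes when drawing n without replacement from N items, K successes.
    if k < max(0, n - (N - K)) or k > min(n, K):
        return 0.0
    return math.comb(K, k) * math.comb(N - K, n - k) / math.comb(N, n)

N, K, n = 50, 20, 10          # population, successes in population, draws
p = K / N

for t in (0.25, 0.5, 1.0, 2.0):
    mgf_binom = sum(binom_pmf(k, n, p) * math.exp(t * k) for k in range(n + 1))
    mgf_hyper = sum(hyper_pmf(k, N, K, n) * math.exp(t * k) for k in range(n + 1))
    # Convex order: E[f(X_without)] <= E[f(X_with)] for convex f, here f(x) = e^{tx}.
    assert mgf_hyper <= mgf_binom + 1e-12
    print(t, mgf_hyper, mgf_binom)
```

Since the Chernoff bound on P(X >= a) is inf_t e^{-ta} E[e^{tX}], the MGF ordering immediately transfers binomial tail bounds to the hypergeometric case.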
Subjective Probability Distribution Elicitation in Cost Risk Analysis: A Review
2007-01-01
Sampling Random Bioinformatics Puzzles using Adaptive Probability Distributions
Have, Christian Theil; Appel, Emil Vincent; Bork-Jensen, Jette
2016-01-01
We present a probabilistic logic program to generate an educational puzzle that introduces the basic principles of next-generation sequencing, gene finding and the translation of genes to proteins following the central dogma in biology. In the puzzle, a secret "protein word" must be found by assembling DNA from fragments (reads), locating a gene in this sequence and translating the gene to a protein. Sampling using this program generates random instances of the puzzle, but it is possible to constrain the difficulty and to customize the secret protein word. Because of these constraints and the randomness of the generation process, sampling may fail to generate a satisfactory puzzle. To avoid failure we employ a strategy using adaptive probabilities which change in response to previous steps of the generative process, thus minimizing the risk of failure.
Looking into Analytical Approximations for Three-flavor Neutrino Oscillation Probabilities in Matter
Li, Yu-Feng; Zhou, Shun; Zhu, Jing-yu
2016-01-01
Motivated by tremendous progress in neutrino oscillation experiments, we derive a new set of simple and compact formulas for three-flavor neutrino oscillation probabilities in matter of a constant density. A useful definition of the $\eta$-gauge neutrino mass-squared difference $\Delta^{}_* \equiv \eta \Delta^{}_{31} + (1-\eta) \Delta^{}_{32}$ is introduced, where $\Delta^{}_{ji} \equiv m^2_j - m^2_i$ for $ji = 21, 31, 32$ are the ordinary neutrino mass-squared differences and $0 \leq \eta \leq 1$ is a real and positive parameter. Expanding neutrino oscillation probabilities in terms of $\alpha \equiv \Delta^{}_{21}/\Delta^{}_*$, we demonstrate that the analytical formulas can be remarkably simplified for $\eta = \cos^2 \theta^{}_{12}$, with $\theta_{12}^{}$ being the solar mixing angle. As a by-product, the mapping from neutrino oscillation parameters in vacuum to their counterparts in matter is obtained at the order of ${\cal O}(\alpha^2)$. Finally, we show that our approximate formulas are not only valid f...
Looking into analytical approximations for three-flavor neutrino oscillation probabilities in matter
Li, Yu-Feng; Zhang, Jue; Zhou, Shun; Zhu, Jing-yu
2016-12-01
Motivated by tremendous progress in neutrino oscillation experiments, we derive a new set of simple and compact formulas for three-flavor neutrino oscillation probabilities in matter of a constant density. A useful definition of the $\eta$-gauge neutrino mass-squared difference $\Delta_* \equiv \eta \Delta_{31} + (1-\eta) \Delta_{32}$ is introduced, where $\Delta_{ji} \equiv m_j^2 - m_i^2$ for $ji = 21, 31, 32$ are the ordinary neutrino mass-squared differences and $0 \leq \eta \leq 1$ is a real and positive parameter. Expanding neutrino oscillation probabilities in terms of $\alpha \equiv \Delta_{21}/\Delta_*$, we demonstrate that the analytical formulas can be remarkably simplified for $\eta = \cos^2 \theta_{12}$, with $\theta_{12}$ being the solar mixing angle. As a by-product, the mapping from neutrino oscillation parameters in vacuum to their counterparts in matter is obtained at the order of ${\cal O}(\alpha^2)$. Finally, we show that our approximate formulas are not only valid for an arbitrary neutrino energy and any baseline length, but also maintain a high level of accuracy.
Extreme Points of the Convex Set of Joint Probability Distributions with Fixed Marginals
K R Parthasarathy
2007-11-01
By using a quantum probabilistic approach we obtain a description of the extreme points of the convex set of all joint probability distributions on the product of two standard Borel spaces with fixed marginal distributions.
The Probability for matter-Antimatter Segregation Following the Quark-Hadron Transition
Garfinkle, Moishe
2010-01-01
Cosmologists such as Sakharov, Alfvén, Klein, Weizsäcker, Gamow and Harrison all disregarded the distribution of baryons and antibaryons immediately prior to freeze-out in trying to elucidate the circumstances that explained hadron distribution in the early universe. They simply accepted a uniform distribution: each baryon paired with an antibaryon. Their acceptance of this assumption resulted in theoretical difficulties that could not be overcome. This essay discards the assumption of homogeneity or uniformity. Although this essay does deal with early-universe matters, it is not meant to indicate any involvement in energy distribution functions or in any symmetry-asymmetry controversies. Cluster formation is strictly geometric. This essay has value with regard to the problems early cosmologists faced, and should also complete the historical record.
Dark matter halo merger and accretion probabilities in the excursion set formalism
Alizadeh, Esfandiar
2008-01-01
The merger and accretion probabilities of dark matter halos have so far only been calculated for an infinitesimal time interval. This means that a Monte-Carlo simulation with very small time steps is necessary to find the merger history of a parent halo. In this paper we use the random walk formalism to find the merger and accretion probabilities of halos for a finite time interval. Specifically, we find the number density of halos at an early redshift that will become part of a halo with a specified final mass at a later redshift, given that they underwent $n$ major mergers, $n=0,1,2,...$ . We reduce the problem into an integral equation which we then solve numerically. To ensure the consistency of our formalism we compare the results with Monte-Carlo simulations and find very good agreement. Though we have done our calculation assuming a flat barrier, the more general case can easily be handled using our method. This derivation of finite time merger and accretion probabilities can be used to make more effic...
Probability distribution analysis of observational extreme events and model evaluation
Yu, Q.; Lau, A. K. H.; Fung, J. C. H.; Tsang, K. T.
2016-12-01
Earth's surface temperatures in 2015 were the warmest since modern record-keeping began in 1880, according to the latest study. In contrast, cold weather occurred in many regions of China in January 2016 and brought the first snowfall to Guangzhou, the capital city of Guangdong province, in 67 years. To understand the changes in extreme weather events as well as to project their future scenarios, this study uses statistical models to analyze multiple climate datasets. We first use the Granger-causality test to identify the attribution of the global mean temperature rise and extreme temperature events to CO2 concentration. The four statistical moments (mean, variance, skewness, kurtosis) of the daily maximum temperature distribution are investigated on global climate observational and reanalysis data (1961-2010) and model data (1961-2100). Furthermore, we introduce a new tail index based on the four moments, which is a more robust index for measuring extreme temperatures. Our results show that the CO2 concentration can provide information about the time series of mean and extreme temperature, but not vice versa. Based on our new tail index, we find that, besides the mean and variance, skewness is an important indicator that should be considered when estimating extreme temperature changes and evaluating models. Among the 12 climate model datasets we investigate, the fourth version of the Community Climate System Model (CCSM4) from the National Center for Atmospheric Research performs well on the new index, which indicates that the model has a substantial capability to project the future changes of extreme temperature in the 21st century. The method also shows its ability to measure extreme precipitation/drought events. In the future we will introduce a new diagram to systematically evaluate the performance of the four statistical moments in climate model output; moreover, the human and economic impacts of extreme weather events will also be assessed.
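The four statistical moments used above can be computed directly from a sample; a generic sketch on synthetic "daily maximum temperature" data (the paper's specific tail index is not given in the abstract, so only the moments themselves are shown):

```python
import math
import random

def four_moments(xs):
    n = len(xs)
    mean = sum(xs) / n
    cm = lambda p: sum((x - mean) ** p for x in xs) / n   # central moments
    var = cm(2)
    skew = cm(3) / var ** 1.5          # standardized third moment
    kurt = cm(4) / var ** 2            # standardized fourth moment (Gaussian: 3)
    return mean, var, skew, kurt

random.seed(3)
sample = [random.gauss(20.0, 5.0) for _ in range(50_000)]  # synthetic daily max temps
mean, var, skew, kurt = four_moments(sample)
# A Gaussian sample should give skewness ~0 and kurtosis ~3; positive skewness
# or excess kurtosis in real temperature data signals a heavier warm tail.
print(mean, var, skew, kurt)
```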
Batch Mode Active Sampling based on Marginal Probability Distribution Matching.
Chattopadhyay, Rita; Wang, Zheng; Fan, Wei; Davidson, Ian; Panchanathan, Sethuraman; Ye, Jieping
2012-01-01
Active learning is a machine learning and data mining technique that selects the most informative samples for labeling and uses them as training data; it is especially useful when there are large amounts of unlabeled data and labeling them is expensive. Recently, batch-mode active learning, where a set of samples is selected concurrently for labeling based on their collective merit, has attracted a lot of attention. The objective of batch-mode active learning is to select a set of informative samples so that a classifier learned on these samples has good generalization performance on the unlabeled data. Most of the existing batch-mode active learning methodologies try to achieve this by selecting samples based on varied criteria. In this paper we propose a novel criterion which achieves good generalization performance of a classifier by specifically selecting a set of query samples that minimizes the difference in distribution between the labeled and the unlabeled data, after annotation. We explicitly measure this difference based on all candidate subsets of the unlabeled data and select the best subset. The proposed objective is an NP-hard integer programming optimization problem. We provide two optimization techniques to solve this problem. In the first one, the problem is transformed into a convex quadratic programming problem, and in the second method the problem is transformed into a linear programming problem. Our empirical studies using publicly available UCI datasets and a biomedical image dataset demonstrate the effectiveness of the proposed approach in comparison with the state-of-the-art batch-mode active learning methods. We also present two extensions of the proposed approach, which incorporate uncertainty of the predicted labels of the unlabeled data and transfer learning in the proposed formulation. Our empirical studies on UCI datasets show that incorporation of uncertainty information improves performance at later iterations while our studies on 20
Li, Yu-Feng
2015-01-01
In the presence of both direct and indirect unitarity violation in the lepton mixing matrix, we derive a complete set of series expansion formulas for neutrino oscillation probabilities in matter of constant density. Expansions in the mass hierarchy parameter $\alpha \equiv \Delta m_{21}^{2} / \Delta m_{31}^{2}$ and the unitarity violation parameters $s^{2}_{ij}$ (for i = 1, 2, 3 and j = 4, 5, 6) up to the first order are studied in this paper. We analyse the accuracy of the analytical series expansion formulas in different regions of L/E. A detailed numerical analysis is also performed, in which the different effects of the direct and the indirect unitarity violation are particularly emphasized. We also study the summed ...
Calisto, H.; Bologna, M.
2007-05-01
We report an exact result for the calculation of the probability distribution of the Bernoulli-Malthus-Verhulst model driven by multiplicative colored noise. We study the conditions under which the probability distribution of the Malthus-Verhulst model can exhibit a transition from a unimodal to a bimodal distribution, depending on the value of a critical parameter. We also show that the mean value of x(t) in the latter model always asymptotically approaches the value 1.
Thomas, Drew M
2013-01-01
A dust grain in a plasma has a fluctuating electric charge, and past work concludes that spherical grains in a stationary, collisionless plasma have an essentially Gaussian charge probability distribution. This paper extends that work to flowing plasmas and arbitrarily large spheres, deriving analytic charge probability distributions up to normalizing constants. We find that these distributions also have good Gaussian approximations, with analytic expressions for their mean and variance.
The probability distribution of the predicted CFM-induced ozone depletion. [Chlorofluoromethane
Ehhalt, D. H.; Chang, J. S.; Bulter, D. M.
1979-01-01
It is argued from the central limit theorem that the uncertainty in model-predicted changes of the ozone column density is best represented by a normal probability density distribution. This conclusion is validated by comparison with a probability distribution generated by a Monte Carlo technique. In the case of the CFM-induced ozone depletion, and based on the estimated uncertainties in the reaction rate coefficients alone, the relative mean standard deviation of this normal distribution is estimated to be 0.29.
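The central-limit argument can be illustrated with a toy Monte Carlo (the three rate coefficients and their uncertainties below are hypothetical, not the paper's chemistry): if a predicted quantity is a product of rate coefficients with lognormal uncertainties, its logarithm is a sum of independent normals, so the spread obtained from error propagation matches the Monte Carlo spread and the resulting distribution is normal in the log.

```python
import math
import random

random.seed(11)

# Relative (1-sigma) uncertainties of three hypothetical rate coefficients.
rel_sigmas = [0.15, 0.20, 0.10]
# Error propagation for a pure product y = k1*k2*k3: variances of ln(ki) add.
sigma_propagated = math.sqrt(sum(s**2 for s in rel_sigmas))

log_y = []
for _ in range(100_000):
    # Each ln(ki) ~ Normal(0, si); ln(y) is their sum, hence normal as well.
    log_y.append(sum(random.gauss(0.0, s) for s in rel_sigmas))

mean = sum(log_y) / len(log_y)
sigma_mc = math.sqrt(sum((v - mean) ** 2 for v in log_y) / len(log_y))
print(sigma_propagated, sigma_mc)
```

With many independent factors the central limit theorem makes the normal form robust even when the individual uncertainties are not exactly lognormal.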
Constructing the probability distribution function for the total capacity of a power system
Vasin, V.P.; Prokhorenko, V.I.
1980-01-01
The difficulties involved in constructing the probability distribution function for the total capacity of a power system consisting of numerous power plants are discussed. A method is considered for the approximate determination of such a function by a Monte Carlo method and by exact calculation based on special recursion formulas on a particular grid of argument values. It is shown that there may be significant deviations between the true probability distribution and a normal distribution.
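The exact recursion described above can be sketched as a standard capacity-outage-table convolution (the plant capacities and availabilities below are illustrative): starting from an empty system with certain capacity 0, each plant of capacity c and availability p is folded in, after which the distribution of total available capacity is known on a grid of capacity values.

```python
from collections import defaultdict

# (capacity in MW, availability probability) for each hypothetical plant.
plants = [(100, 0.95), (150, 0.90), (200, 0.85), (50, 0.98)]

# dist maps total available capacity -> probability; start from the empty system.
dist = {0: 1.0}
for cap, p in plants:
    new = defaultdict(float)
    for total, prob in dist.items():
        new[total + cap] += prob * p        # plant available
        new[total] += prob * (1.0 - p)      # plant on outage
    dist = dict(new)

total_prob = sum(dist.values())
mean_cap = sum(c * pr for c, pr in dist.items())
expected_mean = sum(cap * p for cap, p in plants)
print(total_prob, mean_cap, expected_mean)
```

For systems with very many plants this grid grows, which is why the paper also considers Monte Carlo sampling; and because the distribution is a sum of independent (but unequal) terms, a normal approximation may deviate significantly from the exact result, as the abstract notes.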
Elements of probability theory
Rumshiskii, L Z
1965-01-01
Elements of Probability Theory presents the methods of the theory of probability. The book is divided into seven chapters that discuss the general rule for the multiplication of probabilities, the fundamental properties of the subject matter, and the classical definition of probability. The introductory chapters deal with the functions of random variables; continuous random variables; numerical characteristics of probability distributions; the center of the probability distribution of a random variable; the definition of the law of large numbers; the stability of the sample mean; and the method of moments.
The effect of neutrinos on the matter distribution as probed by the Intergalactic Medium
Viel, Matteo; Springel, Volker
2010-01-01
We present a suite of full hydrodynamical cosmological simulations that quantitatively address the impact of neutrinos on the (mildly non-linear) spatial distribution of matter, and in particular on the neutral hydrogen distribution in the Intergalactic Medium (IGM), which is responsible for the intervening Lyman-alpha absorption in quasar spectra. The free-streaming of neutrinos results in a (non-linear) scale-dependent suppression of the power spectrum of the total matter distribution at scales probed by Lyman-alpha forest data, which is larger than the linear theory prediction by about 25% and strongly redshift dependent. By extracting a set of realistic mock quasar spectra, we quantify the effect of neutrinos on the flux probability distribution function and the flux power spectrum. The differences in the matter power spectra translate into a ~2.5% (5%) difference in the flux power spectrum for neutrino masses with $\Sigma m$ ...
A New Probability of Detection Model for Updating Crack Distribution of Offshore Structures
李典庆; 张圣坤; 唐文勇
2003-01-01
Model uncertainty exists in the probability of detection (POD) for inspecting ship structures with nondestructive inspection techniques. Based on a comparison of several existing POD models, a new POD model is proposed for the updating of crack size distribution. Furthermore, theoretical derivation shows that most existing POD models are special cases of the new model. The least squares method is adopted for determining the values of the parameters in the new POD model. The new model is also compared with other existing POD models, and the results indicate that it fits the inspection data better. The new POD model is then applied to the problem of crack size updating for offshore structures. The Bayesian updating method is used to analyze the effect of POD models on the posterior distribution of crack size. The results show that different POD models generate different posterior distributions of crack size for offshore structures.
Some possible q-exponential type probability distribution in the non-extensive statistical physics
Chung, Won Sang
2016-08-01
In this paper, we present two exponential-type probability distributions which are different from Tsallis's case, which we call Type I: one given by $p_i = \frac{1}{Z_q}[e_q(E_i)]^{-\beta}$ (Type IIA) and another given by $p_i = \frac{1}{Z_q}[e_q(-\beta)]^{E_i}$ (Type IIIA). Starting with the Boltzmann-Gibbs entropy, we obtain the different probability distributions by using the Kolmogorov-Nagumo average for the microstate energies. We present the first-order differential equations related to Types I, II and III. For the three types of probability distributions, we discuss the quantum harmonic oscillator, the two-level problem and the spin-1/2 paramagnet.
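The q-exponential entering these distributions has the standard Tsallis form $e_q(x) = [1 + (1-q)x]^{1/(1-q)}$, which reduces to $\exp(x)$ as $q \to 1$; a small numerical check (the specific q, beta, and energy values are illustrative):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential e_q(x); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    # Standard cutoff convention: e_q(x) = 0 when the base goes non-positive.
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# q -> 1 recovers the ordinary exponential.
for x in (-1.0, 0.5, 2.0):
    assert abs(q_exp(x, 1.000001) - math.exp(x)) < 1e-4 * math.exp(x)

# A Type-IIA-style weight p_i proportional to [e_q(E_i)]^(-beta)
# (normalization Z_q omitted; q and beta illustrative).
weights = [q_exp(E, 1.2) ** -0.5 for E in (0.0, 1.0, 2.0)]
print(weights)
```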
Probability distributions for directed polymers in random media with correlated noise
Chu, Sherry; Kardar, Mehran
2016-07-01
The probability distribution for the free energy of directed polymers in random media (DPRM) with uncorrelated noise in d =1 +1 dimensions satisfies the Tracy-Widom distribution. We inquire if and how this universal distribution is modified in the presence of spatially correlated noise. The width of the distribution scales as the DPRM length to an exponent β , in good (but not full) agreement with previous renormalization group and numerical results. The scaled probability is well described by the Tracy-Widom form for uncorrelated noise, but becomes symmetric with increasing correlation exponent. We thus find a class of distributions that continuously interpolates between Tracy-Widom and Gaussian forms.
Collective motions of globally coupled oscillators and some probability distributions on circle
Jaćimović, Vladimir [Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put, bb., 81000 Podgorica (Montenegro); Crnkić, Aladin, E-mail: aladin.crnkic@hotmail.com [Faculty of Technical Engineering, University of Bihać, Ljubijankićeva, bb., 77000 Bihać, Bosnia and Herzegovina (Bosnia and Herzegovina)
2017-06-28
In 2010, Kato and Jones described a new family of probability distributions on the circle, obtained as Möbius transformations of the von Mises distribution. We present a model demonstrating that these distributions appear naturally in the study of populations of coupled oscillators. We use this opportunity to point out certain relations between Directional Statistics and the collective motion of coupled oscillators. - Highlights: • We specify probability distributions on the circle that arise in the Kuramoto model. • We study how the mean-field coupling affects the shape of the distribution of phases. • We discuss potential applications in some experiments on the cell cycle. • We apply Directional Statistics to study the collective dynamics of coupled oscillators.
Seasonal distribution of organic matter in mangrove environment of Goa
Jagtap, T.G.
Water and sediments were studied for the distribution of suspended matter, organic carbon and nitrogen. Suspended matter ranged from 3-373 mg.l-1, while particulate organic carbon (POC) ranged from 0.03-9.94 mg.l-1. POC values showed significant correlation...
A measure of mutual divergence among a number of probability distributions
J. N. Kapur
1987-01-01
major inequalities due to Shannon, Rényi and Hölder. The inequalities are then used to obtain some useful results in information theory. In particular, measures are obtained for the mutual divergence among two or more probability distributions.
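One concrete instance of a mutual divergence among several distributions (an illustrative choice, not necessarily Kapur's own measure) is the generalized Jensen-Shannon divergence: the entropy of the average distribution minus the average of the entropies. A sketch:

```python
import math

def shannon_entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0.0)

def mutual_divergence(dists):
    """Generalized Jensen-Shannon divergence: H(mixture) - mean of H(p_i).
    Zero iff all distributions coincide; bounded above by log(len(dists))."""
    n = len(dists)
    mixture = [sum(col) / n for col in zip(*dists)]
    return shannon_entropy(mixture) - sum(shannon_entropy(p) for p in dists) / n

p1 = [0.5, 0.3, 0.2]
p2 = [0.2, 0.5, 0.3]
p3 = [0.3, 0.2, 0.5]
d = mutual_divergence([p1, p2, p3])
```

By concavity of the entropy, the measure is nonnegative, which is the kind of inequality (Shannon-type) the abstract refers to.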
The probability distribution of fatigue damage and the statistical moment of fatigue life
熊峻江; 高镇同
1997-01-01
The randomization of the deterministic fatigue damage equation leads to a stochastic differential equation and a Fokker-Planck equation affected by random fluctuation. By means of the solution of the equation, the probability distribution of fatigue damage as it changes with time is obtained. The statistical moment of fatigue life, in consideration of stationary random fluctuation, is then derived. Finally, the damage probability distributions during fatigue crack initiation and fatigue crack growth are given.
Schjær-Jacobsen, Hans
2012-01-01
to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for the representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known, the resulting...... uncertainty can be calculated. The possibility approach is particularly well suited for the representation of uncertainty of a non-statistical nature due to lack of knowledge, and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects...... by probability distributions is readily done by means of Monte Carlo simulation. Calculation of non-monotonic functions of possibility distributions is done within the theoretical framework of fuzzy intervals, but straightforward application of fuzzy arithmetic in general results in overestimation of interval...
Frank, Steven A
2010-01-01
Commonly observed patterns typically follow a few distinct families of probability distributions. Over one hundred years ago, Karl Pearson provided a systematic derivation and classification of the common continuous distributions. His approach was phenomenological: a differential equation that generated common distributions without any underlying conceptual basis for why common distributions have particular forms and what explains the familial relations. Pearson's system and its descendants remain the most popular systematic classification of probability distributions. Here, we unify the disparate forms of common distributions into a single system based on two meaningful and justifiable propositions. First, distributions follow maximum entropy subject to constraints, where maximum entropy is equivalent to minimum information. Second, different problems associate magnitude to information in different ways, an association we describe in terms of the relation between information invariance and measurement scale....
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed to be normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
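A single-integral form of this kind writes the probability of slip as P(required > available) = ∫ f_req(x) F_avail(x) dx, evaluated by the trapezoidal rule. A sketch that, for checking purposes only, takes both frictions to be normal, so the result can be compared against the closed form Φ((μ_req - μ_avail)/√(σ_req² + σ_avail²)); the friction parameters are illustrative:

```python
import math

def npdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def ncdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def p_slip(mu_req, sd_req, mu_avail, sd_avail, n=8001):
    """P(slip) = P(required > available) = integral of f_req(x) * F_avail(x) dx,
    computed with the composite trapezoidal rule over the required-friction support."""
    lo, hi = mu_req - 8.0 * sd_req, mu_req + 8.0 * sd_req
    h = (hi - lo) / (n - 1)
    ys = [npdf(lo + i * h, mu_req, sd_req) * ncdf(lo + i * h, mu_avail, sd_avail)
          for i in range(n)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

p = p_slip(0.30, 0.05, 0.50, 0.08)
exact = ncdf(0.30, 0.50, math.sqrt(0.05 ** 2 + 0.08 ** 2))  # normal-normal closed form
```

The integral form accepts any pair of densities (e.g., a lognormal required friction), which is the point of the reformulation; the normal case is used here only because it admits a closed form for validation.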
WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS
Mr. Vladimir A. Smagin
2016-12-01
Full Text Available The Wiener – Hopf solver with smooth probability distributions of its component is presented. The method is based on hyper delta approximations of initial distributions. The use of Fourier series transformation and characteristic function allows working with the random variable method concentrated in transversal axis of absc.
Matias-Peralta, Hazel Monica; Ghodsi, Alireza; Shitan, Mahendran; Yusoff, Fatimah Md.
Copepods are the most abundant microcrustaceans in marine waters and are the major food resource for many commercial fish species. In addition, changes in the distribution and population composition of copepods may also serve as an indicator of global climate changes. Therefore, it is important to model the copepod distribution in different ecosystems. Copepod samples were collected from three different ecosystems (seagrass area, cage aquaculture area and coastal waters off a shrimp aquaculture farm) along the coastal waters of the Malacca Straits over a one-year period. In this study the major statistical analysis consisted of fitting different probability models. This paper highlights the fitting of probability distributions and discusses the adequacy of the fitted models. These fitted models enable one to make probability statements about the distribution of copepods in the three different ecosystems.
A Class of Chaotic Sequences with Gauss Probability Distribution for Radar Mask Jamming
Ni-Ni Rao; Yu-Chuan Huang; Bin Liu
2007-01-01
A simple generation approach for chaotic sequences with Gauss probability distribution is proposed. Theoretical analysis and simulation based on the Logistic chaotic model show that the approach is feasible and effective. The distribution characteristics of the novel chaotic sequence are comparable to those of the standard normal distribution. Its mean and variance can be changed to the desired values. The novel sequences also have good randomness. The applications for radar mask jamming are analyzed.
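A classical construction of this kind (illustrative; not necessarily the authors' exact scheme) turns a Logistic-map orbit into a Gaussian-distributed sequence: the invariant density of x → 4x(1-x) maps to the uniform distribution via u = (2/π) arcsin √x, and the Gaussian follows from the inverse normal CDF:

```python
import math
from statistics import NormalDist

def logistic_gaussian_sequence(n, x0=0.1234):
    """Chaotic sequence with approximately standard normal marginals:
    logistic orbit -> arcsine-to-uniform transform -> inverse normal CDF."""
    nd = NormalDist()          # standard normal, mean 0 / sd 1
    x, out = x0, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        u = (2.0 / math.pi) * math.asin(math.sqrt(x))
        out.append(nd.inv_cdf(min(max(u, 1e-12), 1.0 - 1e-12)))
    return out

zs = logistic_gaussian_sequence(20000)
m = sum(zs) / len(zs)
v = sum((z - m) ** 2 for z in zs) / len(zs)
```

The mean and variance can be set to desired values by the affine map σz + μ, matching the tunability noted in the abstract.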
Marshman, Emily; Singh, Chandralekha
2017-03-01
A solid grasp of the probability distributions for measuring physical observables is central to connecting the quantum formalism to measurements. However, students often struggle with the probability distributions of measurement outcomes for an observable and have difficulty expressing this concept in different representations. Here we first describe the difficulties that upper-level undergraduate and PhD students have with the probability distributions for measuring physical observables in quantum mechanics. We then discuss how student difficulties found in written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of the probability distributions of measurement outcomes for physical observables. The QuILT strives to help students become proficient in expressing the probability distributions for the measurement of physical observables in Dirac notation and in the position representation and be able to convert from Dirac notation to position representation and vice versa. We describe the development and evaluation of the QuILT and findings about the effectiveness of the QuILT from in-class evaluations.
Score distributions of gapped multiple sequence alignments down to the low-probability tail
Fieth, Pascal; Hartmann, Alexander K.
2016-08-01
Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via the knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in the case of finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, which eliminates the sequence-length dependence, the distributions for multiple alignments differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.
Probability collectives: a distributed multi-agent system approach for optimization
Kulkarni, Anand Jayant; Abraham, Ajith
2015-01-01
This book provides an emerging computational intelligence tool in the framework of collective intelligence for modeling and controlling distributed multi-agent systems, referred to as Probability Collectives. In the modified Probability Collectives methodology a number of constraint-handling techniques are incorporated, which also reduce the computational complexity and improve the convergence and efficiency. Numerous examples and real-world problems are used for illustration, which may also allow the reader to gain further insight into the associated concepts.
The Exit Distribution for Smart Kinetic Walk with Symmetric and Asymmetric Transition Probability
Dai, Yan
2017-03-01
It has been proved that the distribution of the point where the smart kinetic walk (SKW) exits a domain converges in distribution to harmonic measure on the hexagonal lattice. For other lattices, it is believed that this result still holds, and there is good numerical evidence to support this conjecture. Here we examine the effect of the symmetry and asymmetry of the transition probability on each step of the SKW on the square lattice and test whether the exit distribution converges in distribution to harmonic measure as well. From our simulations, the limiting exit distribution of the SKW with a non-uniform but symmetric transition probability as the lattice spacing goes to zero is the harmonic measure. This result does not hold for asymmetric transition probability. We are also interested in the difference between the exit distribution of the SKW with symmetric transition probability and harmonic measure. Our simulations provide strong support for an explicit conjecture about this first-order difference. The explicit formula for the conjecture will be given below.
Evaluation of probability distributions for concentration fluctuations in a building array
Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.
2017-10-01
The wide range of values observed in a measured concentration time series after the release of a dispersing airborne pollutant from a point source in the atmospheric boundary layer, and the hazard level associated with the peak values, demonstrate the necessity of predicting the concentration probability distribution. For this, statistical models describing the probability of occurrence are preferably employed. In this paper a concentration database pertaining to a field experiment of dispersion in an urban-like area (MUST experiment) from a continuously emitting source is used for the selection of the best performing statistical model between the Gamma and the Beta distributions. The skewness, the kurtosis, as well as the inverses of the cumulative distribution function were compared between the two statistical models and the experiment. The evaluation is performed in the form of validation metrics such as the Fractional Bias (FB), the Normalized Mean Square Error (NMSE) and the factor-of-2 percentage. The Beta probability distribution agreed with the experimental results better than the Gamma probability distribution, except for the 25th percentile. Also, according to significance tests using the BOOT software, the Beta model presented FB and NMSE values that are statistically different from those of the Gamma model, except for the 75th percentiles and the FB of the 99th percentiles. The effect of the stability conditions and source heights on the performance of the statistical models is also examined. For both cases the performance of the Beta distribution was slightly better than that of the Gamma.
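The validation metrics used for the comparison have standard definitions: FB = 2(m_o - m_p)/(m_o + m_p), NMSE = mean((o - p)²)/(m_o·m_p), and FAC2 is the fraction of pairs within a factor of two. A sketch with illustrative data:

```python
def fractional_bias(obs, pred):
    """FB: positive for underprediction, negative for overprediction."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def nmse(obs, pred):
    """Normalized Mean Square Error; zero for a perfect prediction."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / (len(obs) * mo * mp)

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of the observations."""
    ok = sum(1 for o, p in zip(obs, pred) if 0.5 <= p / o <= 2.0)
    return ok / len(obs)

obs  = [1.0, 2.0, 4.0, 8.0]      # illustrative concentrations
pred = [1.2, 1.8, 5.0, 20.0]
```

FB and NMSE are sensitive to the bulk of the distribution, while FAC2 is robust to occasional large misses, which is why papers like this one report all three.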
Xian-min Geng; Shu-chen Wan
2011-01-01
The compound negative binomial model, introduced in this paper, is a discrete-time version. We discuss the Markov properties of the surplus process, and study the ruin probability and the joint distributions of actuarial random vectors in this model. By the strong Markov property and the mass function of a defective renewal sequence, we obtain explicit expressions for the ruin probability, the finite-horizon ruin probability, the joint distributions of T, U(T - 1), |U(T)| and inf_{0≤n<T_1} U(n) (i.e., the time of ruin, the surplus immediately before ruin, the deficit at ruin and the maximal deficit from ruin to recovery), and the distributions of some actuarial random vectors.
Net-charge probability distributions in heavy ion collisions at chemical freeze-out
Braun-Munzinger, P; Karsch, F; Redlich, K; Skokov, V
2011-01-01
We explore net charge probability distributions in heavy ion collisions within the hadron resonance gas model. The distributions for strangeness, electric charge and baryon number are derived. We show that, within this model, net charge probability distributions and the resulting fluctuations can be computed directly from the measured yields of charged and multi-charged hadrons. The influence of multi-charged particles and quantum statistics on the shape of the distribution is examined. We discuss the properties of the net proton distribution along the chemical freeze-out line. The model results presented here can be compared with data at RHIC energies and at the LHC to possibly search for the relation between chemical freeze-out and QCD cross-over lines in heavy ion collisions.
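In the simplest limit of the hadron resonance gas — independent Poisson yields of singly charged positive and negative hadrons, ignoring multi-charged states and quantum statistics — the net-charge distribution is the Skellam distribution. A sketch that builds it directly from two mean yields (the yield values are illustrative, not measured data):

```python
import math

def log_poisson_pmf(mu, n):
    return -mu + n * math.log(mu) - math.lgamma(n + 1)

def net_charge_pmf(mu_plus, mu_minus, k, nmax=400):
    """P(N+ - N- = k) for independent Poisson yields: the Skellam distribution,
    computed by direct convolution instead of modified Bessel functions."""
    total = 0.0
    for n in range(nmax):
        if n + k >= 0:
            total += math.exp(log_poisson_pmf(mu_plus, n + k) +
                              log_poisson_pmf(mu_minus, n))
    return total

mu_plus, mu_minus = 12.0, 10.0          # illustrative mean yields
ks = range(-40, 60)
pmf = [net_charge_pmf(mu_plus, mu_minus, k) for k in ks]
mean = sum(k * p for k, p in zip(ks, pmf))
var = sum((k - mean) ** 2 * p for k, p in zip(ks, pmf))
```

The Skellam mean is μ+ - μ- and the variance is μ+ + μ-; contributions from multi-charged hadrons broaden the distribution beyond Skellam, which is exactly the effect the paper examines.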
Diogo de Carvalho Bezerra
2015-12-01
Contributions from the sensitivity analysis of the parameters of the linear programming model for the elicitation of experts' beliefs are presented. The process allows for the calibration of the family of probability distributions obtained in the elicitation process. An experiment to obtain the probability distribution of a future event (the Brazil vs. Spain final game of the 2013 FIFA Confederations Cup) was conducted. The proposed sensitivity analysis step may help to reduce the vagueness of the information given by the expert.
THE LEBESGUE-STIELTJES INTEGRAL AS APPLIED IN PROBABILITY DISTRIBUTION THEORY
bounded variation and Borel measurable functions are set forth in the introduction. Chapter 2 is concerned with establishing a one-to-one correspondence between Lebesgue-Stieltjes measures and certain equivalence classes of functions which are monotone non-decreasing and continuous on the right. In Chapter 3 the Lebesgue-Stieltjes integral is defined and some of its properties are demonstrated. In Chapter 4 the probability distribution function is defined, and the notions in Chapters 2 and 3 are used to show that the Lebesgue-Stieltjes integral of any probability distribution
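The construction described above has a direct numerical analogue: a Riemann-Stieltjes sum ∑ g(x_i)[F(x_i) - F(x_{i-1})] integrates g against any right-continuous distribution function, including one with an atom. A sketch; the mixed distribution below is an illustrative choice:

```python
import math

def stieltjes_integral(g, F, a, b, n=200000):
    """Right-endpoint Riemann-Stieltjes sum of g dF over (a, b]."""
    total, prev = 0.0, F(a)
    for i in range(1, n + 1):
        x = a + (b - a) * i / n
        cur = F(x)
        total += g(x) * (cur - prev)
        prev = cur
    return total

def F_mixed(x):
    """Distribution with an atom of mass 0.5 at 0 and an Exp(1) tail above it."""
    if x < 0.0:
        return 0.0
    return 0.5 + 0.5 * (1.0 - math.exp(-x))

# E[X] = 0.5 * 0 (atom) + 0.5 * 1 (exponential part) = 0.5
ex = stieltjes_integral(lambda x: x, F_mixed, -1.0, 40.0)
```

The same sum handles purely discrete, purely continuous, and mixed distributions uniformly — the point of working with the Lebesgue-Stieltjes integral in probability theory.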
Hanayama, Nobutane; Sibuya, Masaaki
2016-08-01
In modern biology, theories of aging fall mainly into two groups: damage theories and programmed theories. If programmed theories are true, the probability that human beings live beyond a specific age will be zero. In contrast, if damage theories are true, such an age does not exist, and any longevity record will eventually be broken. In this article, to examine the real state of affairs, a special type of binomial model based on the generalized Pareto distribution has been applied to data on Japanese centenarians. From the results, it is concluded that the upper limit of the lifetime probability distribution in the Japanese population is estimated to be 123 years.
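The "upper limit" in this approach is the finite endpoint of a generalized Pareto distribution with negative shape parameter: for ξ < 0 the survival function reaches zero at u + σ/(-ξ). A sketch; the threshold and parameter values below are chosen only to reproduce a 123-year endpoint and are not the paper's fitted estimates:

```python
import math

def gpd_sf(x, u, sigma, xi):
    """Generalized Pareto survival function P(X > x | X > u) for x >= u."""
    z = 1.0 + xi * (x - u) / sigma
    return z ** (-1.0 / xi) if z > 0.0 else 0.0

def gpd_endpoint(u, sigma, xi):
    """Finite upper endpoint of the support when xi < 0, else +infinity."""
    return u + sigma / (-xi) if xi < 0.0 else math.inf

u, sigma, xi = 100.0, 5.75, -0.25       # illustrative values, endpoint = 123
```

A fitted ξ ≥ 0 would instead support the damage theories (no finite limit), so the sign of the estimated shape parameter is the crux of the analysis.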
Gulev, Sergey; Tilinina, Natalia; Belyaev, Konstantin
2015-04-01
Surface turbulent heat fluxes from modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA) as well as from satellite products (SEAFLUX, IFREMER, HOAPS) were intercompared using the framework of probability distributions for sensible and latent heat fluxes. For the approximation of probability distributions and the estimation of extreme flux values, the Modified Fisher-Tippett (MFT) distribution has been used. Besides mean flux values, consideration is given to the comparative analysis of (i) the parameters of the MFT probability density functions (scale and location), (ii) extreme flux values corresponding to high-order percentiles of fluxes (e.g., the 99th and higher) and (iii) the fractional contribution of extreme surface flux events to the total surface turbulent fluxes integrated over months and seasons. The latter was estimated using both the fractional distribution derived from the MFT and empirical estimates based upon occurrence histograms. The strongest differences in the parameters of the probability distributions of surface fluxes and extreme surface flux values between different reanalyses are found in the western boundary current extension regions and at high latitudes, while the largest differences in the fractional contributions of surface fluxes may occur in mid-ocean regions, being closely associated with atmospheric synoptic dynamics. Generally, satellite surface flux products demonstrate relatively stronger extreme fluxes compared to reanalyses, even in the Northern Hemisphere midlatitudes, where the data assimilation input in reanalyses is quite dense compared to the Southern Ocean regions.
Lowe, V J; Bullard, A G; Coleman, R E
1995-12-01
The criteria used in the Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED) study for the interpretation of ventilation/perfusion scans are widely used and the probability of pulmonary embolism is determined from these criteria. The prevalence of pulmonary embolism in the PIOPED study was 33%. To investigate the similarity of patient populations who have ventilation/perfusion scans at one of the medical centers that participated in the PIOPED study and a small community hospital, the authors evaluated the probability category distributions of lung scans at the two institutions. They retrospectively interpreted 54 and 49 ventilation/perfusion lung scans selected from January, 1991, to June, 1992, at Duke University Medical Center and at Central Carolina Hospital, respectively. Studies were interpreted according to the PIOPED criteria. The percentage of studies assigned to each category at Duke University Medical Center and Central Carolina Hospital were 17% and 27% normal or very low probability, 31% and 59% low probability, 39% and 10% intermediate probability, and 13% and 4% high probability, respectively. The different distribution of probability categories between university and community hospitals suggests that the prevalence of disease may also be different. The post-test probability of pulmonary embolism is related to the prevalence of disease and the sensitivity and specificity of the ventilation/perfusion scan. Because these variables may differ in community hospital settings, the post-test probability of pulmonary embolism as determined by data from the PIOPED study should only be used in institutions with similar populations. Clinical management based upon the results of the PIOPED study may not be applicable to patients who have ventilation/perfusion scans performed in a community hospital.
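The dependence of post-test probability on prevalence follows directly from Bayes' theorem. A sketch for a positive scan result; the sensitivity and specificity values are illustrative assumptions, not the PIOPED operating characteristics:

```python
def post_test_probability(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    p_pos = prevalence * sensitivity + (1.0 - prevalence) * (1.0 - specificity)
    return prevalence * sensitivity / p_pos

# Same test performance, different patient populations:
p_university = post_test_probability(0.33, 0.41, 0.97)   # PIOPED-like prevalence
p_community  = post_test_probability(0.10, 0.41, 0.97)   # lower-prevalence setting
```

With identical test performance, a lower disease prevalence yields a lower post-test probability, which is why the authors caution against carrying PIOPED-derived probabilities into a community hospital population.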
Importance measures for imprecise probability distributions and their sparse grid solutions
WANG Pan; LU ZhenZhou; CHENG Lei
2013-01-01
For structural systems with imprecise probability distributions, the variance-based importance measures (IMs) of the inputs are investigated, and three IMs are defined for the cases of random distribution parameters, interval distribution parameters, and a mixture of those two types of distribution parameters. The defined IMs can reflect the influence of the inputs on the output of the structural system with imprecise distribution parameters. Due to the large computational cost of the variance-based IMs, the sparse grid method is employed in this work to compute the variance-based IMs at each reference point of the distribution parameters. For the three imprecise distribution parameter cases, the sparse grid method and the combination of the sparse grid method with a genetic algorithm are used to compute the defined IMs. Numerical and engineering examples are employed to demonstrate the rationality of the defined IMs and the efficiency of the applied methods.
Marco Bee
2012-01-01
This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the first distribution, but not to the latter. Thus, in the lognormal-Generalized Pareto case, we work ou...
Calculation of Radar Probability of Detection in K-Distributed Sea Clutter and Noise
2011-04-01
...form solution for the probability of detection in K-distributed clutter, so numerical methods are required. The K distribution is a compound model... the integration, with the nodes and weights calculated using matrix methods, so that a general-purpose numerical integration routine is not required.
Analysis of low probability of intercept (LPI) radar signals using the Wigner Distribution
Gau, Jen-Yu
2002-01-01
Approved for public release; distribution is unlimited. The parameters of Low Probability of Intercept (LPI) radar signals are hard to identify using traditional periodogram signal processing techniques. Using the Wigner Distribution (WD), this thesis examines eight types of LPI radar signals. Signal-to-noise ratios of 0 dB and -6 dB are also investigated. The eight types of LPI radar signals examined include Frequency Modulation Continuous Wave (FMCW), Frank code, P1 code, P2 code, P3 code,...
Daniel Ting
2010-04-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether they include nearest-neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.
Steeneveld, W.; Gaag, van der L.C.; Barkema, H.W.; Hogeveen, H.
2009-01-01
Clinical mastitis (CM) can be caused by a wide variety of pathogens and farmers must start treatment before the actual causal pathogen is known. By providing a probability distribution for the causal pathogen, naive Bayesian networks (NBN) can serve as a management tool for farmers to decide which t
A.C.D. Donkers (Bas); T. Lourenco (Tania); B.G.C. Dellaert (Benedict); D.G. Goldstein (Daniel G.)
2013-01-01
In this paper we propose the use of preferred outcome distributions as a new method to elicit individuals' value and probability weighting functions in decisions under risk. Extant approaches for the elicitation of these two key ingredients of individuals' risk attitude typically rely
吕渭济; 崔巍
2001-01-01
In this paper, two kinds of models are presented and optimized for project investment risk income on the basis of the probability χ distribution. One kind of model is proved to have only a maximal value, and the other kind is proved to have no extreme values.
A. B. Levina
2016-03-01
Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects, and error detection codes allow detecting such errors. There are two classes of error detecting codes - classical codes and security-oriented codes. The classical codes have a high percentage of detected errors; however, they have a high probability of missing an error under algebraic manipulation. In turn, security-oriented codes are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of the error-correcting code in the case of error injection in the encoding device. In turn, the complexity of the encoding function plays an important role in security-oriented codes. Encoding functions with less computational complexity and a low probability of masking are the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown in the paper that increasing the function complexity changes the error masking probability distribution. In particular, increasing the computational complexity decreases the difference between the maximum and average value of the error masking probability. Our results have shown that functions with greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, in the case of a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach how to measure the error masking
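For a linear code, an additive error pattern is masked exactly when the error vector is itself a nonzero codeword, so the masking probability under uniformly random nonzero errors is (2^k - 1)/(2^n - 1). A brute-force check on the [7,4] Hamming code — a standard textbook example, not one of the paper's security-oriented codes:

```python
from itertools import product

# Systematic generator matrix of the [7,4] Hamming code.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Codeword bit j = sum_i msg[i] * G[i][j] mod 2."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = {encode(msg) for msg in product((0, 1), repeat=4)}

# An error pattern e is undetected (masked) iff c + e is a codeword for every
# codeword c, which for a linear code means e itself is a nonzero codeword.
masked = [e for e in product((0, 1), repeat=7)
          if any(e) and tuple(e) in codewords]
masking_probability = len(masked) / (2 ** 7 - 1)
```

Security-oriented (e.g., robust) codes are designed to keep the worst-case masking probability small for every error pattern, including adversarially chosen ones; this is where the complexity of the (generally nonlinear) encoding function enters.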
Aydın Kahriman
2011-11-01
Determining the diameter distribution of a stand and its relations with stand age, site index, density and mixture percentage is very important both biologically and economically. The two-parameter Weibull, three-parameter Weibull, two-parameter Gamma, three-parameter Gamma, Beta, two-parameter Lognormal, three-parameter Lognormal, Normal and Johnson SB probability density functions were used to determine the diameter distributions. This study aimed to compare the performance of these functions in describing different diameter distributions and to identify the most successful function. The data were obtained from 162 temporary sample plots measured in Scots pine and Oriental beech mixed stands in the Black Sea Region. The results show that the four-parameter Johnson SB function is the most successful at describing the diameter distributions of both Scots pine and Oriental beech, based on error index values calculated from the difference between observed and predicted diameter distributions.
Banik, S K; Ray, D S; Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar
2002-01-01
Traditionally, the quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasi-probability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to the non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c-numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion and Smoluchowski equations are the exact quantum analogues of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural ext...
Wang, S Q; Zhang, H Y; Li, Z L
2016-10-01
Understanding the spatio-temporal distribution of a pest in orchards can provide important information for designing monitoring schemes and establishing better means of pest control. In this study, the spatial and temporal distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) was assessed, and activity trends were evaluated, by using probability kriging. Adults of B. minax were captured over two successive occurrences in a small-scale citrus orchard by using food bait traps placed both inside and outside the orchard. The weekly spatial distribution of B. minax within the orchard and adjacent woods was examined using semivariogram parameters. Edge concentration was observed during most weeks of the adult occurrence, and the adult population aggregated with high probability within a band less than 100 m wide on both sides of the boundary between the orchard and the woods. The sequential probability kriged maps showed that adults were present in the marginal zone with higher probability, especially in the early and peak stages. The feeding, ovipositing, and mating behaviors of B. minax are possible explanations for these spatio-temporal patterns. Therefore, the spatial arrangement of traps or spraying spots, and their distance to the forest edge, should be considered to enhance control of B. minax in small-scale orchards.
About the probability distribution of a quantity with given mean and variance
Olivares, Stefano
2012-01-01
Supplement 1 to GUM (GUM-S1) recommends the use of maximum entropy principle (MaxEnt) in determining the probability distribution of a quantity having specified properties, e.g., specified central moments. When we only know the mean value and the variance of a variable, GUM-S1 prescribes a Gaussian probability distribution for that variable. When further information is available, in the form of a finite interval in which the variable is known to lie, we indicate how the distribution for the variable in this case can be obtained. A Gaussian distribution should only be used in this case when the standard deviation is small compared to the range of variation (the length of the interval). In general, when the interval is finite, the parameters of the distribution should be evaluated numerically, as suggested by I. Lira, Metrologia, 46 L27 (2009). Here we note that the knowledge of the range of variation is equivalent to a bias of the distribution toward a flat distribution in that range, and the principle of mini...
冉洪流
2004-01-01
In recent years, researchers have studied paleoearthquakes along the Haiyuan fault and revealed a number of paleoearthquake events. The available information allows a more reliable analysis of earthquake recurrence intervals and earthquake rupture patterns along the Haiyuan fault. Based on this paleoseismological information, the recurrence probability and magnitude distribution for M ≥ 6.7 earthquakes in the next 100 years along the Haiyuan fault can be obtained through weighted computation, using Poisson and Brownian passage time models and considering different rupture patterns. The result shows that the recurrence probability of MS ≥ 6.7 earthquakes along the Haiyuan fault in the next 100 years is about 0.035.
Gulev, S.
2015-12-01
Surface turbulent heat fluxes from modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA) as well as from satellite products (SEAFLUX, IFREMER, HOAPS) were intercompared using the framework of probability distributions for sensible and latent heat fluxes. The Modified Fisher-Tippett (MFT) distribution was used to approximate the probability distributions and estimate extreme flux values. Besides mean flux values, consideration is given to the comparative analysis of (i) parameters of the MFT probability density functions (scale and location), (ii) extreme flux values corresponding to high-order percentiles of fluxes (e.g. 99th and higher) and (iii) the fractional contribution of extreme surface flux events to the total surface turbulent fluxes integrated over months and seasons. The latter was estimated using both the fractional distribution derived from the MFT and empirical estimates based upon occurrence histograms. The strongest differences in the parameters of the probability distributions of surface fluxes and in extreme surface flux values between reanalyses are found in the western boundary current extension regions and at high latitudes, while the largest differences in the fractional contributions of surface fluxes may occur in mid-ocean regions, being closely associated with atmospheric synoptic dynamics. Generally, satellite surface flux products demonstrate relatively stronger extreme fluxes compared to reanalyses, even in the Northern Hemisphere midlatitudes where the data assimilation input to reanalyses is quite dense compared to the Southern Ocean regions. Our assessment also discriminated between the different reanalyses and satellite products with respect to their ability to quantify the role of extreme surface turbulent fluxes in forming ocean heat release in different regions.
Probability distribution of surface wind speed induced by convective adjustment on Venus
Yamamoto, Masaru
2017-03-01
The influence of convective adjustment on the spatial structure of Venusian surface wind and the probability distribution of its wind speed is investigated using an idealized Weather Research and Forecasting model. When the initially uniform wind is much weaker than the convective wind, patches of both prograde and retrograde winds with scales of a few kilometers are formed during active convective adjustment. After the active convective adjustment, because the small-scale convective cells and their related vertical momentum fluxes dissipate quickly, the large-scale (>4 km) prograde and retrograde wind patches remain on the surface and in the longitude-height cross-section. This suggests the coexistence of local prograde and retrograde flows, which may correspond to those observed by Pioneer Venus below 10 km altitude. The probability distributions of surface wind speed V during the convective adjustment have a similar form in different simulations, with a sharp peak around ∼0.1 m s-1 and a bulge developing on the flank of the probability distribution. This flank bulge is associated with the most active convection, which has a probability distribution with a peak at a wind speed 1.5 times greater than the Weibull fitting parameter c during the convective adjustment. The Weibull distribution P(>V) = exp[−(V/c)^k], with the best-estimate coefficients of Lorenz (2016), is reproduced during convective adjustments induced by a potential energy of ∼7 × 10^7 J m-2, which is calculated from the difference in total potential energy between the initially unstable and neutral states. The maximum vertical convective heat flux magnitude is proportional to the potential energy of the convective adjustment in the experiments with the initial unstable-layer thickness altered. The present work suggests that convective adjustment is a promising process for producing the observed wind structure, occasionally generating surface winds of ∼1 m s-1 and retrograde wind patches.
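The Weibull exceedance law quoted above, P(>V) = exp[−(V/c)^k], is easy to verify by inverse-CDF sampling; the c and k values below are illustrative, not the best-estimate coefficients of Lorenz (2016).

```python
import numpy as np

rng = np.random.default_rng(0)

c, k = 0.6, 2.0  # illustrative scale (m/s) and shape parameters

# Inverse-CDF sampling: if U ~ Uniform(0, 1), then V = c * (-ln U)^(1/k)
# satisfies P(>V) = exp[-(V/c)^k].
u = rng.uniform(size=200_000)
v = c * (-np.log(u)) ** (1.0 / k)

# The empirical exceedance probability at V = c should approach exp(-1) ≈ 0.368.
empirical = np.mean(v > c)
print(round(empirical, 3))
```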
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(−AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
Li Wei; Hai-liang Yang
2004-01-01
In this paper we first consider a risk process in which claim inter-arrival times and the time until the first claim have an Erlang(2) distribution. An explicit solution is derived for the probability of ultimate ruin, given an initial reserve of u, when the claim size follows a Pareto distribution. Following Ramsay [8], Laplace transforms and exponential integrals are used to derive the solution, which involves a single integral of real-valued functions along the positive real line, and the integrand is not of an oscillating kind. Then we show that the ultimate ruin probability can be expressed as the sum of expected values of functions of two different Gamma random variables. Finally, the results are extended to the Erlang(n) case. Numerical examples are given to illustrate the main results.
Segalman, D.; Reese, G.
1998-09-01
The von Mises stress is often used as the metric for evaluating design margins, particularly for structures made of ductile materials. For deterministic loads, both static and dynamic, the calculation of von Mises stress is straightforward, as is the resulting calculation of reliability. For loads modeled as random processes, the task is different; the response to such loads is itself a random process and its properties must be determined in terms of those of both the loads and the system. This has been done in the past by Monte Carlo sampling of numerical realizations that reproduce the second order statistics of the problem. Here, the authors present a method that provides analytic expressions for the probability distributions of von Mises stress which can be evaluated efficiently and with good precision numerically. Further, this new approach has the important advantage of providing the asymptotic properties of the probability distribution.
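For contrast with the paper's analytic approach, the Monte Carlo route it improves upon can be sketched as follows; the stress components, their standard deviations, and the allowable stress are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Plane-stress components modeled as independent zero-mean Gaussians
# (hypothetical standard deviations, in MPa); a real random-vibration
# problem would have correlated components set by the loads and the system.
n = 100_000
sx = rng.normal(0.0, 50.0, n)   # sigma_x
sy = rng.normal(0.0, 30.0, n)   # sigma_y
txy = rng.normal(0.0, 20.0, n)  # tau_xy

# Von Mises stress for plane stress.
svm = np.sqrt(sx**2 - sx * sy + sy**2 + 3.0 * txy**2)

# Estimated probability of exceeding an assumed allowable stress.
p_exceed = np.mean(svm > 120.0)
print(round(p_exceed, 4))
```

The analytic result in the paper replaces exactly this sampling step, giving the distribution of `svm` in closed form together with its asymptotic tail.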
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2014-05-01
Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution or CPD method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the larger true death time interval. In light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observe a paradox: in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise, the CPD-computed probabilities will decrease. We therefore advise against using CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95% confidence intervals of the estimates still overlap the true death time interval.
Reconstructing the three-dimensional local dark matter velocity distribution
Kavanagh, Bradley J
2016-01-01
Directionally sensitive dark matter (DM) direct detection experiments present the only way to observe the full three-dimensional velocity distribution of the Milky Way halo local to Earth. In this work we compare methods for extracting information about the local DM velocity distribution from a set of recoil directions and energies in a range of hypothetical directional and non-directional experiments. We compare a model independent empirical parameterisation of the velocity distribution based on an angular discretisation with a model dependent approach which assumes knowledge of the functional form of the distribution. The methods are tested under three distinct halo models which cover a range of possible phase space structures for the local velocity distribution: a smooth Maxwellian halo, a tidal stream and a debris flow. In each case we use simulated directional data to attempt to reconstruct the shape and parameters describing each model as well as the DM particle properties. We find that the empirical pa...
2014-01-01
The traditional mine microseism locating methods are mainly based on the assumption that the wave velocity is uniform throughout the space, which leads to errors because the assumption does not hold in reality. In this paper, the wave velocity is regarded as a random variable, and the probability distribution information of the wave velocity is fused into the traditional locating method. This paper puts forward a microseism source location method for undersea mining on condition o...
2016-04-26
created using probability distribution functions. This new model performs as well as or better than other modern models of the solar wind velocity.
Pauling resonant structures in real space through electron number probability distributions.
Pendas, A Martín; Francisco, E; Blanco, M A
2007-02-15
A general hierarchy of the coarse-grained electron probability distributions induced by exhaustive partitions of the physical space is presented. It is argued that when the space is partitioned into atomic regions the consideration of these distributions may provide a first step toward an orbital invariant treatment of resonant structures. We also show that, in this case, the total molecular energy and its components may be partitioned into structure contributions, providing a fruitful extension of the recently developed interacting quantum atoms approach (J. Chem. Theory Comput. 2005, 1, 1096). The above ideas are explored in the hydrogen molecule, where a complete statistical and energetic decomposition into covalent and ionic terms is presented.
Evolving Molecular Cloud Structure and the Column Density Probability Distribution Function
Ward, Rachel L; Sills, Alison
2014-01-01
The structure of molecular clouds can be characterized with the probability distribution function (PDF) of the mass surface density. In particular, the properties of the distribution can reveal the nature of the turbulence and star formation present inside the molecular cloud. In this paper, we explore how these structural characteristics evolve with time and also how they relate to various cloud properties as measured from a sample of synthetic column density maps of molecular clouds. We find that, as a cloud evolves, the peak of its column density PDF will shift to surface densities below the observational threshold for detection, resulting in an underlying lognormal distribution which has been effectively lost at late times. Our results explain why certain observations of actively star-forming, dynamically older clouds, such as the Orion molecular cloud, do not appear to have any evidence of a lognormal distribution in their column density PDFs. We also study the evolution of the slope and deviation point ...
Probability distribution of the entanglement across a cut at an infinite-randomness fixed point
Devakul, Trithep; Majumdar, Satya N.; Huse, David A.
2017-03-01
We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ∼ L^(−ψ(k)), where k ≡ S/ln[L/L0], the large deviation function ψ(k) is found explicitly, and L0 is a nonuniversal microscopic length. We discuss the implications of such a distribution on numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as the actual entanglement entropy distribution for the random transverse-field Ising model which we calculate for large L via a mapping to Majorana fermions.
Tomadakis, Manolis M.; Robertson, Teri J.
2003-07-01
We present a random walk based investigation of the pore size probability distribution and its moments, the survival probability and mean survival time, and the principal relaxation time, for random and ordered arrays of cylindrical fibers of various orientation distributions. The dimensionless mean survival time, principal relaxation time, mean pore size, and mean square pore size are found to increase with porosity, remain practically independent of the directionality of random fiber beds, and attain lower values for ordered arrays. Wide pore size distributions are obtained for random fiber structures and relatively narrow for ordered square arrays, all in very good agreement with theoretically predicted limiting values. Analytical results derived for the pore size probability and its lower moments for square arrays of fibers practically coincide with the corresponding simulation results. Earlier variational bounds on the mean survival time and principal relaxation time are obeyed by our numerical results in all cases, and are found to be quite sharp up to very high porosities. Dimensionless groups representing the deviation of such bounds from our simulation results vary in practically the same range as the corresponding values reported earlier for beds of spherical particles. A universal scaling expression of the literature relating the mean survival time to the mean pore size [S. Torquato and C. L. Y. Yeong, J. Chem. Phys. 106, 8814 (1997)] agrees very well with our results for all types of fiber structures, thus validated for the first time for anisotropic porous media.
Optimal design of unit hydrographs using probability distribution and genetic algorithms
Rajib Kumar Bhattacharjya
2004-10-01
A nonlinear optimization model is developed to transmute a unit hydrograph into a probability distribution function (PDF). The objective function is to minimize the sum of the square of the deviation between predicted and actual direct runoff hydrograph of a watershed. The predicted runoff hydrograph is estimated by using a PDF. In a unit hydrograph, the depth of rainfall excess must be unity and the ordinates must be positive. Incorporation of a PDF ensures that the depth of rainfall excess for the unit hydrograph is unity, and the ordinates are also positive. Unit hydrograph ordinates are in terms of intensity of rainfall excess on a discharge per unit catchment area basis, the unit area thus representing the unit rainfall excess. The proposed method does not have any constraint. The nonlinear optimization formulation is solved using binary-coded genetic algorithms. The number of variables to be estimated by optimization is the same as the number of probability distribution parameters; gamma and log-normal probability distributions are used. The existing nonlinear programming model for obtaining optimal unit hydrograph has also been solved using genetic algorithms, where the constrained nonlinear optimization problem is converted to an unconstrained problem using penalty parameter approach. The results obtained are compared with those obtained by the earlier LP model and are fairly similar.
Silva, Antonio
2005-03-01
It is well-known that the mathematical theory of Brownian motion was first developed in the Ph. D. thesis of Louis Bachelier for the French stock market before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]
Galaxy distribution in a cold dark matter universe
White, S.D.M.; Davis, M.; Efstathiou, G.; Frenk, C.S.
1987-12-03
The gravitational growth of structure by hierarchical clustering is shown to lead automatically to a bias in the distribution of galaxies. The strength of this bias is such that if the Universe does indeed conform to the cold dark matter model then its density must approach the closure value. The mechanism predicts a significant dependence of the strength of galaxy clustering on the depth of the potential well of the galaxies considered.
Effect of Rain on Probability Distributions Fitted to Vehicle Time Headways
Hashim Mohammed Alhassan
2012-01-01
Time headway data generated under different rain conditions were fitted to probability distributions to see which ones best described the trends in headway behaviour in wet weather. Data were collected for two months on the J5, a principal road in Johor Bahru, and the headways in the no-rain condition were analysed and compared to the rain-generated headway data. The results showed a decrease in headways between the no-rain and rain conditions, with further decreases as rainfall intensity increased. Between the no-rain and light-rain conditions there was a 15.66% reduction in mean headway; the reduction between no-rain and medium rain was 19.97%, and between no-rain and heavy rain 25.65%. This trend is already acknowledged in the literature. The Burr probability distribution ranked first among five others in describing the trends in headway behaviour during rainfall, passing the K-S, A² and C-S goodness-of-fit tests at the 95% and 99% levels respectively. The scale parameter of the Burr model and the P-value increased as rain intensity increased, suggesting more vehicular clustering during rainfall, with the probability of clustering increasing with rain intensity. The coefficient of variation and skewness also pointed towards increased vehicle clustering. The Burr probability distribution can therefore be applied to model headways in rain and no-rain weather conditions, among others.
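The fit-and-test procedure described above can be sketched as follows, using synthetic headways in place of the J5 data; the sample is drawn from a Burr XII itself, so the fitted model is expected to pass the K-S test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic headways (seconds) drawn from a Burr XII distribution, standing
# in for the observed wet-weather headway data.
headways = stats.burr12.rvs(c=3.0, d=1.5, scale=3.0, size=400, random_state=rng)

# Fit the Burr XII distribution by maximum likelihood and apply the K-S
# goodness-of-fit test, as in the headway study.
params = stats.burr12.fit(headways)
ks_stat, p_value = stats.kstest(headways, stats.burr12.cdf, args=params)
print(round(ks_stat, 3))
```

With real headway samples from each rain class, comparing the fitted scale parameters across classes reproduces the clustering diagnostic used in the study.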
Burkhart, Blakesley; Murray, Claire; Stanimirovic, Snezana
2015-01-01
The shape of the probability distribution function (PDF) of molecular clouds is an important ingredient for modern theories of star formation and turbulence. Recently, several studies have pointed out observational difficulties with constraining the low column density (i.e. Av < 1) PDF using dust tracers. In order to constrain the shape and properties of the low column density probability distribution function, we investigate the PDF of multiphase atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data and compare the PDF shape and properties to the total gas PDF and the N(H2) PDF. We find that the shape of the PDF in the atomic medium of Perseus is well described by a lognormal distribution, and not by a power-law or bimodal distribution. The peak of the atomic gas PDF in and around Perseus lies at the HI-H2 transition column density for this cloud, past which the N(H2) PDF takes on a power-law form. We find that the PDF of the atomic gas is narrow and at column densities larger than...
Criticality of the net-baryon number probability distribution at finite density
Kenji Morita
2015-02-01
We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T < 1, the model exhibits the chiral crossover transition, which belongs to the universality class of the O(4) spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net-baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.
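A minimal sketch of the ratio-to-Skellam comparison; all numbers are illustrative, and the toy tail suppression only mimics qualitatively what the quark-meson model computes.

```python
import numpy as np
from scipy import stats

# Skellam reference with the same mean and variance as a hypothetical
# net-baryon distribution: N = N_B - N_Bbar with independent Poisson yields.
mean, var = 2.0, 10.0
mu_b = 0.5 * (var + mean)     # Poisson mean of baryons
mu_bbar = 0.5 * (var - mean)  # Poisson mean of antibaryons

n = np.arange(-10, 15)
skellam_pmf = stats.skellam.pmf(n, mu_b, mu_bbar)

# Toy "measured" distribution: tails mildly suppressed relative to Skellam,
# standing in for the effect of criticality on the distribution's tails.
toy = skellam_pmf * np.exp(-0.005 * (n - mean) ** 2)
toy /= toy.sum()

ratio = toy / skellam_pmf
# Near the mean the ratio stays close to 1; far in the tails it drops below 1.
print(round(ratio[n == 2][0], 3), round(ratio[0], 3))
```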
Probability distribution functions of turbulence in seepage-affected alluvial channel
Sharma, Anurag; Kumar, Bimlesh
2017-02-01
The present experimental study examines the probability distribution functions (PDFs) of turbulent flow characteristics in near-bed and away-from-bed regions for both no-seepage and seepage flows. Laboratory experiments were conducted on a plane sand bed for no-seepage (NS), 10% seepage (10%S) and 15% seepage (15%S) cases. The experimentally calculated PDFs of turbulent parameters such as Reynolds shear stress, velocity fluctuations, and bursting events are compared with the theoretical expressions obtained from a Gram-Charlier (GC)-based exponential distribution. Experimental observations follow the computed PDF distributions for both no-seepage and seepage cases. The Jensen-Shannon divergence (JSD) method is used to measure the similarity between theoretical and experimental PDFs. The JSD for the PDFs of velocity fluctuations lies between 0.0005 and 0.003, while the JSD for the PDFs of Reynolds shear stress varies between 0.001 and 0.006. Even with the application of seepage, the PDF distributions of bursting events, sweeps and ejections are well characterized by the exponential distribution of the GC series, except for a slight deflection of inward and outward interactions, which may be due to weaker events. The JSD for outward and inward interactions ranges from 0.0013 to 0.032, while the JSD for sweep and ejection events varies between 0.0001 and 0.0025. A theoretical expression for the PDF of turbulence intensity is developed in the present study, which agrees well with the experimental observations, with JSD between 0.007 and 0.015. The work presented is potentially applicable to the probability distribution of mobile-bed sediments in seepage-affected alluvial channels, typically characterized by the various turbulent parameters. The purpose of estimating PDFs from experimental data is that they provide a complete numerical description of turbulent flow at either a single point or a finite number of points.
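The JSD comparison between an empirical and a theoretical PDF can be sketched as follows; a Gaussian stands in both for the measured fluctuations and for the Gram-Charlier series (whose leading term is Gaussian), so the numbers are purely illustrative.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical normalized velocity fluctuations standing in for measurements.
u_prime = rng.standard_normal(5000)

# Empirical PDF via histogram and a theoretical PDF on the same bin centers.
bins = np.linspace(-4.0, 4.0, 41)
centers = 0.5 * (bins[:-1] + bins[1:])
empirical, _ = np.histogram(u_prime, bins=bins, density=True)
theoretical = norm.pdf(centers)

# Convert both to probability masses; scipy's jensenshannon returns the JS
# *distance*, so square it to get the divergence.
p = empirical / empirical.sum()
q = theoretical / theoretical.sum()
jsd = jensenshannon(p, q) ** 2
print(round(jsd, 4))
```

Small JSD values, as in the study's 0.0005-0.006 range for well-matched PDFs, indicate that the theoretical form captures the empirical distribution closely.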
Fonseca Rasmus
2009-10-01
Background: Predicting the three-dimensional structure of a protein from its amino acid sequence is currently one of the most challenging problems in bioinformatics. The internal structure of helices and sheets is highly recurrent and helps reduce the search space significantly. However, random coil segments make up nearly 40% of proteins and do not have any apparent recurrent patterns, which complicates the overall prediction accuracy of protein structure prediction methods. Fortunately, previous work has indicated that coil segments are in fact not completely random in structure, and flanking residues do seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been made previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results: In this paper we develop an artificial neural network that uses an input window of amino acids to predict a dihedral angle probability distribution for the middle residue in the input window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30° × 30° area of the dihedral angle space) for all amino acids in the data set, compared to baseline statistics. An accuracy comparable to that of secondary structure prediction (≈ 80%) is achieved by observing the 20 bins with the highest output values. Conclusion: Many different protein structure prediction methods exist, and each uses different tools and auxiliary predictions to help determine the native structure. In this work the sequence is used to predict local context-dependent dihedral angle propensities in coil regions. This predicted distribution can potentially improve tertiary structure prediction.
Evolution Equation for a Joint Tomographic Probability Distribution of Spin-1 Particles
Korennoy, Ya. A.; Man'ko, V. I.
2016-11-01
The nine-component positive vector optical tomographic probability portrait of the quantum state of spin-1 particles, containing full spatial and spin information about the state without redundancy, is constructed. The suggested approach is also extended to the symplectic tomography representation and to representations with quasidistributions such as the Wigner function, Husimi Q-function, and Glauber-Sudarshan P-function. The evolution equations for the constructed vector optical and symplectic tomograms and vector quasidistributions for an arbitrary Hamiltonian are found. The evolution equations are also obtained in the special case of a quantum system of a charged spin-1 particle in an arbitrary electromagnetic field; these are analogs of the non-relativistic Proca equation in the appropriate representations. The generalization of the proposed approach to the case of arbitrary spin is discussed. The possibility of formulating quantum mechanics of systems with spin in terms of joint probability distributions, without the use of wave functions or density matrices, is explicitly demonstrated.
Cheng, Weiwei
2011-01-01
We consider an extension of the setting of label ranking, in which the learner is allowed to make predictions in the form of partial instead of total orders. Predictions of that kind are interpreted as a partial abstention: If the learner is not sufficiently certain regarding the relative order of two alternatives, it may abstain from this decision and instead declare these alternatives as being incomparable. We propose a new method for learning to predict partial orders that improves on an existing approach, both theoretically and empirically. Our method is based on the idea of thresholding the probabilities of pairwise preferences between labels as induced by a predicted (parameterized) probability distribution on the set of all rankings.
Hobbs, Jennifer A; Towal, R Blythe; Hartmann, Mitra J Z
2015-08-01
Analysis of natural scene statistics has been a powerful approach for understanding neural coding in the auditory and visual systems. In the field of somatosensation, it has been more challenging to quantify the natural tactile scene, in part because somatosensory signals are so tightly linked to the animal's movements. The present work takes a step towards quantifying the natural tactile scene for the rat vibrissal system by simulating rat whisking motions to systematically investigate the probabilities of whisker-object contact in naturalistic environments. The simulations permit an exhaustive search through the complete space of possible contact patterns, thereby allowing for the characterization of the patterns that would most likely occur during long sequences of natural exploratory behavior. We specifically quantified the probabilities of 'concomitant contact', that is, given that a particular whisker makes contact with a surface during a whisk, what is the probability that each of the other whiskers will also make contact with the surface during that whisk? Probabilities of concomitant contact were quantified in simulations that assumed increasingly naturalistic conditions: first, the space of all possible head poses; second, the space of behaviorally preferred head poses as measured experimentally; and third, common head poses in environments such as cages and burrows. As environments became more naturalistic, the probability distributions shifted from exhibiting a 'row-wise' structure to a more diagonal structure. Results also reveal that the rat appears to use motor strategies (e.g. head pitches) that generate contact patterns that are particularly well suited to extract information in the presence of uncertainty. © 2015. Published by The Company of Biologists Ltd.
The probability distribution functions of emission line flux measurements and their ratios
Wesson, R; Scicluna, P
2016-01-01
Many physical parameters in astrophysics are derived using the ratios of two observed quantities. If the relative uncertainties on the measurements are small enough, uncertainties can be propagated analytically using simplifying assumptions, but for large normally distributed uncertainties, the probability distribution of the ratio becomes skewed, with a modal value offset from that expected in Gaussian uncertainty propagation. Furthermore, the most likely value of a ratio A/B is not equal to the reciprocal of the most likely value of B/A. The effect is most pronounced when the uncertainty on the denominator is larger than that on the numerator. We show that this effect is seen in an analysis of 12,126 spectra from the Sloan Digital Sky Survey. The intrinsically fixed ratio of the [O III] lines at 4959 and 5007 Å is conventionally expressed as the ratio of the stronger line to the weaker line. Thus, the uncertainty on the denominator is larger, and non-Gaussian probability distributions result. By taking thi...
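The skewness of a ratio of normal variables is easy to see by Monte Carlo. The sketch below uses assumed parameters (means of 1, with the larger relative uncertainty on the denominator, as in the abstract's worst case) and only the standard library.

```python
# Monte Carlo illustration that the ratio of two normal variables is
# skewed: the mean of A/B exceeds the ratio of the means when the
# denominator carries the larger relative uncertainty (Jensen's
# inequality applied to 1/B). Parameter values are illustrative.
import random

random.seed(42)
N = 200_000
ratios = []
for _ in range(N):
    a = random.gauss(1.0, 0.05)   # numerator: 5% uncertainty
    b = random.gauss(1.0, 0.20)   # denominator: 20% uncertainty
    ratios.append(a / b)

mean_ratio = sum(ratios) / N      # noticeably above E[A]/E[B] = 1
```

To second order, E[A/B] is roughly (E[A]/E[B])(1 + sigma_B^2/E[B]^2), about 1.04 here, so naive Gaussian propagation would bias the inferred ratio.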
Distribution of magnetically confined circumstellar matter in oblique rotators
Preuss, O; Holzwarth, V R; Solanki, S K
2004-01-01
We consider the mechanical equilibrium and stability of matter trapped in the magnetosphere of a rapidly rotating star. Assuming a dipolar magnetic field and arbitrary inclination of the magnetic axis with respect to the axis of rotation we find stable equilibrium positions a) in a (warped) disk roughly aligned with the magnetic equatorial plane and b) at two locations above and below the disk, whose distance from the star increases with decreasing inclination angle between dipole and rotation axis. The distribution of matter is not strongly affected by allowing for a spatial offset of the magnetic dipole. These results provide a possible explanation for some observations of corotating localized mass concentrations in hot magnetic stars.
Smail, Linda
2016-06-01
The basic task of any probabilistic inference system in Bayesian networks is computing the posterior probability distribution for a subset or subsets of random variables, given values or evidence for some other variables from the same Bayesian network. Many methods and algorithms have been developed for exact and approximate inference in Bayesian networks. This work compares two exact inference methods in Bayesian networks, Lauritzen-Spiegelhalter and the successive restrictions algorithm, from the perspective of computational efficiency. The two methods were applied for comparison to a Chest Clinic Bayesian Network. Results indicate that the successive restrictions algorithm is more computationally efficient than the Lauritzen-Spiegelhalter algorithm.
Finite de Finetti theorem for conditional probability distributions describing physical theories
Christandl, Matthias; Toner, Ben
2009-04-01
We work in a general framework where the state of a physical system is defined by its behavior under measurement and the global state is constrained by no-signaling conditions. We show that the marginals of symmetric states in such theories can be approximated by convex combinations of independent and identical conditional probability distributions, generalizing the classical finite de Finetti theorem of Diaconis and Freedman. Our results apply to correlations obtained from quantum states even when there is no bound on the local dimension, so that known quantum de Finetti theorems cannot be used.
Discrete coherent states and probability distributions in finite-dimensional spaces
Galetti, D.; Marchiolli, M.A.
1995-06-01
Operator bases are discussed in connection with the construction of phase space representatives of operators in finite-dimensional spaces, and their properties are presented. It is also shown how these operator bases allow for the construction of a finite harmonic oscillator-like coherent state. Creation and annihilation operators for the finite-dimensional Fock space are discussed, and their expressions in terms of the operator bases are explicitly written. The relevant finite-dimensional probability distributions are obtained, and their limiting behavior for an infinite-dimensional space is calculated, which agrees with the well-known results.
On the Meta Distribution of Coverage Probability in Uplink Cellular Networks
Elsawy, Hesham
2017-04-07
This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. The effects of the users' activity factor (p) and the path-loss compensation factor on uplink performance are then analyzed. The results show that decreasing the activity factor and/or increasing the compensation factor reduces the CP variation around the spatially averaged value.
Spectra and probability distributions of thermal flux in turbulent Rayleigh-Bénard convection
Pharasi, Hirdesh K; Kumar, Krishna; Bhattacharjee, Jayanta K
2016-01-01
The spectra of turbulent heat flux $\mathrm{H}(k)$ in Rayleigh-Bénard convection with and without uniform rotation are presented. The spectrum $\mathrm{H}(k)$ scales with wave number $k$ as $\sim k^{-2}$. The scaling exponent is almost independent of the Taylor number $\mathrm{Ta}$ and Prandtl number $\mathrm{Pr}$ for higher values of the reduced Rayleigh number $r$ ($> 10^3$). The exponent, however, depends on $\mathrm{Ta}$ and $\mathrm{Pr}$ for smaller values of $r$ ($< 10^3$). The probability distribution functions of the local heat fluxes are non-Gaussian and have exponential tails.
Wind speed analysis in La Ventosa, Mexico: a bimodal probability distribution case
Jaramillo, O.A.; Borja, M.A. [Energias No Convencionales, Morelos (Mexico). Instituto de Investigaciones Electricas
2004-08-01
The statistical characteristics of the wind speed in La Ventosa, Oaxaca, Mexico, have been analyzed using wind speed data recorded by the Instituto de Investigaciones Electricas (IIE). By grouping the observations by year, season and wind direction, we show that the wind speed distribution, with calms included, is not represented by the typical two-parameter Weibull function. A mathematical formulation using a bimodal Weibull and Weibull probability distribution function (PDF) has been developed to analyse the wind speed frequency distribution in that region. The model developed here can be applied to similar regions where the wind speed distribution presents a bimodal PDF. The two-parameter Weibull wind speed distribution must not be generalised, since it does not accurately represent some wind regimes, such as that of La Ventosa, Mexico. The analysis of wind data shows that computing the capacity factor for wind power plants to be installed in La Ventosa must be carried out by means of a bimodal PDF instead of the typical Weibull PDF; otherwise, the capacity factor will be underestimated.
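A bimodal Weibull-and-Weibull model is simply a two-component mixture of Weibull densities. The sketch below is illustrative only; the weights and shape/scale parameters are assumptions, not the fitted values for La Ventosa.

```python
# Sketch of a bimodal wind-speed model as a mixture of two Weibull PDFs.
# All parameter values are illustrative placeholders.
import math

def weibull_pdf(v, k, c):
    """Two-parameter Weibull PDF with shape k and scale c (v > 0)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def bimodal_pdf(v, w=0.4, k1=2.0, c1=3.0, k2=3.0, c2=11.0):
    """Mixture of two Weibulls; w is the weight of the low-speed mode."""
    return w * weibull_pdf(v, k1, c1) + (1 - w) * weibull_pdf(v, k2, c2)

# crude numerical check that the mixture integrates to ~1 over (0, 40] m/s
dv = 0.01
total = sum(bimodal_pdf(i * dv) * dv for i in range(1, 4000))
```

Because each component integrates to one, any convex combination does too, which is what the Riemann sum confirms numerically.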
Long-Term Probability Distribution of Wind Turbine Planetary Bearing Loads (Poster)
Jiang, Z.; Xing, Y.; Guo, Y.; Dong, W.; Moan, T.; Gao, Z.
2013-04-01
Among the various causes of bearing damage and failure, metal fatigue of the rolling contact surface is the dominant failure mechanism. The fatigue life is associated with the load conditions under which wind turbines operate in the field. Therefore, it is important to understand the long-term distribution of the bearing loads under various environmental conditions. The National Renewable Energy Laboratory's 750-kW Gearbox Reliability Collaborative wind turbine is studied in this work. A decoupled analysis using several computer codes is carried out. The global aero-elastic simulations are performed using HAWC2. The time series of the drivetrain loads and motions from the global dynamic analysis are fed to a drivetrain model in SIMPACK. The time-varying internal pressure distribution along the raceway is obtained analytically. A series of probability distribution functions are then used to fit the long-term statistical distribution at different locations along the raceways. The long-term distribution of the bearing raceway loads is estimated under different environmental conditions. Finally, the bearing fatigue lives are calculated.
Study of the SEMG probability distribution of the paretic tibialis anterior muscle
Cherniz, Analía S.; Bonell, Claudia E.; Tabernig, Carolina B.
2007-11-01
The surface electromyographic (SEMG) signal is a stochastic signal that has been modeled as a Gaussian process with zero mean. It has been experimentally shown that this probability distribution can be fitted with less error by a Laplacian-type distribution. The selection of estimators for the detection of changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In subjects with lesions of the upper motor neuron, the lack of central control affects muscle tone, force and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, in two healthy subjects and in two hemiparetic ones, in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function best fits the experimental data in the studied subjects, although this largely depends on the subject and on the data segment analyzed.
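Choosing between a Gaussian and a Laplacian amplitude model can be done by comparing maximum log-likelihoods. The sketch below uses synthetic Laplace-distributed data as a stand-in for SEMG amplitudes; the sample generator and sample size are assumptions, not the paper's data.

```python
# Sketch: compare zero-mean Gaussian vs Laplacian fits to amplitude data
# by maximum log-likelihood, on synthetic heavy-tailed samples.
import math
import random

random.seed(0)

def laplace_sample(b):
    """Inverse-CDF sampling from a zero-mean Laplace with scale b."""
    u = random.random() - 0.5
    return -b * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

data = [laplace_sample(1.0) for _ in range(5000)]
n = len(data)

# maximum-likelihood scale estimates for the two zero-mean models
sigma = math.sqrt(sum(x * x for x in data) / n)   # Gaussian std dev
b = sum(abs(x) for x in data) / n                 # Laplace scale

ll_gauss = sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - x * x / (2 * sigma**2) for x in data)
ll_laplace = sum(-math.log(2 * b) - abs(x) / b for x in data)
# ll_laplace exceeds ll_gauss: the Laplacian fits these amplitudes better
```

The same comparison on measured SEMG segments is what motivates preferring Laplacian-based amplitude estimators.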
EVALUATION OF THE PROBABILITY DISTRIBUTION OF PITTING CORROSION FATIGUE LIFE IN AIRCRAFT MATERIALS
WANG Qingyuan (王清远); N. Kawagoishi; Q. Chen; R. M. Pidaparti
2003-01-01
Corrosion and fatigue properties of aircraft materials are known to have considerable scatter due to the random nature of materials, loading, and environmental conditions. A probabilistic approach for predicting the pitting corrosion fatigue life has been investigated which captures the effect of the interaction of the cyclic load and corrosive environment and all stages of the corrosion fatigue process (i.e. pit nucleation and growth, pit-crack transition, and short- and long-crack propagation). The probabilistic model investigated considers the uncertainties in the initial pit size, corrosion pitting current, and material properties due to the scatter found in the experimental data. Monte Carlo simulations were performed to define the failure probability distribution. Predicted cumulative distribution functions of fatigue life agreed reasonably well with the existing experimental data.
Reconstructing the three-dimensional local dark matter velocity distribution
Kavanagh, Bradley J.; O'Hare, Ciaran A. J.
2016-12-01
Directionally sensitive dark matter (DM) direct detection experiments present the only way to observe the full three-dimensional velocity distribution of the Milky Way halo local to Earth. In this work we compare methods for extracting information about the local DM velocity distribution from a set of recoil directions and energies in a range of hypothetical directional and nondirectional experiments. We compare a model-independent empirical parametrization of the velocity distribution based on an angular discretization with a model-dependent approach which assumes knowledge of the functional form of the distribution. The methods are tested under three distinct halo models which cover a range of possible phase space structures for the local velocity distribution: a smooth Maxwellian halo, a tidal stream and a debris flow. In each case we use simulated directional data to attempt to reconstruct the shape and parameters describing each model as well as the DM particle properties. We find that the empirical parametrization is able to make accurate unbiased reconstructions of the DM mass and cross section as well as capture features in the underlying velocity distribution in certain directions without any assumptions about its true functional form. We also find that by extracting directionally averaged velocity parameters with this method one can discriminate between halo models with different classes of substructure.
Wave Packet Dynamics in the Infinite Square Well with the Wigner Quasi-probability Distribution
Belloni, Mario; Doncheski, Michael; Robinett, Richard
2004-05-01
Over the past few years a number of authors have been interested in the time evolution and revivals of Gaussian wave packets in one-dimensional infinite wells and in two-dimensional infinite wells of various geometries. In all of these circumstances, the wave function is guaranteed to revive at a time related to the inverse of the system's ground state energy, if not sooner. To better visualize these revivals we have calculated the time-dependent Wigner quasi-probability distribution for position and momentum, P_W(x; p), for Gaussian wave packet solutions of this system. The Wigner quasi-probability distribution clearly demonstrates the short-term semi-classical time dependence, as well as longer-term revival behavior and the structure during the collapsed state. This tool also provides an excellent way of demonstrating the patterns of highly-correlated Schrödinger-cat-like `mini-packets' which appear at fractional multiples of the exact revival time. This research is supported in part by a Research Corporation Cottrell College Science Award (CC5470) and the National Science Foundation under contracts DUE-0126439 and DUE-9950702.
Rani K
2014-08-01
Photocopied documents are very common in everyday life. People are permitted to carry and present photocopies to avoid damage to the original documents, but this provision is misused for temporary benefit by fabricating fake photocopied documents. Fabrication of a fake photocopied document is possible only at the 2nd or higher recursive order of photocopying. Whenever a photocopied document is submitted, it may be required to check its originality. When the document is a 1st-order photocopy, the chance of fabrication may be ignored; when the photocopy order is 2nd or above, fabrication may be suspected. Hence, when a photocopy is presented, the recursive order number of the photocopy must be estimated to ascertain originality. This requirement demands methods to estimate the order number of a photocopy. In this work, a voting-based approach is proposed to detect the recursive order number of a photocopied document using the exponential, extreme value and lognormal probability distributions. Detailed experimentation was performed on a generated data set, and the method exhibits an efficiency close to 89%.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
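The core intuition, that a draw landing where the hypothesized density is tiny is evidence against that density, can be sketched directly. This is an illustrative reduction, not Tygert's actual test statistic; the function names and threshold are assumptions.

```python
# Minimal sketch of a density-based check: flag draws whose hypothesized
# probability density is below a small threshold. A cumulative-distribution
# test could miss such draws if the low-density region is narrow.
import math

def gauss_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return (math.exp(-((x - mu) ** 2) / (2 * sigma**2))
            / (sigma * math.sqrt(2 * math.pi)))

def suspicious(draws, pdf, threshold=1e-4):
    """Return the draws whose hypothesized density falls below threshold."""
    return [x for x in draws if pdf(x) < threshold]

# x = 6 is ~6 sigma out; its N(0, 1) density (~6e-9) is far below threshold
flagged = suspicious([0.1, -1.2, 6.0], gauss_pdf)
```

A full test would calibrate the threshold so that, under the null, flagging occurs with a chosen significance level.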
Jayajit Das '
2015-07-01
A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y → X is non-unique. In general, in the absence of additional information, there is no unique solution for Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach for both discrete and continuous probability distributions and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
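The abstract's worked example, X = Y1 + Y2 with Y1, Y2 independent and uniform on [0, 1], has the triangular distribution on [0, 2], peaking at 1. A quick Monte Carlo check (sample size and bin edges are arbitrary choices):

```python
# Monte Carlo check that X = Y1 + Y2, with Y1, Y2 ~ Uniform[0, 1]
# independent, follows the triangular distribution on [0, 2].
import random

random.seed(1)
N = 100_000
xs = [random.random() + random.random() for _ in range(N)]

mean_x = sum(xs) / N                                  # should be near 1.0
near_peak = sum(1 for x in xs if 0.9 < x < 1.1) / N   # density peaks at 1
near_edge = sum(1 for x in xs if x < 0.2) / N         # density ~0 near 0
```

For the triangular density, the exact probabilities are 0.19 for the window around the peak and 0.02 for the window near the edge, so the two empirical fractions should differ by almost an order of magnitude.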
Low mass enhanced probability of pion in hadronic matter due to its Landau cut contributions
Ghosh, Sabyasachi
2015-01-01
In the real-time thermal field theory, the pion self-energy at finite temperature and density is evaluated, where different mesonic and baryonic loops are considered. The interactions of the pion with the other mesons and baryons in the medium are governed by effective hadronic Lagrangian densities, whose effective coupling constants have been determined from the experimental decay widths of the mesons and baryons. The detailed branch cut structures of these different mesonic and baryonic loops are analyzed. The Landau cut contributions of the different baryon and meson loops become relevant only around the pion pole, and they appear only in the presence of the medium. The in-medium spectral function of the pion has been plotted for different values of temperature, baryon chemical potential and three-momentum of the pion. A noticeable low mass probability in the pion spectral function promises to contribute to the low mass dilepton enhancement via indirect modification of the $\rho$ self-energy for $\pi\pi$...
Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.
Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon
2016-01-01
Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate.
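The Monte Carlo baseline the abstract argues against can be sketched for the simplest case: two independent Bernoulli-binned spike trains as a stand-in for homogeneous Poisson processes. Bin count, spike probability, and trial count below are illustrative assumptions.

```python
# Sketch: Monte Carlo estimate of the coincidence-count distribution for
# two independent Bernoulli-binned spike trains (a stand-in for the
# homogeneous-Poisson null model; not the paper's non-renewal model).
import random

random.seed(7)
n_bins, p_spike, n_trials = 1000, 0.02, 2000

def coincidence_count():
    """Number of bins in which both trains spike, in one simulated trial."""
    return sum(1 for _ in range(n_bins)
               if random.random() < p_spike and random.random() < p_spike)

counts = [coincidence_count() for _ in range(n_trials)]
mean_count = sum(counts) / n_trials   # expectation: n_bins * p_spike**2 = 0.4
```

The paper's point is that replacing this independent-spike null with a history-dependent (non-renewal) generator changes the width of the `counts` distribution, and hence the significance assigned to observed coincidences.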
The Distribution and Annihilation of Dark Matter Around Black Holes
Schnittman, Jeremy D.
2015-01-01
We use a Monte Carlo code to calculate the geodesic orbits of test particles around Kerr black holes, generating a distribution function of both bound and unbound populations of dark matter (DM) particles. From this distribution function, we calculate annihilation rates and observable gamma-ray spectra for a few simple DM models. The features of these spectra are sensitive to the black hole spin, observer inclination, and detailed properties of the DM annihilation cross-section and density profile. Confirming earlier analytic work, we find that for rapidly spinning black holes, the collisional Penrose process can reach efficiencies exceeding 600%, leading to a high-energy tail in the annihilation spectrum. The high particle density and large proper volume of the region immediately surrounding the horizon ensures that the observed flux from these extreme events is non-negligible.
Flanagan, Éanna É; Wasserman, Ira; Vanderveld, R Ali
2011-01-01
We study the fluctuations in luminosity distances due to gravitational lensing by large scale (> 35 Mpc) structures, specifically voids and sheets. We use a simplified "Swiss cheese" model consisting of a $\Lambda$CDM Friedmann-Robertson-Walker background in which a number of randomly distributed non-overlapping spherical regions are replaced by mass-compensating comoving voids, each with a uniform density interior and a thin shell of matter on the surface. We compute the distribution of magnitude shifts using a variant of the method of Holz & Wald (1998), which includes the effect of lensing shear. The standard deviation of this distribution is ~0.027 magnitudes and the mean is ~0.003 magnitudes for voids of radius 35 Mpc, sources at redshift z_s = 1.0, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation varies from 0.005 to 0.06 magnitudes as we vary the void size, source redshift, and fraction of mass on the shells today. If the shell walls are given a finite thic...
Sasaki, Tomohiko; Kondo, Osamu
2016-03-01
In paleodemography, the Bayesian approach has been suggested to provide an effective means by which mortality profiles of past populations can be adequately estimated, and thus avoid problems of "age-mimicry" inherent in conventional approaches. In this study, we propose an application of the Gompertz model using an "informative" prior probability distribution by revising a recent example of the Bayesian approach based on an "uninformative" distribution. Life-table data of 134 human populations including those of contemporary hunter-gatherers were used to determine the Gompertz parameters of each population. In each population, we used both raw life-table data and the Gompertz parameters to calculate some demographic values such as the mean life-span, to confirm representativeness of the model. Then, the correlation between the two Gompertz parameters (the Strehler-Mildvan correlation) was re-established. We incorporated the correlation into the Bayesian approach as an "informative" prior probability distribution, and tested its effectiveness using simulated data. Our analyses showed that the mean life-span (≥ age 15) and the proportion of living persons aging over 45 were well-reproduced by the Gompertz model. The simulation showed that using the correlation as an informative prior provides a narrower estimation range in the Bayesian approach than does the uninformative prior. The Gompertz model can be assumed to accurately estimate the mean life-span and/or the proportion of old people in a population. We suggest that the Strehler-Mildvan correlation can be used as a useful constraint in demographic reconstructions of past human populations. © 2015 Wiley Periodicals, Inc.
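The Gompertz model used here has the standard closed form: hazard h(t) = a·exp(b·t) and survival S(t) = exp(-(a/b)(exp(b·t) - 1)), with mean lifespan obtained by integrating S(t). The sketch below uses illustrative parameter values, not a fitted population.

```python
# Sketch of the Gompertz mortality model: survival function and mean
# lifespan by numerical integration. Parameters a, b are illustrative.
import math

def gompertz_survival(t, a=0.002, b=0.07):
    """S(t) = exp(-(a/b) * (exp(b*t) - 1)): probability of surviving t years."""
    return math.exp(-(a / b) * (math.exp(b * t) - 1.0))

def mean_lifespan(a=0.002, b=0.07, dt=0.01, n_steps=12000):
    """Mean lifespan past the starting age = integral of S(t) dt
    (left Riemann sum out to 120 years)."""
    return sum(gompertz_survival(i * dt, a, b) * dt for i in range(n_steps))

life = mean_lifespan()   # roughly mid-40s of remaining years here
```

Fitting (a, b) per population, then constraining them through the Strehler-Mildvan correlation, is what the abstract's informative prior amounts to.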
Yura, Harold; Hanson, Steen Grüner
2012-01-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the......Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set...... with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative...
Modeling the probability distribution of positional errors incurred by residential address geocoding
Mazumdar Soumya
2007-01-01
Full Text Available Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m than 100%-matched automated geocoding (median error length = 168 m. The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
On the probability distribution of daily streamflow in the United States
Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.
2017-01-01
Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.
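An empirical FDC is just the sorted record plotted against exceedance probabilities. The sketch below uses Weibull plotting positions p_i = i/(n+1), a common convention, with a toy flow record.

```python
# Sketch of an empirical flow duration curve: each observed daily flow
# paired with its exceedance probability via Weibull plotting positions.
def flow_duration_curve(flows):
    """Return (flow, exceedance probability) pairs, highest flow first."""
    ordered = sorted(flows, reverse=True)
    n = len(ordered)
    return [(q, (i + 1) / (n + 1)) for i, q in enumerate(ordered)]

daily_flows = [12.0, 3.5, 7.2, 0.9, 25.1, 4.4, 1.7, 9.8]   # toy record
fdc = flow_duration_curve(daily_flows)
```

Fitting a four-parameter kappa (or generalized Pareto, or lognormal) distribution to such a curve is then a matter of matching its quantile function to these empirical exceedance probabilities.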
Detection of two power-law tails in the probability distribution functions of massive GMCs
Schneider, N; Girichidis, P; Rayner, T; Motte, F; Andre, P; Russeil, D; Abergel, A; Anderson, L; Arzoumanian, D; Benedettini, M; Csengeri, T; Didelon, P; Francesco, J D; Griffin, M; Hill, T; Klessen, R S; Ossenkopf, V; Pezzuto, S; Rivera-Ingraham, A; Spinoglio, L; Tremblin, P; Zavagno, A
2015-01-01
We report the novel detection of complex high-column-density tails in the probability distribution functions (PDFs) for three high-mass star-forming regions (CepOB3, MonR2, NGC6334), obtained from dust emission observed with Herschel. The low column density range can be fitted with a lognormal distribution. A first power-law tail starts above an extinction (Av) of ~6-14. It has a slope of alpha = 1.3-2 for the rho ~ r^-alpha profile of an equivalent density distribution (spherical or cylindrical geometry), and is thus consistent with free-fall gravitational collapse. Above Av ~ 40, 60, and 140, we detect an excess that can be fitted by a flatter power-law tail with alpha > 2. It correlates with the central regions of the cloud (ridges/hubs) of size ~1 pc and densities above 10^4 cm^-3. This excess may be caused by physical processes that slow down collapse and reduce the flow of mass towards higher densities. Possible causes are: 1. rotation, which introduces an angular momentum barrier, 2. increasing optical depth and weaker...
Probability distribution of turbulence in curvilinear cross section mobile bed channel.
Sharma, Anurag; Kumar, Bimlesh
2016-01-01
The present study investigates the probability density functions (PDFs) of two-dimensional turbulent velocity fluctuations, Reynolds shear stress (RSS) and conditional RSSs in a threshold channel, obtained by using the Gram-Charlier (GC) series. The GC series expansion has been used up to moments of order four to include the skewness and kurtosis. Experiments were carried out in a curvilinear cross section sand bed channel at threshold condition with uniform sand size of d50 = 0.418 mm. The results show that the PDF distributions of turbulent velocity fluctuations and RSS calculated theoretically from the GC series expansion match the PDFs obtained from the experimental data. The PDF distributions of the conditional RSSs related to ejections and sweeps are well represented by the GC series exponential distribution, except for a slight departure of the inward and outward interactions, which may be due to weaker events. This paper offers some new insights into the probabilistic mechanism of sediment transport, which can be helpful in sediment management and the design of curvilinear cross section mobile bed channels.
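A fourth-order Gram-Charlier (Type A) PDF augments the standard normal density with Hermite-polynomial corrections carrying the skewness and excess kurtosis. The sketch below is for a standardized variable with illustrative moment values, not the experimental fits.

```python
# Sketch of a Gram-Charlier (Type A) series PDF up to fourth order:
# f(x) = phi(x) * [1 + (g1/6) He3(x) + (g2/24) He4(x)], where g1 is the
# skewness and g2 the excess kurtosis (values here are illustrative).
import math

def gc_pdf(x, g1=0.3, g2=0.5):
    phi = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    he3 = x**3 - 3 * x              # probabilists' Hermite polynomial He3
    he4 = x**4 - 6 * x**2 + 3       # probabilists' Hermite polynomial He4
    return phi * (1.0 + g1 / 6.0 * he3 + g2 / 24.0 * he4)

# the Hermite correction terms integrate to zero against phi, so the
# series still integrates to ~1; crude Riemann-sum check over [-8, 8):
dx = 0.001
total = sum(gc_pdf(-8.0 + i * dx) * dx for i in range(16000))
```

Setting g1 = g2 = 0 recovers the standard normal density, which is the Gaussian baseline the measured fluctuations depart from.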
Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng
2013-01-01
New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance and the shortage of multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and according to a bivariate definition of dust storms, the joint probability distribution of severe dust storms was established using the observed data of maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. © 2012 Society for Risk Analysis.
Han Liwei
2014-07-01
Monitoring data on an earth-rockfill dam constitute a form of spatial data. Such data include much uncertainty owing to limitations in the measurement information, material parameters, load, geometry, initial conditions, boundary conditions and the calculation model, so the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertain transition between the qualitative concept and the quantitative description. An improved algorithm for the cloud probability distribution density, based on a backward cloud generator, was then proposed. This was used to convert parcels of accurate data into concepts that can be described by proper qualitative linguistic values. Such a qualitative description was expressed as the cloud numerical characteristics {Ex, En, He}, which represent the characteristics of all cloud drops. The algorithm was then applied to analyze the observation data of a piezometric tube in an earth-rockfill dam. Experimental results showed that the proposed algorithm is feasible: it revealed the variation pattern of the piezometric tube's water level and detected seepage damage in the dam body.
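One common variant of a backward cloud generator estimates the characteristics {Ex, En, He} directly from raw samples, without certainty-degree information. A hedged sketch of that baseline variant (the paper's improved algorithm may differ in detail):

```python
import math

def backward_cloud(samples):
    """Estimate the cloud numerical characteristics {Ex, En, He} from
    raw data.  This is one common certainty-degree-free variant of the
    backward cloud generator, shown for illustration only."""
    n = len(samples)
    ex = sum(samples) / n                               # expectation Ex
    abs_dev = sum(abs(x - ex) for x in samples) / n
    en = math.sqrt(math.pi / 2.0) * abs_dev             # entropy En
    s2 = sum((x - ex) ** 2 for x in samples) / (n - 1)  # sample variance
    he = math.sqrt(max(s2 - en * en, 0.0))              # hyper-entropy He
    return ex, en, he
```

The `max(..., 0.0)` guard handles the small-sample case where the variance estimate falls below En², which would otherwise make He imaginary.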
Becerril, Laura; Cappello, Annalisa; Galindo, Inés; Neri, Marco; Del Negro, Ciro
2013-05-01
The 2011 submarine eruption that took place in the proximity of El Hierro Island (Canary Islands, Spain) has raised the need to identify the most likely future emission zones even on volcanoes characterized by low frequency activity. Here, we propose a probabilistic method to build the susceptibility map of El Hierro, i.e. the spatial distribution of vent opening for future eruptions, based on the probabilistic analysis of volcano-structural data of the Island collected through new fieldwork measurements, bathymetric information, as well as analysis of geological maps, orthophotos and aerial photographs. These data have been divided into different datasets and converted into separate and weighted probability density functions, which were included in a non-homogeneous Poisson process to produce the volcanic susceptibility map. The most likely area to host new eruptions in El Hierro is in the south-western part of the West rift. High probability locations are also found in the Northeast and South rifts, and along the submarine parts of the rifts. This map represents the first effort to deal with the volcanic hazard at El Hierro and can be a support tool for decision makers in land planning, emergency measures and civil defense actions.
Helles, Glennie; Fonseca, Rasmus
2009-01-01
Predicting the three-dimensional structure of a protein from its amino acid sequence is currently one of the most challenging problems in bioinformatics. The internal structure of helices and sheets is highly recurrent and helps reduce the search space significantly. However, random coil segments make up nearly 40% of proteins, and they do not have any apparent recurrent patterns, which complicates the overall prediction accuracy of protein structure prediction methods. Luckily, previous work has indicated that coil segments are in fact not completely random in structure, and flanking residues do seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been...
LI Hai-Xia; CHENG Chuan-Fu
2011-01-01
We study the light scattering of an orthogonal anisotropic rough surface with a secondary most-probable slope distribution. It is found that the scattered intensity profiles have obvious secondary maxima, and in the direction perpendicular to the plane of incidence the secondary maxima are oriented along a curve on the observation plane, which is called the orientation curve. By numerical calculation of the scattering wave fields with the height data of the sample, it is validated that the secondary maxima are induced by the side face element, which constitutes the prismoid structure of the anisotropic surface. We derive the equation of the quadratic orientation curve. Experimentally, we construct a system for light scattering measurement using a CCD. The scattered intensity profiles are extracted from the images at different angles of incidence along the orientation curves. The experimental results conform to the theory.
EVALUATION OF THE PROBABILITY DISTRIBUTION OF PITTING CORROSION FATIGUE LIFE IN AIRCRAFT MATERIALS
王清远; N.KAWAGOISHI; Q.CHEN; R.M.PIDAPARTI
2003-01-01
Corrosion and fatigue properties of aircraft materials are known to have a considerable scatter due to the random nature of materials, loading, and environmental conditions. A probabilistic approach for predicting the pitting corrosion fatigue life has been investigated which captures the effect of the interaction of the cyclic load and corrosive environment and all stages of the corrosion fatigue process (i.e. pit nucleation and growth, pit-crack transition, short- and long-crack propagation). The probabilistic model investigated considers the uncertainties in the initial pit size, corrosion pitting current, and material properties due to the scatter found in the experimental data. Monte Carlo simulations were performed to define the failure probability distribution. Predicted cumulative distribution functions of fatigue life agreed reasonably well with the existing experimental data.
Andrade, Daniel
2012-01-01
We present a new method to propagate lower bounds on conditional probability distributions in conventional Bayesian networks. Our method is guaranteed to provide outer approximations of the exact lower bounds. A key advantage is that we can use any available algorithms and tools for Bayesian networks to represent and infer lower bounds. The new method yields results that are provably exact for trees with binary variables, and results that are competitive with existing approximations in credal networks for all other network structures. Our method is not limited to a specific kind of network structure, nor, in principle, to a specific kind of inference, but we restrict our analysis to prognostic inference in this article. The computational complexity is superior to that of other existing approaches.
Dai, Mi; Wang, Yun
2016-06-01
In order to obtain robust cosmological constraints from Type Ia supernova (SN Ia) data, we have applied Markov Chain Monte Carlo (MCMC) to SN Ia lightcurve fitting. We develop a method for sampling the resultant probability density distributions (pdfs) of the SN Ia lightcurve parameters in the MCMC likelihood analysis to constrain cosmological parameters, and validate it using simulated data sets. Applying this method to the `joint lightcurve analysis (JLA)' data set of SNe Ia, we find that sampling the SN Ia lightcurve parameter pdfs leads to cosmological parameters closer to those of a flat Universe with a cosmological constant, compared to the usual practice of using only the best-fitting values of the SN Ia lightcurve parameters. Our method will be useful in the use of SN Ia data for precision cosmology.
Arnaut, L R
2006-01-01
Using a TE/TM decomposition for an angular plane-wave spectrum of free random electromagnetic waves and matched boundary conditions, we derive the probability density function for the energy density of the vector electric field in the presence of a semi-infinite isotropic medium. The theoretical analysis is illustrated with calculations and results for good electric conductors and for a lossless dielectric half-space. The influence of the permittivity and conductivity on the intensity, random polarization, statistical distribution and standard deviation of the field is investigated, both for incident plus reflected fields and for refracted fields. External refraction is found to result in compression of the fluctuations of the random field.
Seto, Naoki
2014-01-01
We analytically discuss the probability distribution function (PDF) for inclinations of merging compact binaries whose gravitational waves are coherently detected by a network of ground-based interferometers. The PDF would be useful for studying the prospects of (1) simultaneously detecting electromagnetic signals (such as gamma-ray bursts) associated with binary mergers and (2) statistically constraining the related theoretical models from the actual observational data of multi-messenger astronomy. Our approach is similar to Schutz (2011), but we explicitly include the dependence on the polarization angles of the binaries, based on the concise formulation given in Cutler and Flanagan (1994). We find that the overall profiles of the PDFs are similar for any network composed of the second-generation detectors (Advanced LIGO, Advanced Virgo, KAGRA, LIGO-India). For example, 5.1% of detected binaries would have an inclination angle of less than 10 degrees, with at most 0.1% difference between the potential networks. A perturb...
Ossenkopf, Volker; Schneider, Nicola; Federrath, Christoph; Klessen, Ralf S
2016-01-01
Probability distribution functions (PDFs) of column densities are an established tool to characterize the evolutionary state of interstellar clouds. Using simulations, we show to what degree their determination is affected by noise, line-of-sight contamination, field selection, and the incomplete sampling in interferometric measurements. We solve the integrals that describe the convolution of a cloud PDF with contaminating sources and study the impact of missing information on the measured column density PDF. The effect of observational noise can be easily estimated and corrected for if the root mean square (rms) of the noise is known. For $\sigma_{noise}$ values below 40% of the typical cloud column density, $N_{peak}$, this involves almost no degradation of the accuracy of the PDF parameters. For higher noise levels and narrow cloud PDFs the width of the PDF becomes increasingly uncertain. A contamination by turbulent foreground or background clouds can be removed as a constant shield if the PDF of the c...
熊峻江; 武哲; 高镇同
2002-01-01
According to the traditional fatigue constant-life curve, the concept and the universal expression of the generalized fatigue constant-life curve were proposed. Then, on the basis of the optimization method of the correlation coefficient, the parameter estimation formulas were derived and the generalized fatigue constant-life curve with reliability level p was given. From the P-Sa-Sm curve, the two-dimensional probability distribution of the fatigue limit was obtained. Three sets of tests of LY11 CZ corresponding to different mean stresses were then carried out using the two-dimensional up-down method. Finally, the methods were used to analyze the test results, and it is found that results of high precision can be obtained.
The HI Probability Distribution Function and the Atomic-to-Molecular Transition in Molecular Clouds
Imara, Nia
2016-01-01
We characterize the column density probability distributions functions (PDFs) of the atomic hydrogen gas, HI, associated with seven Galactic molecular clouds (MCs). We use 21 cm observations from the Leiden/Argentine/Bonn Galactic HI Survey to derive column density maps and PDFs. We find that the peaks of the HI PDFs occur at column densities ranging from ~1-2$\times 10^{21}$ cm$^{-2}$ (equivalently, ~0.5-1 mag). The PDFs are uniformly narrow, with a mean dispersion of $\sigma_{HI}\approx 10^{20}$ cm$^{-2}$ (~0.1 mag). We also investigate the HI-to-H$_2$ transition towards the cloud complexes and estimate HI surface densities ranging from 7-16 $M_\odot$ pc$^{-2}$ at the transition. We propose that the HI PDF is a fitting tool for identifying the HI-to-H$_2$ transition column in Galactic MCs.
Probability distribution function and multiscaling properties in the Korean stock market
Lee, Kyoung Eun; Lee, Jae Woo
2007-09-01
We consider the probability distribution function (pdf) and the multiscaling properties of the index and the traded volume in the Korean stock market. We observe a power-law pdf in the fat-tail region for the return, volatility, traded volume, and changes of the traded volume. We also investigate the multifractality in the Korean stock market by multifractal detrended fluctuation analysis (MFDFA). We observe multiscaling behaviors for the index, return, traded volume, and changes of the traded volume. We apply the MFDFA method to randomly shuffled time series to assess the effects of autocorrelations. The multifractality originates mainly from the long-time correlations of the time series.
Analysis of Low Probability of Intercept (LPI) Radar Signals Using the Wigner Distribution
Gau, Jen-Yu
2002-09-01
The parameters of Low Probability of Intercept (LPI) radar signals are hard to identify using traditional periodogram signal processing techniques. Using the Wigner Distribution (WD), this thesis examines eight types of LPI radar signals. Signal-to-noise ratios of 0 dB and -6 dB are also investigated. The eight types of LPI radar signals examined include Frequency Modulated Continuous Wave (FMCW), Frank code, P1 code, P2 code, P3 code, P4 code, COSTAS frequency hopping and Phase Shift Keying/Frequency Shift Keying (PSK/FSK) signals. Binary Phase Shift Keying (BPSK) signals, although not used in modern LPI radars, are also examined to further illustrate the principal characteristics of the WD.
Binomial moments of the distance distribution and the probability of undetected error
Barg, A. [Lucent Technologies, Murray Hill, NJ (United States). Bell Labs.; Ashikhmin, A. [Los Alamos National Lab., NM (United States)
1998-09-01
In [1] K.A.S. Abdel-Ghaffar derives a lower bound on the probability of undetected error for unrestricted codes. The proof relies implicitly on the binomial moments of the distance distribution of the code. The authors use the fact that these moments count the size of subcodes of the code to give a very simple proof of the bound in [1] by showing that it is essentially equivalent to the Singleton bound. They discuss some combinatorial connections revealed by this proof. They also discuss some improvements of this bound. Finally, they analyze asymptotics. They show that an upper bound on the undetected error exponent that corresponds to the bound of [1] improves known bounds on this function.
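For a binary code used purely for error detection on a binary symmetric channel, the undetected-error probability discussed above is determined entirely by the code's weight (distance) distribution: an error pattern goes undetected exactly when it equals a nonzero codeword. A small sketch (the [3,2] even-weight code in the usage example is illustrative, not from the paper):

```python
def undetected_error_prob(n, weight_dist, p):
    """Undetected-error probability for a binary linear code of block
    length n on a binary symmetric channel with crossover probability p.
    `weight_dist` maps nonzero Hamming weight i to the codeword count
    A_i; each nonzero codeword of weight i occurs as the error pattern
    with probability p**i * (1-p)**(n-i)."""
    return sum(a * p**i * (1.0 - p)**(n - i)
               for i, a in weight_dist.items() if i > 0)
```

For the [3,2] single-parity-check code (A_2 = 3), this gives 3·p²·(1−p), and at p = 1/2 it reduces to the well-known value (M−1)/2ⁿ = 3/8.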
Muralisankar, S; Manivannan, A; Balasubramaniam, P
2015-09-01
The aim of this manuscript is to investigate the mean-square delay-dependent probability-distribution stability of neutral-type stochastic neural networks with time delays. The time delays are assumed to be interval time-varying and randomly occurring. Based on a new Lyapunov-Krasovskii functional and a stochastic analysis approach, a novel sufficient condition is obtained in the form of a linear matrix inequality such that the delayed stochastic neural networks are globally robustly asymptotically stable in the mean-square sense for all admissible uncertainties. Finally, the derived theoretical results are validated through numerical examples in which maximum allowable upper bounds are calculated for different lower bounds of the time delay.
The H I Probability Distribution Function and the Atomic-to-molecular Transition in Molecular Clouds
Imara, Nia; Burkhart, Blakesley
2016-10-01
We characterize the column-density probability distribution functions (PDFs) of the atomic hydrogen gas, H i, associated with seven Galactic molecular clouds (MCs). We use 21 cm observations from the Leiden/Argentine/Bonn Galactic H i Survey to derive column-density maps and PDFs. We find that the peaks of the H i PDFs occur at column densities in the range ~1-2 × 10^21 cm^-2 (equivalently, ~0.5-1 mag). The PDFs are uniformly narrow, with a mean dispersion of σ_HI ≈ 10^20 cm^-2 (~0.1 mag). We also investigate the H i-to-H2 transition toward the cloud complexes and estimate H i surface densities ranging from 7 to 16 M_⊙ pc^-2 at the transition. We propose that the H i PDF is a fitting tool for identifying the H i-to-H2 transition column in Galactic MCs.
Random numbers from the tails of probability distributions using the transformation method
Fulger, Daniel; Germano, Guido
2009-01-01
The speed of many one-line transformation methods for the production of, for example, Levy alpha-stable random numbers, which generalize Gaussian ones, and Mittag-Leffler random numbers, which generalize exponential ones, is very high and satisfactory for most purposes. However, for the class of decreasing probability densities fast rejection implementations like the Ziggurat by Marsaglia and Tsang promise a significant speed-up if it is possible to complement them with a method that samples the tails of the infinite support. This requires the fast generation of random numbers greater or smaller than a certain value. We present a method to achieve this, and also to generate random numbers within any arbitrary interval. We demonstrate the method showing the properties of the transform maps of the above mentioned distributions as examples of stable and geometric stable random numbers used for the stochastic solution of the space-time fractional diffusion equation.
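Restricting the inverse-CDF transformation method to a tail amounts to mapping a uniform variate drawn from [F(a), 1) through F⁻¹. An exponential-distribution sketch of the idea (the paper treats the harder Lévy-stable and geometric-stable cases; the function name and parameterization here are illustrative):

```python
import math
import random

def exp_tail_sample(lam, a, rng=random):
    """Draw X ~ Exponential(lam) conditioned on X > a by the
    transformation (inverse-CDF) method restricted to the tail:
    u is uniform on [F(a), 1) and X = F^{-1}(u) = -ln(1 - u)/lam."""
    lo = 1.0 - math.exp(-lam * a)      # F(a), the CDF at the cut point
    u = lo + (1.0 - lo) * rng.random() # uniform on [F(a), 1)
    return -math.log(1.0 - u) / lam
```

Sampling within an arbitrary interval (a, b) works the same way, with u drawn uniformly from (F(a), F(b)).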
Pinto, Pedro C
2010-01-01
We present a mathematical model for communication subject to both network interference and noise. We introduce a framework where the interferers are scattered according to a spatial Poisson process and operate asynchronously in a wireless environment subject to path loss, shadowing, and multipath fading. We consider both slow- and fast-varying interferer positions. The paper comprises two separate parts. In Part I, we determine the distribution of the aggregate network interference at the output of a linear receiver. We characterize the error performance of the link in terms of average and outage probabilities. The proposed model is valid for any linear modulation scheme (e.g., M-ary phase shift keying or M-ary quadrature amplitude modulation), and captures all the essential physical parameters that affect network interference. Our work generalizes the conventional analysis of communication in the presence of additive white Gaussian noise and fast fading, allowing the traditional results...
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient R package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities.
N. Nishimoto; S. Terae; M. Uesugi; K. Ogasawara; T. Sakurai
2008-01-01
Objectives: The objectives of this study were to investigate the transitional probability distribution of medical term boundaries between characters and to develop a parsing algorithm specifically for medical texts. Methods...
Size effect on strength and lifetime probability distributions of quasibrittle structures
Zdeněk P Bažant; Jia-Liang Le
2012-02-01
Engineering structures such as aircraft, bridges, dams, nuclear containments and ships, as well as computer circuits, chips and MEMS, should be designed for failure probability < $10^{-6}-10^{-7}$ per lifetime. The safety factors required to ensure it are still determined empirically, even though they represent much larger and much more uncertain corrections to deterministic calculations than do the typical errors of modern computer analysis of structures. The empirical approach is sufficient for perfectly brittle and perfectly ductile structures since the cumulative distribution function (cdf) of random strength is known, making it possible to extrapolate to the tail from the mean and variance. However, the empirical approach does not apply to structures consisting of quasibrittle materials, which are brittle materials with inhomogeneities that are not negligible compared to structure size. This paper presents a refined theory on the strength distribution of quasibrittle structures, which is based on the fracture mechanics of nanocracks propagating by activation-energy-controlled small jumps through the atomic lattice and an analytical model for the multi-scale transition of strength statistics. Based on the power law for creep crack growth rate and the cdf of material strength, the lifetime distribution of quasibrittle structures under constant load is derived. Both the strength and lifetime cdfs are shown to be size- and geometry-dependent. The theory predicts intricate size effects on both the mean structural strength and lifetime, the latter being much stronger. The theory is shown to match the experimentally observed systematic deviations of strength and lifetime histograms of industrial ceramics from the Weibull distribution.
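The Weibull baseline against which those quasibrittle histograms deviate is the classical weakest-link cdf, in which a volume ratio V/V0 produces the statistical size effect. A hedged sketch (the scale parameter `s0` and modulus `m` are illustrative symbols, not values from the paper):

```python
import math

def weibull_failure_prob(sigma, m, s0, size_ratio=1.0):
    """Classical (weakest-link) Weibull cdf of structural strength:
    P_f = 1 - exp(-(V/V0) * (sigma/s0)**m).  `size_ratio` = V/V0
    encodes the statistical size effect: a larger structure reaches a
    given failure probability at lower stress.  Illustrative baseline
    only; quasibrittle structures deviate from this cdf."""
    return 1.0 - math.exp(-size_ratio * (sigma / s0) ** m)
```

Setting the size ratio greater than one shifts the whole cdf toward lower stresses, which is the mean-strength size effect the paper refines for quasibrittle materials.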
Khodse, V.B.; Bhosle, N.B.
Amino sugars including bacterial biomarker muramic acid(Mur) were investigated in suspended particulate matter(SPM) to understand their distribution, origin, and biogeochemical cycling and the contribution of bacteria to particulate organic matter...
On the distribution of dark matter in galaxies: Quantum treatments
Argüelles, Carlos R.; Ruffini, Remo; Siutsou, Ivan; Fraga, Bernardo
2014-09-01
The problem of modeling the distribution of dark matter in galaxies in terms of equilibrium configurations of collisionless self-gravitating quantum particles is considered. We first summarize the pioneering model of a Newtonian self-gravitating Fermi gas in thermodynamic equilibrium developed by Ruffini and Stella (1983), which is shown to be a generalization of the King model for fermions. We further review the extension of the former model developed by Gao, Merafina and Ruffini (1990), done for any degree of fermion degeneracy at the center (θ_0), within general relativity. Finally, we present here for the first time the solutions of the density profiles and rotation curves corresponding to the model of Gao et al. Those solutions have a definite mass M_h and a circular velocity v_h at the halo radius r_h of the configurations, which are typical of spiral galaxies. This treatment allows us to determine a novel core-halo morphology for the dark-matter profiles, as well as a novel bound on the particle mass associated with those profiles.
On the distribution of dark matter in galaxies: quantum treatments
Arguelles, Carlos R.; Ruffini, Remo; Siutsou, Ivan [Sapienza Universita di Roma, Rome (Italy); ICRANet, P.zza della Repubblica, Pescara (Italy); Fraga, Bernardo [Sapienza Universita di Roma, Rome (Italy); Universite de Nice Sophia di Antipolis, Nice (France)
2014-09-15
The problem of modeling the distribution of dark matter in galaxies in terms of equilibrium configurations of collisionless self-gravitating quantum particles is considered. We first summarize the pioneering model of a Newtonian self-gravitating Fermi gas in thermodynamic equilibrium developed by Ruffini and Stella (1983), which is shown to be a generalization of the King model for fermions. We further review the extension of the former model developed by Gao, Merafina and Ruffini (1990), done for any degree of fermion degeneracy at the center (θ_0), within general relativity. Finally, we present here for the first time the solutions of the density profiles and rotation curves corresponding to the model of Gao et al. Those solutions have a definite mass M_h and a circular velocity v_h at the halo radius r_h of the configurations, which are typical of spiral galaxies. This treatment allows us to determine a novel core-halo morphology for the dark-matter profiles, as well as a novel bound on the particle mass associated with those profiles.
On the distribution of dark matter in galaxies: quantum treatments
Argüelles, Carlos R; Siutsou, Ivan; Fraga, Bernardo
2014-01-01
The problem of modeling the distribution of dark matter in galaxies in terms of equilibrium configurations of collisionless self-gravitating quantum particles is considered. We first summarize the pioneering model of a Newtonian self-gravitating Fermi gas in thermodynamic equilibrium developed by Ruffini and Stella (1983), which is shown to be the generalization of the King model for fermions. We further review the extension of the former model developed by Gao, Merafina and Ruffini (1990), done for any degree of fermion degeneracy at the center ($\theta_0$), within general relativity. Finally, we present here for the first time the solutions of the density profiles and rotation curves corresponding to the Gao et al. model, which have a definite mass $M_h$ and circular velocity $v_h$, at the halo radius $r_h$ of the configurations, typical of spiral galaxies. This treatment allows us to determine a novel core-halo morphology for the dark matter profiles, as well as a novel particle mass bound associated with ...
Discretising the velocity distribution for directional dark matter experiments
Kavanagh, Bradley J
2015-01-01
Dark matter (DM) direct detection experiments which are directionally-sensitive may be the only method of probing the full velocity distribution function (VDF) of the Galactic DM halo. We present an angular basis for the DM VDF which can be used to parametrise the distribution in order to mitigate astrophysical uncertainties in future directional experiments and extract information about the DM halo. This basis consists of discretising the VDF in a series of angular bins, with the VDF being only a function of the DM speed $v$ within each bin. In contrast to other methods, such as spherical harmonic expansions, the use of this basis allows us to guarantee that the resulting VDF is everywhere positive and therefore physical. We present a recipe for calculating the event rates corresponding to the discrete VDF for an arbitrary number of angular bins $N$ and investigate the discretisation error which is introduced in this way. For smooth, Standard Halo Model-like distribution functions, only $N=3$ angular bins ar...
Ye, S.; Sleep, B. E.; Chien, C.
2010-12-01
The probability distribution of biofilm thickness and the effect of biofilm growth on the permeability of saturated porous media were investigated in a two-dimensional sand-filled cell (55 cm wide x 45 cm high x 1.28 cm thick) under nutrient-rich conditions. Inoculation of the lower portion of the cell with a methanogenic culture and addition of methanol to the bottom of the cell led to biomass growth. Biomass distributions in the water and on the sand in the cell were measured by protein analysis. The biofilm distribution on the sand was observed by confocal laser scanning microscopy (CLSM). Permeability was measured by laboratory hydraulic tests. The biomass levels measured in water and on the sand increased with time and were highest at the bottom of the cell, where the biofilm on the sand was thickest. The biomass distribution on individual sand grains was not uniform. Statistical analysis of the CLSM images showed that biofilm thickness was a random variable with a normal distribution. The hydraulic tests demonstrated that the permeability after biofilm growth was on average 12% of the initial value. To investigate the spatial distribution of permeability in the two-dimensional cell, three models (Taylor, Seki, and Clement) were used to calculate the permeability of porous media with biofilm growth. Taylor's model (Taylor et al., 1990) predicted reductions in permeability of 2-5 orders of magnitude. Clement's model (Clement et al., 1996) predicted 3%-98% of the initial value. Seki's model (Seki and Miyazaki, 2001) could not be applied in this study. In conclusion, biofilm growth clearly decreased the permeability of the two-dimensional saturated porous media, although the reduction was much less than that estimated under one-dimensional conditions. Additionally, under nutrient-rich conditions in two-dimensional saturated porous media, Seki's model could not be applied, Taylor's model predicted larger reductions, and the results of
The probability distribution for non-Gaussianity estimators constructed from the CMB trispectrum
Smith, Tristan L
2012-01-01
Considerable recent attention has focused on the prospects to use the cosmic microwave background (CMB) trispectrum to probe the physics of the early universe. Here we evaluate the probability distribution function (PDF) for the standard estimator of the amplitude tau_nl of the CMB trispectrum both for the null hypothesis (i.e., for Gaussian maps with tau_nl = 0) and for maps with a non-vanishing trispectrum (|tau_nl|>0). We find these PDFs to be highly non-Gaussian in both cases. We also evaluate the variance with which the trispectrum amplitude can be measured as a function of its underlying value, tau_nl, and find a strong dependence of this variance on tau_nl. We also find that, given the highly non-Gaussian nature of the PDF, the variance does not effectively characterize the distribution. Detailed knowledge of these PDFs will therefore be imperative in order to properly interpret the implications of any given trispectrum measurement. For example, if a CMB experiment with a maximum multipole ...
A new probability distribution model of turbulent irradiance based on Born perturbation theory
(no author listed)
2010-01-01
The form of the PDF (probability density function) of irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak-turbulence regime, but theoretical descriptions of the strong- and whole-turbulence regimes remain controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (the Rice-Nakagami, exponential-Bessel and negative-exponential distributions) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence, and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which refutes the viewpoint that the Rice-Nakagami model is applicable only in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. A common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero, so it is considered to reflect Born perturbation theory exactly. Simulated results confirm the accuracy of this new model.
Probability distribution functions for ELM bursts in a series of JET tokamak discharges
Greenhough, J [Space and Astrophysics Group, Department of Physics, Warwick University, Coventry CV4 7AL (United Kingdom); Chapman, S C [Space and Astrophysics Group, Department of Physics, Warwick University, Coventry CV4 7AL (United Kingdom); Dendy, R O [Space and Astrophysics Group, Department of Physics, Warwick University, Coventry CV4 7AL (United Kingdom); Ward, D J [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom)
2003-05-01
A novel statistical treatment of the full raw edge localized mode (ELM) signal from a series of previously studied JET plasmas is tested. The approach involves constructing probability distribution functions (PDFs) for ELM amplitudes and time separations, and quantifying the fit between the measured PDFs and model distributions (Gaussian, inverse exponential) and Poisson processes. Uncertainties inherent in the discreteness of the raw signal require the application of statistically rigorous techniques to distinguish ELM data points from background, and to extrapolate peak amplitudes. The accuracy of PDF construction is further constrained by the relatively small number of ELM bursts (several hundred) in each sample. In consequence the statistical technique is found to be difficult to apply to low frequency (typically Type I) ELMs, so the focus is narrowed to four JET plasmas with high frequency (typically Type III) ELMs. The results suggest that there may be several fundamentally different kinds of Type III ELMing process at work. It is concluded that this novel statistical treatment can be made to work, may have wider applications to ELM data, and has immediate practical value as an additional quantitative discriminant between classes of ELMing behaviour.
Relativistic modeling of compact stars for anisotropic matter distribution
Maurya, S.K. [University of Nizwa, Department of Mathematical and Physical Sciences, College of Arts and Science, Nizwa (Oman)
2017-05-15
In this paper we solve Einstein's field equations for a spherically symmetric spacetime with an anisotropic matter distribution, assuming physically valid expressions for the metric function e^λ and the radial pressure p_r. We then discuss the physical properties of the model in detail, taking the radial pressure p_r to vanish at the boundary of the star. The physical analysis indicates that the model parameters, such as density, redshift, radial pressure, transverse pressure and anisotropy, are well behaved. We also obtain the mass and radius of our compact star, 2.29 M_⊙ and 11.02 km, respectively. The model obtained here is compatible with the mass and radius of the strange star PSR 1937+21. (orig.)
Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height?" and "What is the probability of (monetary) inflation exceeding 4% and the housing price index being below 110?" To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
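A classroom computation of the kind described can be sketched in a few lines. The height/weight means, standard deviations and correlation below are invented for illustration, not taken from the adolescent dataset mentioned above; the conditional-distribution formula itself is the standard bivariate-normal result.

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def conditional_weight_prob(lo, hi, height,
                            mu_h=65.0, sd_h=4.0,    # height in inches (assumed)
                            mu_w=130.0, sd_w=25.0,  # weight in pounds (assumed)
                            rho=0.5):               # assumed correlation
    """P(lo < weight < hi | height) under a bivariate normal model.

    For a bivariate normal, weight | height is normal with
      mean = mu_w + rho * (sd_w / sd_h) * (height - mu_h)
      sd   = sd_w * sqrt(1 - rho**2)
    """
    m = mu_w + rho * (sd_w / sd_h) * (height - mu_h)
    s = sd_w * math.sqrt(1.0 - rho ** 2)
    return norm_cdf(hi, m, s) - norm_cdf(lo, m, s)

# probability an adolescent of average height weighs between 120 and 140 lb
p = conditional_weight_prob(120.0, 140.0, height=65.0)
print(round(p, 3))
```

At the mean height the conditional mean is just mu_w, so the answer only reflects how the correlation shrinks the conditional spread.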
Burkhart, Blakesley; Lee, Min-Young; Murray, Claire E.; Stanimirović, Snezana
2015-10-01
The shape of the probability distribution function (PDF) of molecular clouds is an important ingredient for modern theories of star formation and turbulence. Recently, several studies have pointed out observational difficulties with constraining the low column density (i.e., A_V < 1) PDF using dust tracers. In order to constrain the shape and properties of the low column density PDF, we investigate the PDF of multiphase atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data and compare the PDF shape and properties to the total gas PDF and the N(H2) PDF. We find that the shape of the PDF in the atomic medium of Perseus is well described by a lognormal distribution and not by a power-law or bimodal distribution. The peak of the atomic gas PDF in and around Perseus lies at the HI-H2 transition column density for this cloud, past which the N(H2) PDF takes on a power-law form. We find that the PDF of the atomic gas is narrow, and at column densities larger than the HI-H2 transition, the HI rapidly depletes, suggesting that the HI PDF may be used to find the HI-H2 transition column density. We also calculate the sonic Mach number of the atomic gas by using HI absorption line data, which yield a median value of Ms = 4.0 for the CNM, while the HI emission PDF, which traces both the WNM and CNM, has a width more consistent with transonic turbulence.
The Distribution of Dark Matter in the Milky Way's Disk
Kuhlen, Michael; Guedes, Javiera; Madau, Piero
2013-01-01
We present an analysis of the effects of dissipational baryonic physics on the local dark matter (DM) distribution at the location of the Sun, with an emphasis on the consequences for direct detection experiments. We find that two distinct processes lead to a 30% enhancement of DM in the disk plane: the accretion and disruption of satellites, which results in a DM component with net angular momentum, and the contraction of baryons, which pulls DM into the disk plane without forcing it to co-rotate. The co-rotating dark disk in Eris is less massive than suggested by previous work, contributing only 9% of the local DM density. The speed distribution in Eris is broadened and shifted to higher speeds compared to its DM-only twin simulation ErisDark. At high speeds f(v) falls more steeply in Eris than in ErisDark or the Standard Halo Model (SHM), easing the tension between recent results from the CDMS-II and XENON100 experiments. The non-Maxwellian aspects of f(v) are still present, but much less pronounced in ...
Mathematical optimization of matter distribution for a planetary system configuration
Morozov, Yegor; Bukhtoyarov, Mikhail
2016-07-01
Planetary formation is largely a random process. When humanity reaches the point at which it can transform planetary systems for the purpose of interstellar life expansion, the optimal distribution of matter in a planetary system will determine its population and expansive potential. Maximizing a planetary system's carrying capacity and its potential for interstellar life expansion depends on planetary sizes, orbits, rotation, chemical composition and other vital parameters. The distribution of planetesimals needed to achieve the maximal carrying capacity of the planets during their life cycle, and the maximal potential to inhabit other planetary systems, must be calculated comprehensively. Moving large amounts of material from one planetary system to another is uneconomic because of the high amounts of energy and time required, and terraforming particular planets before the whole planetary system is configured might drastically decrease the potential habitability of the whole system. Thus the planetary system is the basic unit of calculation for sustaining a maximal overall population and expanding further. The mathematical model for optimizing the matter distribution of a planetary system configuration takes as input the observed map of material orbiting in the planetary system, with the orbit, mass, size and chemical composition of each body. The optimized output parameters are the sizes, masses, number and chemical composition of the planets, and the masses of the satellites required to produce tidal forces. Magnetic fields and planetary rotation are also crucial, but will be considered in future versions of this model. The optimization criterion is the maximal carrying capacity plus the maximal expansive potential of the planetary system, where maximal carrying capacity means the availability of essential life ingredients on the planetary surfaces, and maximal expansive potential means the availability of uranium and metals to build
Goldstein, Sheldon; Lebowitz, Joel L.; Mastrodonato, Christian; Tumulka, Roderich; Zanghì, Nino
2016-03-01
A quantum system (with Hilbert space H_1) entangled with its environment (with Hilbert space H_2) usually cannot be attributed a wave function, but only a reduced density matrix ρ_1. Nevertheless, there is a precise way of attributing to it a random wave function ψ_1, called its conditional wave function, whose probability distribution μ_1 depends on the entangled wave function ψ in H_1 ⊗ H_2 in the Hilbert space of system and environment together. It also depends on a choice of orthonormal basis of H_2, but in relevant cases, as we show, not very much. We prove several universality (or typicality) results about μ_1, e.g., that if the environment is sufficiently large then for every orthonormal basis of H_2, most entangled states ψ with given reduced density matrix ρ_1 are such that μ_1 is close to one of the so-called GAP (Gaussian adjusted projected) measures, GAP(ρ_1). We also show that, for most entangled states ψ from a microcanonical subspace (spanned by the eigenvectors of the Hamiltonian with energies in a narrow interval [E, E + δE]) and most orthonormal bases of H_2, μ_1 is close to GAP(tr_2 ρ_mc), with ρ_mc the normalized projection to the microcanonical subspace. In particular, if the coupling between the system and the environment is weak, then μ_1 is close to GAP(ρ_β), with ρ_β the canonical density matrix on H_1 at inverse temperature β = β(E). This provides the mathematical justification of our claim in Goldstein et al. (J Stat Phys 125:1193-1221, 2006) that GAP measures describe the thermal equilibrium distribution of the wave function.
Probability Distribution Function of a Forced Passive Tracer in the Lower Stratosphere
(no author listed)
2007-01-01
The probability distribution function (PDF) of a passive tracer, forced by a "mean gradient", is studied. First, we take two theoretical approaches, the Lagrangian and conditional-closure formalisms, to study the PDFs of such an externally forced passive tracer. Then, we carry out numerical simulations for an idealized random flow on a sphere and for European Centre for Medium-Range Weather Forecasts (ECMWF) stratospheric winds, to test whether the mean-gradient model can be applied to stratospheric tracer mixing in midlatitude surf zones, in which a weak poleward zonal-mean gradient is maintained by tracer leakage through polar and tropical mixing barriers, and whether the PDFs of tracer fluctuations in midlatitudes are consistent with the theoretical predictions. The numerical simulations show that when diffusive dissipation is balanced by the mean-gradient forcing, the PDF in the random flow and the Southern Hemisphere PDFs in ECMWF winds show time-invariant exponential tails, consistent with theoretical predictions. In the Northern Hemisphere the PDFs also exhibit non-Gaussian tails, but these tails are not consistent with the theoretical expectations. The long-term behavior of the PDF tails of the forced tracer is compared with that of a decaying tracer; the PDF tails of the decaying tracer are found to be time-dependent, evolving toward shapes flatter than exponential.
Schneider, N; Csengeri, T; Klessen, R; Federrath, C; Tremblin, P; Girichidis, P; Bontemps, S; Andre, Ph
2014-01-01
Column density maps of molecular clouds are one of the most important observables in the context of molecular cloud and star formation (SF) studies. With Herschel it is now possible to determine rather precisely the column density of dust, which is the best tracer of the bulk of material in molecular clouds. However, line-of-sight (LOS) contamination from fore- or background clouds can lead to an overestimate of the dust emission of molecular clouds, in particular for distant clouds. This implies values that are too high for column density and mass, and a misleading interpretation of probability distribution functions (PDFs) of the column density. In this paper, we demonstrate by using observations and simulations how LOS contamination affects the PDF. We apply a first-order approximation (removing a constant level) to the molecular clouds of Auriga and Maddalena (low-mass star-forming) and of Carina and NGC 3603 (both high-mass SF regions). In perfect agreement with the simulations, we find that the PDFs become broader, ...
Turbulence-Induced Relative Velocity of Dust Particles III: The Probability Distribution
Pan, Liubin; Scalo, John
2014-01-01
Motivated by its important role in the collisional growth of dust particles in protoplanetary disks, we investigate the probability distribution function (PDF) of the relative velocity of inertial particles suspended in turbulent flows. Using the simulation from our previous work, we compute the relative velocity PDF as a function of the friction timescales, tau_p1 and tau_p2, of two particles of arbitrary sizes. The friction times of the particles included in the simulation range from 0.1 tau_eta to 54 T_L, with tau_eta and T_L the Kolmogorov time and the Lagrangian correlation time of the flow, respectively. The relative velocity PDF is generically non-Gaussian, exhibiting fat tails. For a fixed value of tau_p1, the PDF is fattest for equal-size particles (tau_p2 ~ tau_p1), and becomes thinner for both tau_p2 < tau_p1 and tau_p2 > tau_p1. Defining f as the friction-time ratio of the smaller particle to the larger one, we find that, at a given f in 1/2>T_L). These features are successfully explained by the Pan & Padoan model. Usin...
Exact probability distributions of selected species in stochastic chemical reaction networks.
López-Caamal, Fernando; Marquez-Lago, Tatiana T
2014-09-01
Chemical reactions are discrete, stochastic events. As such, the species' molecular numbers can be described by an associated master equation. However, handling such an equation may become difficult due to the large size of reaction networks. A commonly used approach to forecast the behaviour of reaction networks is to perform computational simulations of such systems and analyse their outcome statistically. This approach, however, might require high computational costs to provide accurate results. In this paper we opt for an analytical approach to obtain the time-dependent solution of the Chemical Master Equation for selected species in a general reaction network. When the reaction networks are composed exclusively of zeroth- and first-order reactions, this analytical approach significantly alleviates the computational burden required by simulation-based methods. By building upon these analytical solutions, we analyse a general monomolecular reaction network with an arbitrary number of species to obtain the exact marginal probability distribution for selected species. Additionally, we study two particular topologies of monomolecular reaction networks, namely (i) an unbranched chain of monomolecular reactions with and without synthesis and degradation reactions and (ii) a circular chain of monomolecular reactions. We illustrate our methodology and alternative ways to use it for non-linear systems by analysing a protein autoactivation mechanism. Later, we compare the computational load required for the implementation of our results and a pure computational approach to analyse an unbranched chain of monomolecular reactions. Finally, we study calcium ion gates in the sarco/endoplasmic reticulum mediated by ryanodine receptors.
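For the simplest zeroth/first-order network, synthesis plus degradation of a single species, the exact time-dependent solution of the master equation is a Poisson distribution. The rate constants and time below are illustrative choices, not values from the paper.

```python
import math

def birth_death_pmf(n, t, k=2.0, g=0.5):
    """Exact marginal P(N = n) at time t for the network
        0 -> X  (zeroth-order synthesis, rate k)
        X -> 0  (first-order degradation, rate g per molecule)
    starting from zero molecules. For this network the chemical master
    equation has the closed-form solution N(t) ~ Poisson(lam(t)) with
        lam(t) = (k / g) * (1 - exp(-g * t)).
    """
    lam = (k / g) * (1.0 - math.exp(-g * t))
    return math.exp(-lam) * lam ** n / math.factorial(n)

t = 3.0
pmf = [birth_death_pmf(n, t) for n in range(60)]   # truncation at 60 is safe here
mean = sum(n * p for n, p in enumerate(pmf))
print(round(sum(pmf), 6), round(mean, 3))
```

The mean relaxes toward the steady-state value k/g = 4 as t grows, which is a quick consistency check on the formula.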
ANNz2: Photometric Redshift and Probability Distribution Function Estimation using Machine Learning
Sadeh, I.; Abdalla, F. B.; Lahav, O.
2016-10-01
We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister & Lahav, which now includes generation of full probability distribution functions (PDFs). ANNz2 utilizes multiple machine learning methods, such as artificial neural networks and boosted decision/regression trees. The objective of the algorithm is to optimize the performance of the photo-z estimation, to properly derive the associated uncertainties, and to produce both single-value solutions and PDFs. In addition, estimators are made available, which mitigate possible problems of non-representative or incomplete spectroscopic training samples. ANNz2 has already been used as part of the first weak lensing analysis of the Dark Energy Survey, and is included in the experiment's first public data release. Here we illustrate the functionality of the code using data from the tenth data release of the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey. The code is available for download at http://github.com/IftachSadeh/ANNZ.
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface-wave focusing, the near-surface probability distributions are highly skewed to the right and heavy-tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as the lognormal, Gumbel, Fréchet, log-logistic, and Pareto distributions, which are potentially suited to describe highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution, where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution, covering all irradiance values smaller than the 90th percentile, can be described with reasonable accuracy (i.e., within 20%) by a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean, like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
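The skewness and excess-kurtosis diagnostics used above are easy to reproduce on synthetic data. The lognormal and Gaussian parameters below are illustrative stand-ins for the near-surface and ~10 m regimes, not values fitted to the measurements.

```python
import math
import random

def skew_exkurt(xs):
    """Sample coefficients of skewness and excess kurtosis."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

random.seed(1)
# Lognormal samples mimic the skewed, heavy-tailed near-surface fluctuations;
# sigma = 0.5 is an illustrative choice.
shallow = [random.lognormvariate(0.0, 0.5) for _ in range(20000)]
# Near-Gaussian samples mimic the ~10 m depth regime.
deep = [random.gauss(1.0, 0.1) for _ in range(20000)]

s1, k1 = skew_exkurt(shallow)
s2, k2 = skew_exkurt(deep)
print(s1, k1)  # clearly positive
print(s2, k2)  # both close to zero
```

The contrast between the two sets mirrors the paper's depth trend: strongly positive skewness/kurtosis near the surface, values near zero at depth.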
Ren-Jie He; Zhen-Yu Yang
2012-01-01
Differential evolution (DE) has become a very popular and effective global optimization algorithm in the area of evolutionary computation. In spite of many advantages such as conceptual simplicity, high efficiency and ease of use, DE has two main components, i.e., the mutation scheme and parameter control, which significantly influence its performance. In this paper we intend to improve the performance of DE by using carefully considered strategies for both of these components. We first design an adaptive mutation scheme, which adaptively makes use of the bias of superior individuals when generating new solutions. Although introducing such a bias is not a new idea, existing methods often use heuristic rules to control the bias. They can hardly maintain the appropriate balance between exploration and exploitation during the search process, because the preferred bias is often problem- and evolution-stage-dependent. Instead of using any fixed rule, a novel strategy is adopted in the new adaptive mutation scheme to adjust the bias dynamically based on the identified local fitness landscape captured by the current population. As for the other component, parameter control, we propose a mechanism that uses the Lévy probability distribution to adaptively control the scale factor F of DE. For every mutation in each generation, an Fi is produced from one of four different Lévy distributions according to their historical performance. With the adaptive mutation scheme and parameter control using the Lévy distribution as the main components, we present a new DE variant called Lévy DE (LDE). Experimental studies were carried out on a broad range of benchmark functions in global numerical optimization. The results show that LDE is very competitive, and that both of the two main components have contributed to its overall performance. The scalability of LDE is also discussed by conducting experiments on some selected benchmark functions with dimensions from 30 to 200.
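A common way to draw heavy-tailed values in evolutionary algorithms is Mantegna's algorithm for Lévy-stable-like steps. The sketch below is one illustrative reading of "Lévy-distributed scale factor control": it takes the absolute step, clipped to a practical range, as F. It is not the exact LDE rule, which selects among four Lévy distributions according to their historical performance.

```python
import math
import random

def mantegna_levy(beta=1.5):
    """One heavy-tailed step via Mantegna's algorithm (beta in (0, 2);
    smaller beta gives heavier tails)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def sample_F(beta=1.5, lo=0.1, hi=0.9):
    """DE scale factor: |Levy step| clipped to [lo, hi].
    The clipping range is an assumed practical choice."""
    return min(hi, max(lo, abs(mantegna_levy(beta))))

random.seed(7)
Fs = [sample_F() for _ in range(1000)]
print(min(Fs), max(Fs))
```

Most draws land in a moderate range, but the heavy tail occasionally produces large mutation steps, which is the exploration benefit the paper attributes to Lévy-based parameter control.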
Analysis on Multi-dimensional Beta Probability Distribution Function
潘高田; 梁帆; 郭齐胜; 黄一斌
2011-01-01
Based on small-sample truncated sequential test theory, multi-dimensional Beta probability distribution functions arise in hit-accuracy tests of weapon systems against aerial targets. This paper analyses the properties of the multi-dimensional Beta probability distribution function and computes part of the two-dimensional Beta probability distribution function values. This research plays an important role in the field of weapon-system hit-accuracy testing.
Ben Issaid, Chaouki
2016-06-01
The Gamma-Gamma distribution has recently emerged in a number of applications, ranging from modeling scattering and reverberation in sonar and radar systems to modeling atmospheric turbulence in wireless optical channels. In this respect, assessing the outage probability achieved by some diversity techniques over this kind of channel is of major practical importance. In many circumstances, this is intimately related to the difficult question of analyzing the statistics of a sum of Gamma-Gamma random variables. Answering this question is not a simple matter, essentially because outage probabilities encountered in practice are often very small, and hence the use of classical Monte Carlo methods is not a reasonable choice. This is the main motivation of the present work. In particular, this paper proposes a new approach to estimate the left tail of the sum of Gamma-Gamma variates. More specifically, we propose a mean-shift importance sampling scheme that efficiently evaluates the outage probability of L-branch maximum ratio combining diversity receivers over Gamma-Gamma fading channels. The proposed estimator satisfies the well-known bounded relative error criterion, a well-desired property characterizing the robustness of importance sampling schemes, in both the identically and the non-identically distributed independent cases. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via selected numerical simulations.
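The baseline the paper improves on can be sketched directly: a naive Monte Carlo estimate of the L-branch MRC outage probability over Gamma-Gamma fading. The turbulence parameters, branch count and threshold below are illustrative; for genuinely small outage probabilities the hit count of this estimator collapses to zero, which is what motivates the mean-shift importance sampling.

```python
import random

def gamma_gamma(alpha=2.0, beta=2.0):
    """One Gamma-Gamma variate: product of two independent unit-mean
    Gamma variables (alpha, beta are illustrative turbulence parameters)."""
    return (random.gammavariate(alpha, 1.0 / alpha) *
            random.gammavariate(beta, 1.0 / beta))

def outage_naive(gamma_th, L=4, n=200_000):
    """Naive Monte Carlo estimate of P(sum of L branch gains < gamma_th),
    i.e. the outage probability of L-branch MRC over Gamma-Gamma fading."""
    random.seed(3)
    hits = sum(1 for _ in range(n)
               if sum(gamma_gamma() for _ in range(L)) < gamma_th)
    return hits / n

p = outage_naive(gamma_th=1.0)
print(p)
```

With a threshold well below the mean combined gain the estimate is already small; pushing the threshold lower makes hits so rare that the relative error of this naive scheme blows up, unlike the bounded-relative-error importance-sampling estimator of the paper.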
Jinhua Xu
Visual saliency is the perceptual quality that makes some items in visual scenes stand out from their immediate contexts. Visual saliency plays important roles in natural vision in that saliency can direct eye movements, deploy attention, and facilitate tasks like object detection and scene understanding. A central unsolved issue is: What features should be encoded in the early visual cortex for detecting salient features in natural scenes? To explore this important issue, we propose a hypothesis that visual saliency is based on efficient encoding of the probability distributions (PDs) of visual variables in specific contexts in natural scenes, referred to as context-mediated PDs in natural scenes. In this concept, computational units in the model of the early visual system do not act as feature detectors but rather as estimators of the context-mediated PDs of a full range of visual variables in natural scenes, which directly give rise to a measure of visual saliency of any input stimulus. To test this hypothesis, we developed a model of the context-mediated PDs in natural scenes using a modified algorithm for independent component analysis (ICA) and derived a measure of visual saliency based on these PDs estimated from a set of natural scenes. We demonstrated that visual saliency based on the context-mediated PDs in natural scenes effectively predicts human gaze in free-viewing of both static and dynamic natural scenes. This study suggests that the computation based on the context-mediated PDs of visual variables in natural scenes may underlie the neural mechanism in the early visual cortex for detecting salient features in natural scenes.
Characterisation of seasonal flood types according to timescales in mixed probability distributions
Fischer, Svenja; Schumann, Andreas; Schulte, Markus
2016-08-01
When flood statistics are based on annual maximum series (AMS), the sample often contains flood peaks that differ in their genesis. If the ratios among event types change over the range of observations, the extrapolation of a probability distribution function (pdf) can be dominated by a majority of events that belong to a certain flood type. If this type is not typical for extraordinarily large extremes, such an extrapolation of the pdf is misleading. To avoid this breach of the assumption of homogeneity, seasonal models were developed that distinguish between winter and summer floods. We show that a distinction between summer and winter floods is not always sufficient if seasonal series include events with different geneses. Here, we differentiate floods by their timescales into groups of long and short events, and present a statistical method for such a distinction of events. To demonstrate its applicability, timescales for winter and summer floods in a German river basin were estimated. It is shown that summer floods can be separated into two main groups, but in our study region the sample of winter floods consists of at least three different flood types. The pdfs of the two groups of summer floods are combined via a new mixing model. This model accounts for the fact that information about parallel events based on their maximum values only is incomplete, because some of the realisations are overlaid; a statistical method resulting in an amendment of the statistical parameters is proposed. The application in a German case study demonstrates the advantages of the new model, with specific emphasis on flood types.
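A two-component mixture of the kind used for the summer floods can be sketched as follows. The Gumbel parameters and the mixing weight are invented for illustration, and the paper's actual model additionally amends the parameters to account for overlaid parallel events.

```python
import math

def gumbel_pdf(x, mu, beta):
    """Gumbel (EV1) density, a common flood-frequency model."""
    z = (x - mu) / beta
    return math.exp(-(z + math.exp(-z))) / beta

def mixed_pdf(x, w=0.6, mu1=50.0, b1=10.0, mu2=120.0, b2=30.0):
    """Mixture density for two flood types (e.g. short- vs long-timescale
    summer events); all parameter values here are illustrative."""
    return w * gumbel_pdf(x, mu1, b1) + (1.0 - w) * gumbel_pdf(x, mu2, b2)

# crude trapezoidal check that the mixture integrates to ~1
h = 0.5
grid = [i * h for i in range(-200, 2000)]          # covers -100 .. 1000
area = sum(0.5 * h * (mixed_pdf(a) + mixed_pdf(a + h)) for a in grid)
print(round(area, 4))
```

Because the two components peak at different discharges, the mixture's upper tail is controlled by the long-timescale component, which is exactly the extrapolation issue the seasonal/type-wise separation addresses.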
Schneider, N.; Bontemps, S.; Motte, F.; Ossenkopf, V.; Klessen, R. S.; Simon, R.; Fechtenbaum, S.; Herpin, F.; Tremblin, P.; Csengeri, T.; Myers, P. C.; Hill, T.; Cunningham, M.; Federrath, C.
2016-03-01
The probability distribution function of column density (N-PDF) serves as a powerful tool to characterise the various physical processes that influence the structure of molecular clouds. Studies that use extinction maps or H2 column-density maps (N) that are derived from dust show that star-forming clouds can best be characterised by lognormal PDFs for the lower N range and a power-law tail for higher N, which is commonly attributed to turbulence and self-gravity and/or pressure, respectively. While PDFs from dust cover a large dynamic range (typically N ~ 1020-24 cm-2 or Av~ 0.1-1000), PDFs obtained from molecular lines - converted into H2 column density - potentially trace more selectively different regimes of (column) densities and temperatures. They also enable us to distinguish different clouds along the line of sight through using the velocity information. We report here on PDFs that were obtained from observations of 12CO, 13CO, C18O, CS, and N2H+ in the Cygnus X North region, and make a comparison to a PDF that was derived from dust observations with the Herschel satellite. The PDF of 12CO is lognormal for Av ~ 1-30, but is cut for higher Av because of optical depth effects. The PDFs of C18O and 13CO are mostly lognormal up to Av ~ 1-15, followed by excess up to Av ~ 40. Above that value, all CO PDFs drop, which is most likely due to depletion. The high density tracers CS and N2H+ exhibit only a power law distribution between Av ~ 15 and 400, respectively. The PDF from dust is lognormal for Av ~ 3-15 and has a power-law tail up to Av ~ 500. Absolute values for the molecular line column densities are, however, rather uncertain because of abundance and excitation temperature variations. If we take the dust PDF at face value, we "calibrate" the molecular line PDF of CS to that of the dust and determine an abundance [CS]/[H2] of 10-9. The slopes of the power-law tails of the CS, N2H+, and dust PDFs are -1.6, -1.4, and -2.3, respectively, and are thus consistent
Boots, Nam Kyoo; Shahabuddin, Perwez
2001-01-01
This paper deals with estimating small tail probabilities of thesteady-state waiting time in a GI/GI/1 queue with heavy-tailed (subexponential) service times. The problem of estimating infinite horizon ruin probabilities in insurance risk processes with heavy-tailed claims can be transformed into th
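The quantity being estimated can be sketched with the Lindley recursion for the GI/GI/1 waiting time. The Pareto service distribution, exponential interarrivals and load below are illustrative choices, and the paper's point is precisely that such naive simulation becomes impractical when the tail probability is very small.

```python
import random

def waiting_time_tail(threshold, n=200_000, rho=0.8, alpha=2.5):
    """Crude Monte Carlo estimate of P(W > threshold) for the steady-state
    waiting time W of a GI/GI/1 queue, via the Lindley recursion
        W_{k+1} = max(0, W_k + S_k - A_k).
    Service times S_k are Pareto on [1, inf) with shape alpha (heavy-tailed,
    subexponential); interarrival times A_k are exponential with mean chosen
    so the offered load is rho. All parameter values are illustrative."""
    random.seed(11)
    s_mean = alpha / (alpha - 1.0)      # mean of the Pareto service time
    a_mean = s_mean / rho               # interarrival mean for load rho
    w, hits = 0.0, 0
    for _ in range(n):
        s = random.paretovariate(alpha)           # service time
        a = random.expovariate(1.0 / a_mean)      # interarrival time
        w = max(0.0, w + s - a)
        hits += w > threshold
    return hits / n

p_tail = waiting_time_tail(5.0)
print(p_tail)
```

For moderate thresholds this works, but as the threshold grows the hits become dominated by rare huge service times, and the naive estimator's relative error degrades; that failure mode is what the paper's method addresses.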
Bel, J.; Branchini, E.; Di Porto, C.; Cucciati, O.; Granett, B. R.; Iovino, A.; de la Torre, S.; Marinoni, C.; Guzzo, L.; Moscardini, L.; Cappi, A.; Abbas, U.; Adami, C.; Arnouts, S.; Bolzonella, M.; Bottini, D.; Coupon, J.; Davidzon, I.; De Lucia, G.; Fritz, A.; Franzetti, P.; Fumana, M.; Garilli, B.; Ilbert, O.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; Marulli, F.; McCracken, H. J.; Paioro, L.; Polletta, M.; Pollo, A.; Schlagenhaufer, H.; Scodeggio, M.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zanichelli, A.; Burden, A.; Marchetti, A.; Mellier, Y.; Nichol, R. C.; Peacock, J. A.; Percival, W. J.; Phleps, S.; Wolk, M.
2016-04-01
We compare three methods to measure the count-in-cell probability density function of galaxies in a spectroscopic redshift survey. From this comparison we find that, when the sampling is low (the average number of objects per cell is around unity), it is necessary to use a parametric method to model the galaxy distribution. We used a set of mock catalogues of VIPERS to verify whether we were able to reconstruct the cell-count probability distribution once the observational strategy is applied. We find that, in the simulated catalogues, the probability distribution of galaxies is better represented by a Gamma expansion than by a skewed log-normal distribution. Finally, we correct the cell-count probability distribution function for the angular selection effect of the VIMOS instrument and study the redshift and absolute-magnitude dependence of the underlying galaxy density function in VIPERS from redshift 0.5 to 1.1. We find a very weak evolution of the probability density distribution function, which is well approximated by a Gamma distribution, independently of the chosen tracers. Based on observations collected at the European Southern Observatory, Cerro Paranal, Chile, using the Very Large Telescope under programmes 182.A-0886 and partly 070.A-9007. Also based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. The VIPERS web site is http://www.vipers.inaf.it/
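As a toy illustration of a counts-in-cells measurement (not the VIPERS pipeline), the sketch below builds a mock catalogue from an uncorrelated lognormal-Poisson model, a deliberate simplification with no spatial clustering between cells, and histograms counts on a square grid; all function names and parameter values are illustrative assumptions.

```python
import math
import random

def counts_in_cells(positions, box, n_cells):
    """Histogram point counts on an n_cells x n_cells grid of square
    cells covering a box of side `box`."""
    counts = [0] * (n_cells * n_cells)
    for x, y in positions:
        i = min(int(x / box * n_cells), n_cells - 1)
        j = min(int(y / box * n_cells), n_cells - 1)
        counts[i * n_cells + j] += 1
    return counts

def sample_lognormal_poisson(n_cells, mean_n, sigma, rng):
    """Mock counts: each cell gets an independent lognormal intensity
    (no cell-to-cell correlations -- a simplification) and the galaxy
    count is then Poisson-sampled from that intensity."""
    # choose mu so that the mean intensity equals mean_n
    mu = math.log(mean_n) - 0.5 * sigma * sigma
    counts = []
    for _ in range(n_cells * n_cells):
        lam = math.exp(rng.gauss(mu, sigma))
        # Poisson sampling by multiplicative inversion (fine for small lam)
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        counts.append(k)
    return counts
```

Because the intensity fluctuates from cell to cell, the count variance exceeds the Poisson value (variance > mean), which is the qualitative signature that parametric models such as the Gamma expansion are designed to capture.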
Buchin, K Kevin; Kostitsyna, I Irina; Löffler, M; Silveira, RI
2014-01-01
Let $p$ and $q$ be two imprecise points, given as probability density functions on $\mathbb R^2$, and let $\cal R$ be a set of $n$ line segments (obstacles) in $\mathbb R^2$. We study the problem of approximating the probability that $p$ and $q$ can see each other; that is, that the segment connecting $p$ and $q$ does not cross any segment of $\cal R$. To solve this problem, we approximate each density function by a weighted set of polygons; a novel approach to dealing with probability densit...
Li, Hanshan; Lei, Zhiyong
2013-01-01
To improve projectile coordinate measurement precision in a fire measurement system, this paper introduces the optical fiber coding fire measurement method and principle, sets up the corresponding measurement model, and analyzes coordinate errors by the differential method. To study the projectile coordinate position distribution, the mathematical-statistics hypothesis-testing method is used to analyze its distribution law; the firing dispersion and the probability of a projectile hitting the object center are also studied. The results show that, at the given significance level, an exponential distribution is a reasonable fit for the projectile position distribution. Experiments and calculations show that the optical fiber coding fire measurement method is scientific and feasible, and can yield accurate projectile coordinate positions.
Calculation of ruin probabilities for a dense class of heavy tailed distributions
Bladt, Mogens; Nielsen, Bo Friis; Samorodnitsky, Gennady
2015-01-01
In this paper, we propose a class of infinite-dimensional phase-type distributions with finitely many parameters as models for heavy tailed distributions. The class of finite-dimensional phase-type distributions is dense in the class of distributions on the positive reals and may hence approximat...
ZHANG Yi-Xin; CANG Ji
2009-01-01
Effects of atmospheric turbulence tilt, defocus, astigmatism and coma aberrations on the orbital angular momentum measurement probability of photons propagating in the weak turbulence regime are modeled with the Rytov approximation. By considering the resulting wave as a superposition of angular momentum eigenstates, the orbital angular momentum measurement probabilities of the transmitted digit are presented. Our results show that the effect of turbulent tilt aberration on the orbital angular momentum measurement probabilities of photons is the largest among these four kinds of aberrations. As the aberration order increases, the effects of turbulence aberrations on the measurement probabilities of orbital angular momentum generally decrease, whereas the effect of turbulence defocus can be ignored. For tilt aberration, as the difference between the measured orbital angular momentum and the original orbital angular momentum increases, the orbital angular momentum measurement probability decreases.
Examining the Tails of Probability Distributions Created Using Uncertainty Methods: A Case Study
Kang, M.; Thomson, N. R.; Sykes, J. F.
2006-12-01
Environmental management decisions require an understanding of all possible outcomes, especially those with a low likelihood of occurrence; however, despite this need, emphasis has been placed on the mean rather than on extreme outcomes. Typically in groundwater contaminant transport problems, parameter estimates are obtained using automated parameter estimation packages (e.g., PEST) for a given conceptual model. The resulting parameter estimates and covariance information are used to generate Monte Carlo or Latin Hypercube realizations. Our observations indicate that simulations using parameters from the tails of the corresponding probability distributions often fail to sufficiently replicate field-based observations. This stems from the fact that the input parameters governing Monte Carlo type uncertainty analysis methods are based on the mean. In order to improve the quality of the realizations at the tails, the Dynamically-Dimensioned Search-Uncertainty Analysis (DDS-UA) method is adopted. This approach uses the Dynamically-Dimensioned Search (DDS) algorithm, which is designed to find multiple local minima, and a pseudo-likelihood function. To test the robustness of this methodology, we applied it to a contaminant transport problem involving TCE contamination due to releases from the Lockformer Company Facility in Lisle, Illinois. Contamination has been observed in the Silurian dolomite aquifer underlying the facility, which served as a supply of drinking water. Dissolved TCE is assumed to migrate in a predominantly vertically downward direction through the overburden that underlies the Lockformer site and then migrate horizontally in the underlying aquifer. The model is solved using a semi-analytical solution of the mass conservation equation. The parameter estimation process is complicated by the fact that a concentration level equal to or greater than the maximum contaminant level must be observed at specified locations. Penalty functions
Williams, Michael S; Ebel, Eric D
2014-11-18
The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
The Distribution of Dark Matter in the Milky Way's Disk
Pillepich, Annalisa; Kuhlen, Michael; Guedes, Javiera; Madau, Piero
2014-04-01
We present an analysis of the effects of dissipational baryonic physics on the local dark matter (DM) distribution at the location of the Sun, with an emphasis on the consequences for direct detection experiments. Our work is based on a comparative analysis of two cosmological simulations with identical initial conditions of a Milky Way halo, one of which (Eris) is a full hydrodynamic simulation and the other (ErisDark) is a DM-only one. We find that in Eris two distinct processes lead to a 30% enhancement of DM in the disk plane at the location of the Sun: the accretion and disruption of satellites resulting in a DM component with net angular momentum, and the contraction of baryons pulling the DM into the disk plane without forcing it to co-rotate. Owing to its particularly quiescent merger history for dark halos of Milky Way mass, the co-rotating dark disk in Eris is less massive than what has been suggested by previous work, contributing only 9% of the local DM density. Yet, since the simulation results in a realistic Milky Way analog galaxy, its DM halo provides a plausible alternative to the Maxwellian standard halo model (SHM) commonly used in direct detection analyses. The speed distribution in Eris is broadened and shifted to higher speeds, compared to its DM-only twin simulation ErisDark. At high speeds f(v) falls more steeply in Eris than in ErisDark or the SHM, easing the tension between recent results from the CDMS-II and XENON100 experiments. The non-Maxwellian aspects of f(v) are still present, but much less pronounced in Eris than in the DM-only runs. The weak dark disk increases the time-averaged scattering rate by only a few percent at low recoil energies. On the high velocity tail, however, the increase in typical speeds due to baryonic contraction results in strongly enhanced mean scattering rates compared to ErisDark, although they are still suppressed compared to the SHM. Similar trends are seen regarding the amplitude of the annual modulation
Hong-fu Guo
2017-01-01
Particle size and distribution play an important role in ignition. The size and distribution of the cyclotetramethylene tetranitramine (HMX) particles were investigated with a Malvern MS2000 laser particle size analyzer before experiment and calculation. The mean particle size is 161 μm; the minimum and maximum sizes are 80 μm and 263 μm, respectively. The distribution function is approximately quadratic. Based on the distribution of micron-scale explosive particles, a microscopic model is established to describe the ignition of HMX particles under a drop weight. Both the temperature of the contact zones and the ignition probability of the powder explosive can be predicted. The calculated results show that the temperature of the contact zones between the particles and the drop-weight surface increases faster and reaches higher values than that of the contact zones between two neighboring particles. For HMX particles, with all other conditions kept constant, if the drop height is less than 0.1 m, the ignition probability is close to 0. When the drop heights are 0.2 m and 0.3 m, the ignition probabilities are 0.27 and 0.64, respectively, whereas when the drop height is more than 0.4 m, the ignition probability is close to 0.82. The calculated and experimental curves are reasonably close to each other, which indicates that our model has a certain degree of rationality.
Lee, Moon Ho; Dudin, Alexander; Shaban, Alexy; Pokhrel, Subash Shree; Ma, Wen Ping
Formulae required for accurate approximate calculation of transition probabilities of the embedded Markov chain for single-server queues of the GI/M/1, GI/M/1/K, M/G/1, and M/G/1/K type with heavy-tailed lognormal distribution of inter-arrival or service time are given.
O' Rourke, Patrick Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-27
The purpose of this report is to provide the reader with an understanding of how a Monte Carlo neutron transport code was written, developed, and evolved to calculate the probability distribution functions (PDFs) and their moments for the neutron number at a final time as well as the cumulative fission number, along with introducing several basic Monte Carlo concepts.
Eriksson, S.H.; Free, S. L.; Thom, M; Symms, M. R.; Martinian, L.; Duncan, J.S.; Sisodiya, S M
2009-01-01
Voxel-based morphometry (VBM) is commonly used to study systematic differences in brain morphology from patients with various disorders, usually by comparisons with control subjects. It has often been suggested, however, that VBM is also sensitive to variations in composition in grey matter. The nature of the grey matter changes identified with VBM is still poorly understood. The aim of the current study was to determine whether grey matter histopathological measurements of neuronal tissue or...
RUNS TEST FOR A CIRCULAR DISTRIBUTION AND A TABLE OF PROBABILITIES
of the well-known Wald-Wolfowitz runs test for a distribution on a straight line. The primary advantage of the proposed test is that it minimizes the number of assumptions on the theoretical distribution.
Yadav, C.; Thomas, R. G.; Mohanty, A. K.; Kapoor, S. S.
2015-07-01
The presence of various fissionlike reactions in heavy-ion induced reactions is a major hurdle on the path to laboratory synthesis of heavy and super-heavy nuclei. It is known that the cross section for forming a heavy evaporation residue in fusion reactions depends on three factors: the capture cross section, the probability of compound nucleus formation PCN, and the survival probability of the compound nucleus against fission. Because PCN is difficult to estimate theoretically, owing to its complex dependence on several parameters, attempts have been made in the past to deduce it from fission fragment anisotropy data. In the present work, the fragment anisotropy data for a number of heavy-ion reactions are analyzed, and it is found that deducing PCN from the anisotropy data also requires knowledge of the ratio of the relaxation time of the K degree of freedom to the pre-equilibrium fission time.
Clumps and streams in the local dark matter distribution.
Diemand, J; Kuhlen, M; Madau, P; Zemp, M; Moore, B; Potter, D; Stadel, J
2008-08-01
In cold dark matter cosmological models, structures form and grow through the merging of smaller units. Numerical simulations have shown that such merging is incomplete; the inner cores of haloes survive and orbit as 'subhaloes' within their hosts. Here we report a simulation that resolves such substructure even in the very inner regions of the Galactic halo. We find hundreds of very concentrated dark matter clumps surviving near the solar circle, as well as numerous cold streams. The simulation also reveals the fractal nature of dark matter clustering: isolated haloes and subhaloes contain the same relative amount of substructure and both have cusped inner density profiles. The inner mass and phase-space densities of subhaloes match those of recently discovered faint, dark-matter-dominated dwarf satellite galaxies, and the overall amount of substructure can explain the anomalous flux ratios seen in strong gravitational lenses. Subhaloes boost gamma-ray production from dark matter annihilation by factors of 4 to 15 relative to smooth galactic models. Local cosmic ray production is also enhanced, typically by a factor of 1.4 but by a factor of more than 10 in one per cent of locations lying sufficiently close to a large subhalo. (These estimates assume that the gravitational effects of baryons on dark matter substructure are small.).
Issaid, Chaouki ben
2017-01-26
The Gamma-Gamma distribution has recently emerged in a number of applications, ranging from modeling scattering and reverberation in sonar and radar systems to modeling atmospheric turbulence in wireless optical channels. In this respect, assessing the outage probability achieved by some diversity techniques over this kind of channel is of major practical importance. In many circumstances, this is related to the difficult question of analyzing the statistics of a sum of Gamma-Gamma random variables. Answering this question is not a simple matter, essentially because outage probabilities encountered in practice are often very small, and hence the use of classical Monte Carlo methods is not a reasonable choice. This lies behind the main motivation of the present work. In particular, this paper proposes a new approach to estimate the left tail of the sum of Gamma-Gamma variates. More specifically, we propose robust importance sampling schemes that efficiently evaluate the outage probability of diversity receivers over Gamma-Gamma fading channels. The proposed estimators satisfy the well-known bounded relative error criterion for both maximum ratio combining and equal gain combining. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via selected numerical simulations.
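The idea of estimating a small left-tail probability of a sum by sampling from a tilted proposal and reweighting can be illustrated with a simplified stand-in. This sketch is not the paper's estimator: it uses plain Gamma variates instead of Gamma-Gamma variates and a scale-shrinking proposal, so that the answer can be cross-checked exactly (a sum of i.i.d. Gammas with common scale is again Gamma). All parameter values are illustrative.

```python
import math
import random

def left_tail_is(n_vars, shape, scale, gamma_th, shrink, n_samples, seed=1):
    """Importance-sampling estimate of P(X1 + ... + Xn <= gamma_th) for
    i.i.d. Gamma(shape, scale) variates.  The proposal shrinks the scale
    so that samples concentrate in the rare left-tail region; each sample
    is reweighted by the exact likelihood ratio f/g."""
    rng = random.Random(seed)
    q_scale = scale * shrink
    # log of the constant factor of the likelihood ratio over all n variates
    log_const = n_vars * shape * math.log(q_scale / scale)
    rate_diff = 1.0 / q_scale - 1.0 / scale
    total = 0.0
    for _ in range(n_samples):
        s = sum(rng.gammavariate(shape, q_scale) for _ in range(n_vars))
        if s <= gamma_th:
            total += math.exp(log_const + rate_diff * s)
    return total / n_samples

def gamma_cdf(a, x):
    """Regularized lower incomplete gamma P(a, x) by power series; used
    only to cross-check the estimate."""
    if x <= 0.0:
        return 0.0
    term = 1.0 / math.gamma(a + 1.0)
    acc = term
    for n in range(1, 300):
        term *= x / (a + n)
        acc += term
    return (x ** a) * math.exp(-x) * acc
```

With 4 variates of shape 2 and scale 1 and threshold 2, the target probability is about 1.1e-3, so a naive estimator of the same sample size would see only a handful of hits, while the shrunk proposal places a large fraction of its samples inside the rare event.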
Mahanti, P.; Robinson, M. S.; Boyd, A. K.
2013-12-01
Craters ~20 km in diameter and larger significantly shaped the lunar landscape. The statistics of the slope distributions on their walls and floors dominate the overall slope-distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and defining lunar surface trafficability [1-4]. Earlier experimental studies of the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo-era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies), where the true nature of the slope probability distribution was not discernible at baselines smaller than a kilometer [2,3,5]. Accordingly, historical modeling of lunar surface slope probability distributions for applications such as scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution [1,2,5,6]). With the advent of high-resolution, high-precision topographic models of the Moon [7,8], slopes in lunar craters can now be obtained at baselines as small as 6 meters, allowing unprecedented multi-scale (multiple-baseline) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines [9]. In this work, we extend the analysis from a probability distribution modeling point of view with NAC DEMs to characterize the slope statistics for the floors and walls of the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was
Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei
2014-04-01
Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the logistic map, to generate pseudo-random numbers mapped to the design variables for global optimization. Many existing studies have indicated that a COA can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs from a new perspective: the probability distribution property and search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents are compared, where BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of a COA. To achieve high efficiency, it is recommended to adopt a chaotic map that generates sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
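The two quantities the abstract highlights, the PDF of a chaotic sequence and its Lyapunov exponent, can be estimated numerically for the classic logistic map at full chaos (r = 4). This is a standard textbook illustration, not the paper's code: the invariant density is 1/(pi*sqrt(x(1-x))), strongly peaked near 0 and 1 (hence not uniform), and the Lyapunov exponent is ln 2.

```python
import math

def logistic_stats(x0=0.123456, n=200000, burn=1000):
    """Iterate the fully chaotic logistic map x -> 4x(1-x), estimating
    (a) occupation fractions that probe the invariant density
        rho(x) = 1 / (pi * sqrt(x * (1 - x))), and
    (b) the Lyapunov exponent as the ergodic average of
        log|f'(x)| = log|4 - 8x|  (theoretical value: ln 2)."""
    x = x0
    for _ in range(burn):        # discard the transient
        x = 4.0 * x * (1.0 - x)
    below_half = edge = middle = 0
    lyap = 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        below_half += x < 0.5
        edge += x < 0.1          # near the density peak at 0
        middle += 0.45 <= x < 0.55
        lyap += math.log(abs(4.0 - 8.0 * x))
    return below_half / n, edge / n, middle / n, lyap / n
```

The analytic CDF (2/pi)*arcsin(sqrt(x)) predicts a fraction 0.205 of iterates below 0.1 but only 0.064 in the band [0.45, 0.55), so the edge bins collect far more visits than the central bin, which is exactly the non-uniformity the paper's PDF analysis quantifies for different maps.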
Fang Zheng
2013-04-01
Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms has high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilage in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed using Fisher's linear discriminant analysis, a support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion provided a total classification accuracy of 86.67% and an area (Az) of 0.9096 under the receiver operating characteristic curve, superior to the results obtained with either Fisher's linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). These results demonstrate the merits of the bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for the analysis of knee joint VAG signals.
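The combination of kernel density estimation with a maximal-posterior decision rule can be sketched in a few lines. This is a minimal one-dimensional, synthetic-data analogue of the approach described above (the paper works with bivariate VAG features); all data and parameter values here are illustrative assumptions.

```python
import math
import random

def gauss_kde(samples, bandwidth):
    """Return a Gaussian kernel density estimate p(x) built from 1-D samples."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return pdf

def max_posterior_classifier(class_samples, priors, bandwidth=0.3):
    """Build a maximal-posterior-probability decision rule from per-class
    training samples: classify x as argmax_k prior_k * p_k(x)."""
    pdfs = [gauss_kde(s, bandwidth) for s in class_samples]
    def classify(x):
        posteriors = [prior * pdf(x) for prior, pdf in zip(priors, pdfs)]
        return posteriors.index(max(posteriors))
    return classify
```

With two well-separated synthetic classes ("normal" centred at 0, "abnormal" at 3, unit spread), the rule recovers the obvious decision boundary; in practice the bandwidth would be tuned and the densities would be estimated over the chosen feature space.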
1998-02-01
html [March 17, 1998]. DeGroot, Morris H., Optimal Statistical Decisions, New York: McGraw-Hill, 1970. Fudenberg, Drew, and Jean Tirole, Game Theory... probabilities of occurrence. The expected utility approach was originally developed by von Neumann and Morgenstern, and is described, for example, in DeGroot
Asmussen, Søren; Albrecher, Hansjörg
The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle, and the connection to other applied probability areas, like queueing theory. In this substantially...
Mestres-Missé, Anna; Trampel, Robert; Turner, Robert; Kotz, Sonja A
2016-04-01
A key aspect of optimal behavior is the ability to predict what will come next. To achieve this, we must have a fairly good idea of the probability of occurrence of possible outcomes. This is based both on prior knowledge about a particular or similar situation and on immediately relevant new information. One question that arises is: when considering converging prior probability and external evidence, is the most probable outcome selected, or does the brain represent degrees of uncertainty, even highly improbable ones? Using functional magnetic resonance imaging, the current study explored these possibilities by contrasting words that differ in their probability of occurrence, namely, unbalanced ambiguous words and unambiguous words. Unbalanced ambiguous words have a strong frequency-based bias towards one meaning, while unambiguous words have only one meaning. The current results reveal larger activation in lateral prefrontal and insular cortices in response to dominant ambiguous compared to unambiguous words even when prior and contextual information biases one interpretation only. These results suggest a probability distribution, whereby all outcomes and their associated probabilities of occurrence, even if very low, are represented and maintained.
Panpan Zhao
2017-05-01
This study investigates the sensitivity and uncertainty of hydrological drought frequency and severity in the Weihe Basin, China, during 1960-2012, using six commonly used univariate probability distributions and three Archimedean copulas to fit the marginal and joint distributions of drought characteristics. The Anderson-Darling method is used to test the goodness-of-fit of the univariate model, and the Akaike information criterion (AIC) is applied to select the best distribution and copula functions. The results demonstrate that there is a very strong correlation between drought duration and drought severity at the three stations. The drought return period varies depending on the selected marginal distributions and copula functions and, with an increase of the return period, the differences become larger. In addition, the estimated return periods (both co-occurrence and joint) from the best-fitted copulas are the closest to those from the empirical distribution. Therefore, it is critical to select the appropriate marginal distribution and copula function to model hydrological drought frequency and severity. The results of this study can not only help drought investigators select a suitable probability distribution and copula function, but are also useful for regional water resource management. However, a few limitations remain in this study, such as the assumption of stationarity of the runoff series.
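One common Archimedean choice, the Gumbel copula, can be fitted by inverting Kendall's tau, and the fitted copula then yields joint ("AND") return periods for duration-severity pairs. The sketch below is a generic moment-type recipe with hypothetical data, not the study's actual fitting procedure (which compares several copulas by AIC).

```python
import math

def kendall_tau(xs, ys):
    """Sample Kendall's tau (simple O(n^2) version; ties contribute zero)."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def fit_gumbel_theta(xs, ys):
    """Moment-type fit of the Gumbel copula parameter via the relation
    tau = 1 - 1/theta (requires tau >= 0)."""
    return 1.0 / (1.0 - kendall_tau(xs, ys))

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u,v) = exp(-((-ln u)^theta + (-ln v)^theta)^(1/theta))."""
    a = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-a ** (1.0 / theta))

def joint_return_period(u, v, theta, mu=1.0):
    """'AND' return period: mean interarrival time mu divided by the
    probability that both variables exceed their u- and v-quantiles."""
    p_and = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_and
</n```

For strongly dependent duration-severity data (tau near 1, as the study reports), theta is large and the joint return period of simultaneous extremes is much shorter than independence would predict, which is why the copula choice matters for risk estimates.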
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
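The flavour of such a computation can be shown for the two-interval forced-choice case, where the optimal "pick the larger observation" rule (valid under a monotone likelihood ratio) gives Pc as the integral of the signal density times the noise CDF, evaluated here on a discrete grid. This is a hedged stand-in for the article's more general formula and MATLAB code, checked against the Gaussian closed form Phi(d'/sqrt(2)).

```python
import math

def percent_correct_2afc(pdf_noise, pdf_signal, lo, hi, steps=20000):
    """Maximum proportion correct in a 2AFC task for the 'larger
    observation wins' rule: Pc = integral f_s(x) * F_n(x) dx, computed
    with a midpoint rule; the noise CDF is accumulated on the fly."""
    dx = (hi - lo) / steps
    cdf_n = 0.0
    pc = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        pn = pdf_noise(x) * dx
        # use the mid-bin value of the noise CDF
        pc += pdf_signal(x) * dx * (cdf_n + 0.5 * pn)
        cdf_n += pn
    return pc

def gauss_pdf(mu, sigma):
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return lambda x: c * math.exp(-0.5 * ((x - mu) / sigma) ** 2)
```

As in the article, the same discretized routine also handles non-Gaussian densities: for uniform noise on [0, 1] and uniform signal on [0.5, 1.5] the exact answer is 0.875, and the grid computation reproduces it, provided the sampling resolution is fine enough relative to the density's features.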
Edward Gąsiorek
2014-11-01
The use of different calculation methods to compute the standardized precipitation index (SPI) results in various approximations. Methods based on the normal distribution and its transformations, as well as on the gamma distribution, give similar results and may be used interchangeably, whereas the lognormal distribution fitting method is significantly discrepant, especially for extreme values of SPI. It is therefore unclear which method gives the distribution best fitted to empirical data. The aim of this study is to rank the above-mentioned methods according to their degree of approximation to empirical data from the Observatory of Agro- and Hydrometeorology in Wrocław-Swojec for the years 1964-2009.
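The gamma-based variant of the SPI calculation can be sketched as follows: fit a gamma distribution to the precipitation record, map each value through the fitted CDF, and convert that probability to a standard normal deviate. This is a generic sketch (method-of-moments fit, no zero-precipitation correction), not the study's exact procedure; the data values are illustrative.

```python
import math
from statistics import NormalDist, fmean, variance

def _reg_gamma_p(a, x):
    """Regularized lower incomplete gamma P(a, x) via its power series."""
    if x <= 0.0:
        return 0.0
    term = 1.0 / math.gamma(a + 1.0)
    acc = term
    for n in range(1, 300):
        term *= x / (a + n)
        acc += term
    return (x ** a) * math.exp(-x) * acc

def make_spi(precip):
    """Fit a gamma distribution to a precipitation record by the method
    of moments and return a function mapping precipitation to SPI (an
    equiprobability transform onto the standard normal)."""
    m = fmean(precip)
    v = variance(precip)
    shape = m * m / v
    scale = v / m
    std_normal = NormalDist()
    def spi(x):
        p = _reg_gamma_p(shape, x / scale)
        # clamp away from 0 and 1 so inv_cdf stays finite
        p = min(max(p, 1e-9), 1.0 - 1e-9)
        return std_normal.inv_cdf(p)
    return spi
```

Because the transform is monotone, SPI preserves the ranking of wet and dry periods; the discrepancies the study examines arise only from how well the chosen parametric family fits the empirical CDF, which matters most in the tails.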
Sébastien Hélie
2007-09-01
This paper presents a graphical way of interpreting effect sizes when more than two groups are involved in a statistical analysis. The method uses noncentral distributions to specify the alternative hypothesis, so that statistical power can be computed directly. The principle is illustrated using the chi-squared distribution and the F distribution. Examples of chi-squared and ANOVA statistical tests are provided to further illustrate the point. It is concluded that power analyses are an essential part of statistical analysis, and that using noncentral distributions provides an argument in favour of using a factorial ANOVA over multiple t tests.
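The noncentral route to power can be sketched for the chi-squared case: under the alternative, the test statistic follows a noncentral chi-squared distribution, whose CDF is a Poisson mixture of central chi-squared CDFs, and power is the mass beyond the critical value. A hedged sketch, not the paper's code; the critical value 3.8415 used below is the standard upper 5% point of chi-squared with 1 degree of freedom.

```python
import math

def _reg_gamma_p(a, x):
    """Regularized lower incomplete gamma P(a, x); the leading factor is
    kept in log space so large degrees of freedom stay finite."""
    if x <= 0.0:
        return 0.0
    log_lead = a * math.log(x) - x - math.lgamma(a + 1.0)
    term = acc = 1.0
    for n in range(1, 500):
        term *= x / (a + n)
        acc += term
        if term < acc * 1e-16:
            break
    return math.exp(log_lead) * acc

def central_chi2_cdf(x, k):
    return _reg_gamma_p(k / 2.0, x / 2.0)

def noncentral_chi2_cdf(x, k, lam, terms=200):
    """Noncentral chi-squared CDF as a Poisson(lam/2) mixture of central
    chi-squared CDFs with k + 2j degrees of freedom."""
    half = lam / 2.0
    weight = math.exp(-half)  # Poisson weight at j = 0
    acc = 0.0
    for j in range(terms):
        acc += weight * central_chi2_cdf(x, k + 2 * j)
        weight *= half / (j + 1)
    return acc

def chi2_test_power(k, lam, crit):
    """Power of a chi-squared test: P(X > crit) for X ~ noncentral chi2(k, lam)."""
    return 1.0 - noncentral_chi2_cdf(crit, k, lam)
```

Setting the noncentrality to zero recovers the significance level itself, and increasing it traces out the power curve that the paper proposes to read graphically from the overlap of the central and noncentral distributions.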
The distribution of the dark matter in galaxies as the imprint of its Nature
Frigerio Martins, Christiane
2009-03-01
The standard framework within which cosmological measurements are confronted and interpreted nowadays, called Lambda Cold Dark Matter, presents a Universe dominated by unknown forms of energy and matter. My thesis is devoted to investigating the distribution of dark matter in galaxies, and addresses the fact that the local universe, the small objects that orbit galaxies and the galaxy cores, turns out to be a marvelous laboratory for examining the nature of dark matter and the fundamental physics involved in structure formation and evolution. I develop tests, based on mass modeling of rotation curves, for the validation of dark matter models on galactic scales. These tests have been applied to the phenomenology of the cusp vs. core controversy, and to the phenomenon of non-Keplerian rotation curves as a modification of the laws of gravity. I further investigate the properties and scaling laws of dark matter halos. My conclusion is that galactic observations provide strong imprints of the nature of dark matter.
Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou
2010-01-01
Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple...... this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems....
Goldberg, Samuel
1960-01-01
Excellent basic text covers set theory, probability theory for finite sample spaces, the binomial theorem, probability distributions, means, standard deviations, the probability function of the binomial distribution, and more. Includes 360 problems, with answers provided for half of them.
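The binomial probability function covered in texts like this one is simple enough to compute directly; a minimal stdlib-only Python sketch (the function names here are ours, for illustration):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ Binomial(n, p): C(n, k) * p^k * (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binomial_mean_sd(n: int, p: float):
    """Mean n*p and standard deviation sqrt(n*p*(1-p))."""
    return n * p, (n * p * (1 - p)) ** 0.5

# Ten fair coin tosses: probability of exactly five heads is C(10,5)/2^10.
p5 = binomial_pmf(5, 10, 0.5)
mean, sd = binomial_mean_sd(10, 0.5)
```

The probabilities over k = 0..n sum to one, as the binomial theorem guarantees.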
On the distribution of dark matter in clusters of galaxies
Sand, David J.
2006-07-01
The goal of this thesis is to provide constraints on the dark matter density profile in galaxy clusters by developing and combining different techniques. The work is motivated by the fact that a precise measurement of the logarithmic slope of the dark matter on small scales provides a powerful test of the Cold Dark Matter paradigm for structure formation, where numerical simulations suggest a density profile ρ_DM ∝ r^(-1) or steeper in the innermost regions. We have obtained deep spectroscopy of gravitational arcs and the dominant brightest cluster galaxy in six carefully chosen galaxy clusters. Three of the clusters have both radial and tangential gravitational arcs, while the other three display only tangential arcs. We analyze the stellar velocity dispersion of the brightest cluster galaxies in conjunction with axially symmetric lens models to jointly constrain the dark and baryonic mass profiles. For the radial arc systems we find the inner dark matter density profile is consistent with ρ_DM ∝ r^(-β), with ⟨β⟩ = [Special characters omitted.] (68% CL). Likewise, an upper limit on β is found for the tangential arc sample. Finally, we present a more elaborate two-dimensional lens model of the cluster MS2137 using a newly upgraded gravitational lensing code. (Abstract shortened by UMI.)
Furbish, David J.; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan L.
2016-01-01
We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
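The maximum-entropy argument above can be illustrated numerically: among non-negative distributions with a fixed mean, the exponential attains the largest differential entropy. A hedged sketch using closed-form entropies (the gamma family is used here only as a convenient comparison set, not as part of the original analysis):

```python
import numpy as np
from scipy.special import gammaln, digamma

def exp_entropy(mu: float) -> float:
    """Differential entropy of an Exponential with mean mu: 1 + ln(mu)."""
    return 1.0 + np.log(mu)

def gamma_entropy(k: float, mu: float) -> float:
    """Differential entropy of Gamma(shape k, scale mu/k), same mean mu:
    k + ln(theta) + ln Gamma(k) + (1 - k) psi(k), theta = mu / k."""
    theta = mu / k
    return k + np.log(theta) + gammaln(k) + (1.0 - k) * digamma(k)

mu = 1.0
# Every gamma with the same mean but shape k != 1 has lower entropy than
# the exponential (the k = 1 case), consistent with Jaynes's principle.
entropies = {k: gamma_entropy(k, mu) for k in (0.5, 2.0, 5.0)}
```

The exponential is thus the unbiased choice when only the mean (e.g. of travel times) is constrained.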
Fractal distributions of dark matter and gas in the MareNostrum Universe
Gaite, Jose
2008-01-01
CONTEXT: Large simulations of dark matter and gas structure formation allow us to separate the pure gravitational dynamics from other processes and, hence, to better compare them with observations. AIMS: Our objective is to analyse the recent MareNostrum simulation from a new perspective, regarding the geometry of the dark matter and gas distributions in it. We intend to find the fractal geometry of the dark matter and to determine whether the gas distribution is fractal as well. If it is, the question is whether or not the gas distribution is nonetheless biased with respect to the dark matter. METHODS: We use the methods of multifractal geometry, in particular, an improved method of coarse multifractal analysis based on counts in cells. To determine the gas biasing, we use statistical methods: we use the cross-correlation coefficient and we develop a Bayesian analysis connected with information theory. We also employ entropic measures to characterize both distributions further than the multifractal analysis. RESUL...
Setty, M.G.A.P.; Rao, Ch.M.
Phosphate, organic matter and calcium carbonate content in five sediment cores (three from the outer shelf, one from the slope, and one from the basin) from the Arabian Sea have been determined. The distribution pattern indicates their close genetic...
Krishna, M.S.; Naidu, S.A.; Subbaiah, Ch.V.; Sarma, V.V.S.S.; Reddy, N.P.C.
The sources and distribution of organic matter (OM) in surface sediments of the eastern continental margin of India, including the region influenced by river discharge, were investigated using content, molar C:N ratios and stable isotopes of carbon...
Bussy-Virat, C. D.; Ridley, A. J.
2016-10-01
Abrupt transitions from slow to fast solar wind represent a concern for the space weather forecasting community. They may cause geomagnetic storms that can eventually affect systems in orbit and on the ground. Therefore, the probability distribution function (PDF) model was improved to predict enhancements in the solar wind speed. New probability distribution functions allow for the prediction of the peak amplitude and the time to the peak while providing an interval of uncertainty on the prediction. It was found that 60% of the positive predictions were correct, while 91% of the negative predictions were correct, and 20% to 33% of the peaks in the speed were found by the model. This represents a considerable improvement upon the first version of the PDF model. A direct comparison with the Wang-Sheeley-Arge model shows that the PDF model is quite similar, except that it leads to fewer false positive predictions and misses fewer events, especially when the peak reaches very high speeds.
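The quoted percentages of correct positive and negative predictions, and of peaks found, are standard contingency-table rates. A minimal sketch of how such rates could be computed from paired forecast/observation records (the function and variable names are ours, not from the PDF model):

```python
def forecast_rates(predicted, observed):
    """Verification rates for binary event forecasts.

    predicted, observed: equal-length sequences of 0/1 flags
    (event forecast / event occurred).  Returns the fraction of
    positive forecasts that verified, the fraction of negative
    forecasts that verified, and the fraction of events found.
    """
    pairs = list(zip(predicted, observed))
    pos = [o for p, o in pairs if p]        # outcomes of positive forecasts
    neg = [o for p, o in pairs if not p]    # outcomes of negative forecasts
    n_events = sum(1 for o in observed if o)
    correct_pos = sum(pos) / len(pos) if pos else float('nan')
    correct_neg = sum(1 for o in neg if not o) / len(neg) if neg else float('nan')
    found = (sum(1 for p, o in pairs if p and o) / n_events
             if n_events else float('nan'))
    return correct_pos, correct_neg, found
```

Applied to a long record of forecasts, these three rates correspond to the 60%, 91%, and 20-33% figures reported above.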
Modeling the distribution of dark matter and its connection to galaxies
Mao, Yao-Yuan
2016-06-01
Despite the mysterious nature of dark matter and dark energy, the Lambda-Cold Dark Matter (LCDM) model provides a reasonably accurate description of the evolution of the cosmos and the distribution of galaxies. Today, we are set to tackle more specific and quantitative questions about the galaxy formation physics, the nature of dark matter, and the connection between the dark and the visible components. The answers to these questions are however elusive, because dark matter is not directly observable, and various unknowns lie between what we can observe and what we can calculate. Hence, mathematical models that bridge the observable and the calculable are essential for the study of modern cosmology. The aim of my thesis work is to improve existing models and also to construct new models for various aspects of the dark matter distribution, as dark matter structures the cosmic web and forms the nests of visible galaxies. Utilizing a series of cosmological dark matter simulations which span a wide dynamical range and a statistical sample of zoom-in simulations which focus on individual dark matter halos, we develop models for the spatial and velocity distribution of dark matter particles, the abundance of dark substructures, and the empirical connection between dark matter and galaxies. As more precise observational results become available, more accurate models are then required to test the consistency between these results and the LCDM predictions. For all the models we investigate, we find that the formation history of dark matter halos always plays a crucial role. Neglecting the halo formation history would result in systematic biases when we interpret various observational results, including dark matter direct detection experiments, the detection of dark substructures with strong-lensed systems, the large-scale spatial clustering of galaxies, and the abundance of dwarf galaxies. Rectifying this, our work will enable us to fully utilize the complementary power of
Abdelrahman, Mahmoud A. E.; Sohaly, M. A.
2017-08-01
This work deals with the construction of exact traveling wave solutions for the nonlinear Schrödinger equation by the new Riccati-Bernoulli sub-ODE method. Additionally, we apply this method to study random solutions by finding the probability distribution function when a coefficient in the problem is a random variable. The travelling wave solutions of many equations arising in physics and mathematics are expressed in terms of hyperbolic, trigonometric and rational functions. We discuss our method in the deterministic case and also in the random case, studying the beta distribution for the random input.
Elizalde, E.; Gaztanaga, E.
1992-01-01
The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
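For the negative binomial count-in-cells model mentioned above, the void probability (the chance that a cell is empty) has a simple closed form. A sketch under a common parameterization with mean count nbar and clustering parameter g; this is the standard negative binomial, not the paper's extension of it:

```python
import numpy as np

def void_probability(nbar: float, g: float) -> float:
    """P(N = 0) for negative-binomial counts in cells with mean nbar
    and clustering parameter g; the g -> 0 limit recovers the Poisson
    void probability exp(-nbar)."""
    if g == 0:
        return float(np.exp(-nbar))
    return float((1.0 + g * nbar) ** (-1.0 / g))

# Clustering (g > 0) enhances voids relative to an unclustered
# (Poisson) distribution with the same mean count.
p_clustered = void_probability(2.0, 0.5)   # = (1 + 1)^(-2) = 0.25
p_poisson = void_probability(2.0, 0.0)     # = exp(-2) ~ 0.135
```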
Radożycki, Tomasz
2016-11-01
The probability density distributions for the ground states of certain model systems in quantum mechanics and for their classical counterparts are considered. It is shown that the classical distributions are remarkably improved by incorporating the Heisenberg uncertainty relation between position and momentum. Even a crude form of this incorporation makes the agreement between classical and quantum distributions unexpectedly good, except for the small region where classical momenta are large. It is demonstrated that a slight refinement of this form makes the classical distribution very similar to the quantum one over the whole space. The results obtained are much better than those from the WKB method. The paper is devoted to ground states, but the method applies to excited states too.
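The classical-versus-quantum comparison above can be reproduced for the simplest model system, the harmonic oscillator ground state. A hedged sketch in units hbar = m = omega = 1 (this is the bare comparison only, not the paper's construction, which additionally folds in the uncertainty relation):

```python
import numpy as np
from scipy.integrate import quad

def quantum_density(x):
    """|psi_0(x)|^2 = pi^(-1/2) * exp(-x^2) for the oscillator ground state."""
    return np.exp(-x**2) / np.sqrt(np.pi)

A = 1.0  # classical turning point for the ground-state energy E = 1/2

def classical_density(x):
    """Time-averaged position density of a classical oscillator with
    amplitude A: 1 / (pi * sqrt(A^2 - x^2)), supported on |x| < A."""
    return 1.0 / (np.pi * np.sqrt(A**2 - x**2))

# Both are normalized densities, but they disagree most where the
# classical density diverges at the turning points x = +/- A.
nq, _ = quad(quantum_density, -np.inf, np.inf)
nc, _ = quad(classical_density, -A, A)
```

The quantum density peaks at the centre while the classical one peaks at the turning points, which is the mismatch the paper's uncertainty-relation smearing is designed to cure.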
A Probability Distribution of Surface Elevation for Wind Waves in Terms of the Gram-Charlier Series
黄传江; 戴德君; 王伟; 钱成春
2003-01-01
Laboratory experiments are conducted to study the probability distribution of surface elevation for wind waves, and the convergence of the Gram-Charlier series in describing the surface elevation distribution is discussed. Results show that the agreement between the Gram-Charlier series and the observed distribution improves as the truncation order of the series increases within a certain range, which is contrary to the phenomenon observed by Huang and Long (1980). It is also shown that the Gram-Charlier series is sensitive to anomalies in the data set, which worsen the agreement if they are not preprocessed appropriately. Negative values of the probability distribution expressed by the Gram-Charlier series in some ranges of surface elevation are discussed; both the absolute magnitudes of these negative values and the ranges over which they occur shrink gradually as more terms are included. Therefore the negative values have no evident effect on the form of the whole surface elevation distribution when the series is truncated at higher orders. Furthermore, a simple recurrence formula is obtained for the coefficients of the Gram-Charlier series, allowing the series to be conveniently extended to high orders.
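A truncated Gram-Charlier A series for a standardized surface elevation can be evaluated with probabilists' Hermite polynomials. A minimal sketch with coefficients expressed via skewness and excess kurtosis (this is the textbook fourth-order truncation, not the recurrence formula derived in the paper); it also exhibits the negative values discussed above:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gram_charlier_pdf(x, skew=0.0, exkurt=0.0):
    """Gram-Charlier A series truncated at fourth order for a
    standardized variable (zero mean, unit variance):
    phi(x) * [1 + skew/6 * He3(x) + exkurt/24 * He4(x)],
    where He_n are probabilists' Hermite polynomials."""
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    # hermeval evaluates sum_n c_n He_n(x) for the coefficient list c.
    correction = hermeval(x, [1.0, 0.0, 0.0, skew / 6.0, exkurt / 24.0])
    return phi * correction

# The Hermite terms integrate to zero against phi, so the series still
# integrates to one; but for sizeable skewness the density goes
# negative in the tails, as the paper discusses.
xs = np.linspace(-12.0, 12.0, 240001)
vals = gram_charlier_pdf(xs, skew=0.8, exkurt=0.5)
```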
Pradeep K. Goyal
2011-09-01
This paper presents a study of the probabilistic distribution of key cyclone parameters and of the cyclonic wind speed, based on cyclone track records obtained from the India Meteorological Department for the east coast region of India. The dataset of historical landfalling storm tracks in India from 1975–2007, with latitude/longitude and landfall locations, is used to map the cyclone tracks in the region of study. Statistical tests were performed to find a best-fit distribution to the track data for each cyclone parameter. These parameters, which include the central pressure difference, the radius of maximum wind speed, the translation velocity, and the track angle with the site, are used to generate digitally simulated cyclones using wind field simulation techniques. For this, different sets of values for all the cyclone key parameters are generated randomly from their probability distributions. Using these simulated values of the cyclone key parameters, the distribution of wind velocity at a particular site is obtained. The same distribution of wind velocity at the site is also obtained from actual track records and using the distributions of the cyclone key parameters as published in the literature. The simulated distribution is compared with the wind speed distributions obtained from actual track records. The findings are useful in cyclone disaster mitigation.
Research on the behavior of fiber orientation probability distribution function in the planar flows
ZHOU Kun; LIN Jian-zhong
2005-01-01
The equation of the two-dimensional fiber direction vector was solved theoretically to give the fiber orientation distribution in simple shear flow, flow with two-direction shears, extensional flow, and arbitrary planar incompressible flow. The Fokker-Planck equation was solved numerically to verify the theoretical solutions. The stable orientation and the orientation period of the fibers were obtained. The results showed that the fiber orientation distribution depends on the relative, not the absolute, magnitude of the matrix rate-of-strain of the flow. The effect of the fiber aspect ratio on the orientation distribution is insignificant in most conditions except the simple shear case. It was proved that the results for a planar flow can be generalized to the case of a 3-D fiber direction vector.
Binder, Harald
2014-07-01
This is a discussion of the following papers: "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory" by Jochen Kruppa, Yufeng Liu, Gérard Biau, Michael Kohler, Inke R. König, James D. Malley, and Andreas Ziegler; and "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications" by Jochen Kruppa, Yufeng Liu, Hans-Christian Diener, Theresa Holste, Christian Weimar, Inke R. König, and Andreas Ziegler.
Multiparameter probability distributions for heavy rainfall modeling in extreme southern Brazil
Samuel Beskow
2015-09-01
New hydrological insights for the region: The Anderson–Darling and Filliben tests were the most restrictive in this study. Based on the Anderson–Darling test, it was found that the Kappa distribution presented the best performance, followed by the GEV. This finding provides evidence that these multiparameter distributions result, for the region of study, in greater accuracy for the generation of intensity–duration–frequency curves and the prediction of peak streamflows and design hydrographs. As a result, this finding can support the design of hydraulic structures and flood management in river basins.
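An Anderson-Darling goodness-of-fit test of the kind used above can be sketched with scipy, which tabulates critical values for the Gumbel (EV-I) family, a special case of the GEV; the Kappa and full GEV fits in the study are outside scipy's `anderson` implementation, and the rainfall sample below is synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic annual-maximum rainfall sample (mm); loc/scale are
# illustrative values, not from the study region.
sample = stats.gumbel_r.rvs(loc=80.0, scale=25.0, size=200, random_state=rng)

# Anderson-Darling test against the Gumbel family; scipy estimates the
# location and scale internally and returns tabulated critical values.
res = stats.anderson(sample, dist='gumbel_r')

# The fit is not rejected at a given significance level when the
# statistic falls below the corresponding critical value.
idx_5pct = list(res.significance_level).index(5.0)
accept_5pct = res.statistic < res.critical_values[idx_5pct]
```

The Anderson-Darling statistic weights the tails more heavily than Kolmogorov-Smirnov, which is why the study found it (with Filliben) to be the most restrictive test.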
Intragroup dark matter distribution in small galaxy group-like systems in a LCDM cosmology
Aceves, Hector; Altamirano-Devora, L; Ramon-Fox, F G; Cañas, R
2013-01-01
In this paper we study the distribution of dark matter in small galaxy groups, identified using a physical criterion, in a LCDM cosmology. We quantify the amount of intra-group dark matter and characterize its distribution. We find that compact associations of galaxies, as well as intermediate and loose groups, have rather flat profiles with a logarithmic slope of gamma = -0.2. Hence, the intra-group dark matter does not follow the same cuspy tendency that haloes of galaxies have. In intermediate and loose galaxy associations the intra-group matter tends to be <50% of the total mass of the group, while in compact associations it is <20% within the group radius. So, in general, the common dark matter haloes of small galaxy groups are neither cuspy nor massive.
Is extrapair mating random? On the probability distribution of extrapair young in avian broods
Brommer, Jon E.; Korsten, Peter; Bouwman, Karen A.; Berg, Mathew L.; Komdeur, Jan
2007-01-01
A dichotomy in female extrapair copulation (EPC) behavior, with some females seeking EPC and others not, is inferred if the observed distribution of extrapair young (EPY) over broods differs from a random process on the level of individual offspring (binomial, hypergeometrical, or Poisson). A review
Ruin Probabilities and Aggregrate Claims Distributions for Shot Noise Cox Processes
Albrecher, H.; Asmussen, Søren
We consider a risk process Rt where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate...
Ground impact probability distribution for small unmanned aircraft in ballistic descent
La Cour-Harbo, Anders
2017-01-01
Safety is a key factor in all aviation, and while years of development have made manned aviation relatively safe, the same has yet to happen for unmanned aircraft. However, the rapid development of unmanned aircraft technology means that the range of commercial and scientific applications is growing...... equally rapid. At the same time the trend in national and international regulations for unmanned aircraft is to take a risk-based approach, effectively requiring a risk assessment for every flight operation. This work addresses the growing need for methods for quantitatively evaluating individual flights...... by modelling the consequences of a ballistic descent of an unmanned aircraft as the result of a major in-flight incident. The presented model is a probability density function for the ground impact area, based on a second-order drag model with probabilistic assumptions on the least well-known parameters...
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software
A near-infrared SETI experiment: probability distribution of false coincidences
Maire, Jérôme; Wright, Shelley A.; Werthimer, Dan; Treffers, Richard R.; Marcy, Geoffrey W.; Stone, Remington P. S.; Drake, Frank; Siemion, Andrew
2014-07-01
A Search for Extraterrestrial Intelligence (SETI), based on the possibility of interstellar communication via laser signals, is being designed to extend the search into the near-infrared spectral region (Wright et al., this conference). The dedicated near-infrared (900 to 1700 nm) instrument takes advantage of a new generation of avalanche photodiodes (APD), based on internal discrete amplification. These discrete APD (DAPD) detectors have a high-speed response suited to laser light pulse detection in our experiment. The detection criteria are defined to optimize the trade-off between high detection efficiency and low false-positive coincident signals, which can be produced by detector dark noise, background light, cosmic rays, and astronomical sources. We investigate experimentally how false coincidence rates depend on the number of detectors in parallel, and on the signal pulse height and width. We also look into the corresponding threshold for each of the signals to optimize the sensitivity while reducing the false coincidence rates. Lastly, we discuss the analytical solution used to predict the probability of laser pulse detection with multiple detectors.
Lahmiri, Salim
2016-03-01
Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's t probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in the tBEMD domain are employed; namely, fourth-order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for the experiments. The original images were corrupted with additive Gaussian noise at three different levels. Based on the peak signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications.
Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei
2016-01-01
Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851
Perrotta, A
2002-01-01
A Monte Carlo (MC) method is proposed to compute upper limits, in a pure Bayesian approach, when the errors associated with the experimental sensitivity and expected background content are not Gaussian distributed or not small enough to apply the usual approximations. It is relatively easy to extend the procedure to the multichannel case (for instance, when different decay branchings, luminosities, or experiments have to be combined). Some of the searches for supersymmetric particles performed in the DELPHI experiment at the LEP electron-positron collider use such a procedure to propagate systematics into the calculation of cross-section upper limits. One of these searches is described as an example.
2007-01-01
To produce probability distributions for regional climate change in surface temperature and precipitation, a probability distribution for global mean temperature increase has been combined with the probability distributions for the appropriate scaling variables, i.e. the changes in regional temperature/precipitation per degree of global mean warming. Each scaling variable is assumed to be normally distributed. The uncertainty of the scaling relationship arises from systematic differences between the regional changes from global and regional climate model simulations and from natural variability. The contributions of these sources of uncertainty to the total variance of the scaling variable are estimated from simulated temperature and precipitation data in a suite of regional climate model experiments conducted within the framework of the EU-funded project PRUDENCE, using an Analysis Of Variance (ANOVA). For the area covered in the 2001–2004 EU-funded project SWURVE, five case study regions (CSRs) are considered: NW England, the Rhine basin, Iberia, Jura lakes (Switzerland) and Mauvoisin dam (Switzerland). The resulting regional climate changes for 2070–2099 vary quite significantly between CSRs, between seasons and between meteorological variables. For all CSRs, the expected warming in summer is higher than that expected for the other seasons. This summer warming is accompanied by a large decrease in precipitation. The uncertainty of the scaling ratios for temperature and precipitation is relatively large in summer because of the differences between regional climate models. Differences between the spatial climate-change patterns of global climate model simulations make significant contributions to the uncertainty of the scaling ratio for temperature. However, no meaningful contribution could be found for the scaling ratio for precipitation due to the small number of global climate models in the PRUDENCE project and natural variability, which is
Denis Cousineau
2008-03-01
This article discusses how to characterize response time (RT) frequency distributions in terms of probability functions and how to implement the necessary analysis tools using MATLAB. The first part of the paper discusses the general principles of maximum likelihood estimation. A detailed implementation that allows fitting the popular ex-Gaussian function is then presented, followed by the results of a Monte Carlo study that shows the validity of the proposed approach. Although the main focus is the ex-Gaussian function, the general procedure described here can be used to estimate the best-fitting parameters of various probability functions. The proposed computational tools, written in MATLAB source code, are available through the Internet.
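Although the article implements its tools in MATLAB, the same ex-Gaussian maximum likelihood fit can be sketched with scipy, whose `exponnorm` distribution parametrizes the ex-Gaussian with a shape parameter K = tau/sigma; the parameter values below are illustrative, not from the article:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated response times (seconds): a Gaussian component (mu, sigma)
# plus an exponential tail with mean tau -- the ex-Gaussian model.
mu, sigma, tau = 0.40, 0.05, 0.15
rt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# scipy's exponnorm(K, loc, scale) is the ex-Gaussian with K = tau/sigma,
# loc = mu, scale = sigma; .fit() performs maximum likelihood estimation.
K_hat, loc_hat, scale_hat = stats.exponnorm.fit(rt)
tau_hat = K_hat * scale_hat
```

With a few thousand observations the three ex-Gaussian parameters are recovered closely, mirroring the Monte Carlo validation described in the article.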
Codon information value and codon transition-probability distributions in short-term evolution
Jiménez-Montaño, M. A.; Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Ramos-Fernández, A.
2016-07-01
To understand the way the Genetic Code and the physical-chemical properties of coded amino acids affect accepted amino acid substitutions in short-term protein evolution, taking into account only overall amino acid conservation, we consider an underlying codon-level model. This model employs codon pair-substitution frequencies from an empirical matrix in the literature, modified for single-base mutations only. Ordering the degenerated codons according to their codon information value (Volkenstein, 1979), we found that three-fold and most of four-fold degenerated codons, which have low codon values, were best fitted to rank-frequency distributions with constant failure rate (exponentials). In contrast, almost all two-fold degenerated codons, which have high codon values, were best fitted to rank-frequency distributions with variable failure rate (inverse power-laws). Six-fold degenerated codons are considered to be doubly assigned. The exceptional behavior of some codons, including non-degenerate codons, is discussed.
Julianne de Castro Oliveira
2012-06-01
The objective of this study was to evaluate the effectiveness of the Fatigue Life, Frechet, Gamma, Generalized Gamma, Generalized Logistic, Log-logistic, Nakagami, Beta, Burr, Dagum, Weibull and Hyperbolic distributions in describing the diameter distribution in teak stands subjected to thinning at different ages. Data used in this study originated from 238 rectangular permanent plots 490 m2 in size, installed in stands of Tectona grandis L. f. in Mato Grosso state, Brazil. The plots were measured at ages 34, 43, 55, 68, 81, 82, 92, 104, 105, 120, 134 and 145 months on average. Thinning was done on two occasions: the first was systematic at age 81 months, with a basal area intensity of 36%, while the second was selective at age 104 months on average and removed poorer trees, reducing basal area by 30%. Fittings were assessed by the Kolmogorov-Smirnov goodness-of-fit test. The Log-logistic (3P), Burr (3P), Hyperbolic (3P), Burr (4P), Weibull (3P), Hyperbolic (2P), Fatigue Life (3P) and Nakagami functions provided more satisfactory values for the K-S test than the more commonly used Weibull function.
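A Kolmogorov-Smirnov goodness-of-fit check like the one used in the study can be sketched with scipy for a Weibull diameter distribution; note that testing against parameters estimated from the same sample biases the K-S p-value upward (a Lilliefors-type correction would be needed for strict inference). The diameter sample below is synthetic, not from the teak plots:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical stem-diameter sample (cm); shape/scale are illustrative.
diam = stats.weibull_min.rvs(2.5, scale=12.0, size=300, random_state=rng)

# Fit the two-parameter Weibull (location fixed at 0) by maximum
# likelihood, then run the one-sample K-S test against the fitted CDF.
c, loc, scale = stats.weibull_min.fit(diam, floc=0)
D, p = stats.kstest(diam, 'weibull_min', args=(c, loc, scale))
# A large p-value means the K-S test does not reject the fitted Weibull.
```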
Probability distribution of the free energy of a directed polymer in a random medium
Brunet, Éric; Derrida, Bernard
2000-06-01
We calculate exactly the first cumulants of the free energy of a directed polymer in a random medium for the geometry of a cylinder. By using the fact that the nth moment of the partition function is given by the ground-state energy of a quantum problem of n interacting particles on a ring of length L, we write an integral equation allowing us to expand these moments in powers of the strength of the disorder γ or in powers of n. For small n with n ~ (Lγ)^(-1/2), the moments take a scaling form which allows us to describe all the fluctuations of order 1/L of the free energy per unit length of the directed polymer. The distribution of these fluctuations is the same as the one found recently in the asymmetric exclusion process, indicating that it is characteristic of all systems described by the Kardar-Parisi-Zhang equation in 1+1 dimensions.
Perrotta, Andrea
A Monte Carlo (MC) method is proposed to compute upper limits, in a pure Bayesian approach, when the errors associated with the experimental sensitivity and the expected background content are not Gaussian distributed or not small enough to apply the usual approximations. It is relatively easy to extend the procedure to the multichannel case (for instance, when different decay branchings, luminosities or experiments have to be combined). Some of the searches for supersymmetric particles performed in the DELPHI experiment at the LEP electron-positron collider use such a procedure to propagate the systematics into the calculation of the cross-section upper limits. One of these searches will be described as an example.
Gyenis, Balázs
2017-02-01
We investigate Maxwell's attempt to justify the mathematical assumptions behind his 1860 Proposition IV according to which the velocity components of colliding particles follow the normal distribution. Contrary to the commonly held view we find that his molecular collision model plays a crucial role in reaching this conclusion, and that his model assumptions also permit inference to equalization of mean kinetic energies (temperatures), which is what he intended to prove in his discredited and widely ignored Proposition VI. If we take a charitable reading of his own proof of Proposition VI then it was Maxwell, and not Boltzmann, who gave the first proof of a tendency towards equilibrium, a sort of H-theorem. We also call attention to a potential conflation of notions of probabilistic and value independence in relevant prior works of his contemporaries and of his own, and argue that this conflation might have impacted his adoption of the suspect independence assumption of Proposition IV.
Vertical distribution of suspended particulate matter in the Caspian Sea in early summer
Kravchishina, M. D.; Lein, A. Yu.; Pautova, L. A.; Klyuvitkin, A. A.; Politova, N. V.; Novigatsky, A. N.; Silkin, V. A.
2016-11-01
The features of the vertical distribution of chlorophyll a, particulate organic carbon and its isotopic composition, total suspended particulate matter (SPM), and the structure of the phytoplankton community in the Middle and South Caspian Sea in May-June 2012 are discussed. The subsurface chlorophyll a maximum (SCM) was found everywhere at depths of 20 to 40-60 m. The position of this layer is confined to the depth of the seasonal thermocline, which is determined by the development of a cold-water (dark) phytocenosis. The genesis of this layer was studied. The increase in chlorophyll a concentration in this layer is caused by an abundance of phytoplankton or an increased concentration of this phytopigment per algal cell. The highest values of the studied organic compounds and phytoplankton biomass were revealed close to the seasonal thermocline extending from the southern periphery of the Derbent Depression to the Apsheron Sill, which is determined by the bottom topography. The presence of chlorophyll a at depths exceeding 300 m (reaching ≥1 mg/m3) was revealed. This was supported by findings of individual algal cells containing chlorophyll a, and even their accumulations, in the deep water layer. The most probable mechanisms responsible for the presence of these cells at such depths are discussed in the paper. The vertical distribution of the values of the organic carbon isotopic composition is primarily controlled by the vertical structure of phytoplankton and chlorophyll a in the water column down to 500 m and by biogeochemical processes at the redox barrier (~600 m layer). The relative stability of chlorophyll a and the stability of pheophytin a in anaerobic environments were verified. A significant amount of weakly transformed chlorophyll a was found close to the sea bottom.
QIAN Shang-Wu; GU Zhi-Yu
2001-01-01
Using Feynman's path integral with topological constraints arising from the presence of one singular line, we find the homotopic probability distribution PnL for the winding number n and the partition function PL of the entangled system around a ribbon segment chain. We find that when the width 2a of the ribbon segment chain increases, the partition function decreases exponentially, whereas the free energy increases by an amount proportional to the square of the width. When the width tends to zero we obtain the same results as those of a single chain with one singular point.
Jacobsen, J L; Saleur, H
2008-02-29
We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L ≫ 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy S_VB = N_c ln 2 = (4 ln 2/π²) ln L, disproving a recent conjecture that this should be related to the von Neumann entropy, and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model.
Beniaminov, E M
2001-01-01
We consider some corollaries of certain hypotheses on the observation process of microphenomena. We show that an enlargement of the phase space and of its motion group, together with an account of the diffusion motions of microsystems in the enlarged space (motions which act by small random translations along the enlarged group), leads to observable quantum effects. This approach enables one to recover probability distributions in the phase space for wave functions. The parameters of the model considered here are estimated on the basis of the Lamb shift in the spectrum of the hydrogen atom.
Study of the nuclear matter distribution in neutron-rich Li isotopes
Dobrovolsky, A.V. [Petersburg Nuclear Physics Institute (PNPI), 188300 Gatchina (Russian Federation)]. E-mail: dobrov@pnpi.spb.ru; Alkhazov, G.D. [Petersburg Nuclear Physics Institute (PNPI), 188300 Gatchina (Russian Federation); Andronenko, M.N. [Petersburg Nuclear Physics Institute (PNPI), 188300 Gatchina (Russian Federation); Bauchet, A. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany); Egelhof, P. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany); Fritz, S. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany); Geissel, H. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany); Gross, C. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany); Khanzadeev, A.V. [Petersburg Nuclear Physics Institute (PNPI), 188300 Gatchina (Russian Federation); Korolev, G.A. [Petersburg Nuclear Physics Institute (PNPI), 188300 Gatchina (Russian Federation); Kraus, G. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany); Lobodenko, A.A. [Petersburg Nuclear Physics Institute (PNPI), 188300 Gatchina (Russian Federation); Muenzenberg, G. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany); Mutterer, M. [Institut fuer Kernphysik (IKP), TU-Darmstadt, 64289 Darmstadt (Germany); Neumaier, S.R. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany); Institut fuer Kernphysik (IKP), TU-Darmstadt, 64289 Darmstadt (Germany); Schaefer, T. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany); Scheidenberger, C. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany); Seliverstov, D.M. [Petersburg Nuclear Physics Institute (PNPI), 188300 Gatchina (Russian Federation); Timofeev, N.A. [Petersburg Nuclear Physics Institute (PNPI), 188300 Gatchina (Russian Federation); Vorobyov, A.A.; Yatsoura, V.I. [Petersburg Nuclear Physics Institute (PNPI), 188300 Gatchina (Russian Federation)
2006-02-20
The differential cross sections for small-angle proton elastic scattering on the {sup 6,8,9,11}Li nuclei at energies near 700 MeV/nucleon were measured in inverse kinematics using secondary nuclear beams at GSI Darmstadt. The hydrogen-filled ionization chamber IKAR was employed as target and recoil proton detector. For determining the nuclear matter radii and radial matter distributions, the measured cross sections have been analysed with the aid of the Glauber multiple-scattering theory. The nuclear matter distribution deduced for {sup 11}Li exhibits a very pronounced halo structure, the matter radius of {sup 11}Li being significantly larger than those of the {sup 6,8,9}Li isotopes. The data on {sup 8,9}Li are consistent with the existence of sizable neutron skins in these nuclei. The obtained data allow for a test of various theoretical model calculations of the structure of the studied neutron-rich nuclei.
Timonina, Anna; Hochrainer-Stigler, Stefan; Pflug, Georg; Jongman, Brenden; Rojas, Rodrigo
2015-11-01
Losses due to natural hazard events can be extraordinarily high and difficult to cope with. Therefore, there is considerable interest to estimate the potential impact of current and future extreme events at all scales in as much detail as possible. As hazards typically spread over wider areas, risk assessment must take into account interrelations between regions. Neglecting such interdependencies can lead to a severe underestimation of potential losses, especially for extreme events. This underestimation of extreme risk can lead to the failure of risk management strategies when they are most needed, namely, in times of unprecedented events. In this article, we suggest a methodology to incorporate such interdependencies in risk via the use of copulas. We demonstrate that by coupling losses, dependencies can be incorporated in risk analysis, avoiding the underestimation of risk. Based on maximum discharge data of river basins and stream networks, we present and discuss different ways to couple loss distributions of basins while explicitly incorporating tail dependencies. We distinguish between coupling methods that require river structure data for the analysis and those that do not. For the latter approach we propose a minimax algorithm to choose coupled basin pairs so that the underestimation of risk is avoided and the use of river structure data is not needed. The proposed methodology is especially useful for large-scale analysis and we motivate and apply our method using the case of Romania. The approach can be easily extended to other countries and natural hazards.
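The effect of tail dependence on aggregate risk can be illustrated with a minimal sketch (toy Pareto losses and a survival Clayton copula chosen for illustration; the paper's calibrated copulas and discharge data are not reproduced here): coupling two basin loss distributions with an upper-tail-dependent copula pushes the extreme quantiles of the total loss well above the independence case.

```python
import random

def clayton_pair(theta, rng):
    """One (u, v) draw from a Clayton copula via conditional sampling."""
    u = 1.0 - rng.random()          # in (0, 1]
    w = 1.0 - rng.random()
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

def pareto_inv(u, alpha=2.0):
    """Inverse CDF of a Pareto(alpha) loss distribution (heavy upper tail)."""
    return (1.0 - u) ** (-1.0 / alpha)

def total_loss_quantile(q, n, theta=None, seed=0):
    """q-quantile of the summed losses of two basins: coupled through a
    survival Clayton copula (upper-tail dependent) if theta is given,
    otherwise independent."""
    rng = random.Random(seed)
    sums = []
    for _ in range(n):
        if theta is None:
            u, v = rng.random(), rng.random()
        else:
            u, v = clayton_pair(theta, rng)
            u, v = 1.0 - u, 1.0 - v   # survival copula: dependence moves to the upper tail
        sums.append(pareto_inv(u) + pareto_inv(v))
    sums.sort()
    return sums[int(q * n)]
```

With strong coupling the 99.5% quantile of the summed loss is markedly larger than under independence, which is exactly the underestimation the minimax pairing is designed to avoid.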
Schneider, N; Motte, F; Ossenkopf, V; Klessen, R S; Simon, R; Fechtenbaum, S; Herpin, F; Tremblin, P; Csengeri, T; Myers, P C; Hill, T; Cunningham, M; Federrath, C
2015-01-01
Column density (N) PDFs serve as a powerful tool to characterize the physical processes that influence the structure of molecular clouds. Star-forming clouds can best be characterized by lognormal PDFs for the lower N range and a power-law tail at higher N, commonly attributed to turbulence and to self-gravity and/or pressure, respectively. We report here on PDFs obtained from observations of 12CO, 13CO, C18O, CS, and N2H+ in the Cygnus X North region and compare them to a PDF derived from dust observations with the Herschel satellite. The PDF of 12CO is lognormal for Av~1-30, but is cut off at higher Av due to optical depth effects. The PDFs of C18O and 13CO are mostly lognormal for Av~1-15, followed by an excess up to Av~40. Above that value, all CO PDFs drop, most likely due to depletion. The high-density tracers CS and N2H+ exhibit only a power-law distribution, between Av~15 and Av~400. The PDF from dust is lognormal for Av~2-15 and has a power-law tail up to Av~500. Absolute values for the molecular lin...
Nonuniversal power law scaling in the probability distribution of scientific citations.
Peterson, George J; Pressé, Steve; Dill, Ken A
2010-09-14
We develop a model for the distribution of scientific citations. The model involves a dual mechanism: in the direct mechanism, the author of a new paper finds an old paper A and cites it. In the indirect mechanism, the author of a new paper finds an old paper A only via the reference list of a newer intermediary paper B, which has previously cited A. By comparison to citation databases, we find that papers having few citations are cited mainly by the direct mechanism. Papers already having many citations ("classics") are cited mainly by the indirect mechanism. The indirect mechanism gives a power-law tail. The "tipping point" at which a paper becomes a classic is about 25 citations for papers published in the Institute for Scientific Information (ISI) Web of Science database in 1981, 31 for Physical Review D papers published from 1975-1994, and 37 for all publications from a list of high h-index chemists assembled in 2007. The power-law exponent is not universal. Individuals who are highly cited have a systematically smaller exponent than individuals who are less cited.
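The dual mechanism can be captured in a toy simulation (a sketch of the redirection idea, with hypothetical parameters, not the fitted model of the paper): each new paper either cites an old paper directly, chosen uniformly, or follows a random reference of a random intermediary, which implicitly favors already-cited papers and produces the heavy tail.

```python
import random

def simulate_citations(n_papers=2000, refs_per_paper=5, p_direct=0.5, seed=42):
    """Toy dual-mechanism citation model: direct citation picks an old paper
    uniformly; indirect citation follows a random reference of a random
    intermediary paper, favoring already well-cited papers."""
    rng = random.Random(seed)
    citations = [0] * n_papers
    reflists = [[] for _ in range(n_papers)]
    for new in range(10, n_papers):       # the first 10 papers seed the network
        refs = set()
        while len(refs) < refs_per_paper:
            if rng.random() < p_direct:
                target = rng.randrange(new)            # direct mechanism
            else:
                b = rng.randrange(new)                 # indirect mechanism
                target = rng.choice(reflists[b]) if reflists[b] else rng.randrange(new)
            refs.add(target)
        for t in refs:
            citations[t] += 1
        reflists[new] = list(refs)
    return citations
```

Even this crude version separates a lightly cited bulk (dominated by the direct mechanism) from a small set of heavily cited "classics" fed by redirection.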
Computation of steady-state probability distributions in stochastic models of cellular networks.
Mark Hallen
2011-10-01
Cellular processes are "noisy". In each cell, concentrations of molecules are subject to random fluctuations due to the small numbers of these molecules and to environmental perturbations. While noise varies with time, it is often measured at steady state, for example by flow cytometry. When interrogating aspects of a cellular network by such steady-state measurements of network components, a key need is to develop efficient methods to simulate and compute these distributions. We describe innovations in stochastic modeling coupled with approaches to this computational challenge: first, an approach to modeling intrinsic noise via solution of the chemical master equation, and second, a convolution technique to account for contributions of extrinsic noise. We show how these techniques can be combined in a streamlined procedure for evaluation of different sources of variability in a biochemical network. Evaluation and illustrations are given in analysis of two well-characterized synthetic gene circuits, as well as a signaling network underlying the mammalian cell cycle entry.
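For the simplest gene-expression network, the chemical master equation can be solved to stationarity directly. The sketch below (a generic birth-death model with illustrative rates, not the circuits analyzed in the paper) truncates the state space and integrates dp/dt = A p until the distribution stops changing; the result should approach a Poisson distribution with mean k/gamma.

```python
def cme_steady_state(k=5.0, gamma=1.0, n_max=40, dt=0.001, steps=20000):
    """Steady state of the chemical master equation for a birth-death model:
    production at rate k, degradation at rate gamma per molecule, state space
    truncated at n_max molecules, integrated by forward Euler."""
    p = [0.0] * (n_max + 1)
    p[0] = 1.0
    for _ in range(steps):
        q = p[:]
        for n in range(n_max + 1):
            gain = (k * q[n - 1] if n > 0 else 0.0) + \
                   (gamma * (n + 1) * q[n + 1] if n < n_max else 0.0)
            loss = (k if n < n_max else 0.0) * q[n] + gamma * n * q[n]
            p[n] = q[n] + dt * (gain - loss)
    s = sum(p)
    return [x / s for x in p]
```

Extrinsic noise, in the spirit of the paper's convolution technique, would then be layered on by convolving this intrinsic-noise distribution with the distribution of the fluctuating rate parameters.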
Asgarani, Somayeh
2015-02-01
A method of finding the entropic form for a given stationary probability distribution and specified potential field is discussed, using the steady-state Fokker-Planck equation. As examples, starting with the Boltzmann and Tsallis distributions and knowing the force field, we obtain the Boltzmann-Gibbs and Tsallis entropies. Also, the entropy associated with the gamma probability distribution is found, which appears to involve the gamma function. Moreover, the related Fokker-Planck equations are given for the Boltzmann, Tsallis, and gamma probability distributions.
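The Boltzmann-Gibbs case can be sketched as follows (an illustrative special case in generic notation, not the paper's derivation):

```latex
% Stationary Fokker-Planck equation and its solution
\[
0 = -\frac{d}{dx}\bigl[F(x)\,p(x)\bigr] + D\,\frac{d^{2}p(x)}{dx^{2}}
\quad\Longrightarrow\quad
p(x) \propto \exp\!\Bigl(\tfrac{1}{D}\int^{x} F(x')\,dx'\Bigr).
\]
```

For a potential force F = -U'(x) this gives the Boltzmann form p ∝ exp(-U(x)/D), the maximizer of the Boltzmann-Gibbs entropy S = -∫ p ln p dx at fixed mean energy; prescribing instead a Tsallis or gamma stationary distribution and inverting the same stationarity relation is what singles out the corresponding entropic form.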
Nuclear-matter distributions of halo nuclei from elastic proton scattering in inverse kinematics
Egelhof, P.; Bauchet, A.; Fritz, S.; Geissel, H.; Gross, C.; Kraus, G.; Muenzenberg, G.; Neumaier, S.R.; Schaefer, T.; Scheidenberger, C. [Gesellschaft fuer Schwerionenforschung (GSI), D-64291 Darmstadt (Germany); Alkhazov, G.D.; Andronenko, M.N.; Gavrilov, G.E.; Khanzadeev, A.V.; Korolev, G.A.; Lobodenko, A.A.; Seliverstov, D.M.; Timofeev, N.A. [Petersburg Nuclear Physics Institute (PNPI), RU-188300 Gatchina (Russian Federation); Dobrovolsky, A.V. [Gesellschaft fuer Schwerionenforschung (GSI), D-64291 Darmstadt (Germany); Petersburg Nuclear Physics Institute (PNPI), RU-188300 Gatchina (Russian Federation); Mutterer, M. [Institut fuer Kernphysik (IKP), Technische Universitaet, D-64289 Darmstadt (Germany); Vorobyov, A.A.; Yatsoura, V.I.
2002-10-01
Proton-nucleus elastic scattering at intermediate energies, a well-established method for probing nuclear-matter density distributions of stable nuclei, was applied for the first time to exotic nuclei. This method is demonstrated to be an effective means for obtaining accurate and detailed information on the size and radial shape of halo nuclei. Absolute differential cross-sections for small-angle scattering were measured at energies near 700 MeV/u for the neutron-rich helium isotopes {sup 6}He and {sup 8}He, and more recently for the lithium isotopes {sup 6}Li, {sup 8}Li, {sup 9}Li and {sup 11}Li, using He and Li beams provided by the fragment separator FRS at GSI Darmstadt. Experiments were performed in inverse kinematics using the hydrogen-filled ionization chamber IKAR, which served simultaneously as target and recoil-proton detector. For deducing nuclear-matter distributions, differential cross-sections calculated with the aid of the Glauber multiple-scattering theory, using various parametrizations for the nucleon density distributions as input, were fitted to the experimental cross-sections. The results on nuclear-matter radii and matter distributions are presented, and the significance of the data for a halo structure is discussed. Nuclear-matter distributions obtained for {sup 6}He and {sup 8}He conform with the picture that both nuclei are composed of α-particle-like cores and significant neutron halos. The matter distribution in {sup 11}Li exhibits, as expected from previous reaction cross-section studies with nuclear targets, by far the most extended halo component of all the nuclei investigated. In addition, the present data allow a quantitative comparison of the structure of the He and Li isobars with mass number A=6 or A=8. The measured differential cross-sections have also been used for probing density distributions predicted by various microscopic calculations. A few examples are presented.
Lu, Dan; Zhang, Guannan; Webster, Clayton; Barbier, Charlotte
2016-12-01
In this work, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a very large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
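Why smoothing helps can be seen with a tiny sketch (a synthetic stand-in: two "levels" are mimicked by a Gaussian sample and a slightly perturbed copy, with a sigmoid as the hypothetical smoothing function; no subsurface solver is involved): the variance of the MLMC level difference is much smaller for the smoothed indicator than for the raw one.

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def level_difference_variances(x=0.0, delta=0.3, eps=0.05, n=100000, seed=7):
    """Variance of the MLMC level difference for a CDF estimate at point x,
    using the raw indicator I(X <= x) versus a sigmoid-smoothed indicator
    with smoothing width delta. Fine and coarse samples differ by a small
    correlated perturbation eps, mimicking two discretization levels."""
    rng = random.Random(seed)
    d_ind, d_smooth = [], []
    for _ in range(n):
        xf = rng.gauss(0.0, 1.0)               # fine-level sample
        xc = xf + eps * rng.gauss(0.0, 1.0)    # correlated coarse-level sample
        d_ind.append((1.0 if xf <= x else 0.0) - (1.0 if xc <= x else 0.0))
        d_smooth.append(sigmoid((x - xf) / delta) - sigmoid((x - xc) / delta))
    def var(a):
        m = sum(a) / len(a)
        return sum((v - m) ** 2 for v in a) / (len(a) - 1)
    return var(d_ind), var(d_smooth)
```

A larger delta lowers the variance further but biases the CDF estimate, which is exactly the trade-off the paper's a posteriori calibration balances.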
Ying, L H
2012-01-01
Nonlinear instability and refraction by ocean currents are both important mechanisms that go beyond the Rayleigh approximation and may be responsible for the formation of freak waves. In this paper, we quantitatively study nonlinear effects on the evolution of surface gravity waves on the ocean, to explore systematically the effects of various input parameters on the probability of freak wave formation. The fourth-order current-modified nonlinear Schrödinger equation (CNLS4) is employed to describe the wave evolution. By solving CNLS4 numerically, we are able to obtain quantitative predictions for the wave height distribution as a function of key environmental conditions such as average steepness, angular spread, and frequency spread of the local sea state. Additionally, we explore the spatial dependence of the wave height distribution, associated with the buildup of nonlinear development.
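The linear (Rayleigh) baseline that nonlinear effects are measured against is easy to state explicitly: for a narrow-band linear sea the exceedance probability of an individual wave height h is exp(-2(h/Hs)²), so the conventional freak-wave criterion h > 2Hs has a baseline probability of exp(-8).

```python
import math

def rayleigh_exceedance(h_over_hs):
    """Rayleigh (linear, narrow-band) probability that an individual wave
    height exceeds h = h_over_hs * Hs, where Hs is the significant wave
    height; the baseline against which nonlinear enhancements are measured."""
    return math.exp(-2.0 * h_over_hs ** 2)

# conventional freak-wave criterion: H > 2 Hs
p_freak = rayleigh_exceedance(2.0)   # equals exp(-8), about 3.4e-4
```

Enhancements of the observed freak-wave rate above this value are what the CNLS4 simulations attribute to nonlinearity and current refraction.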
Influence of baryons on the spatial distribution of matter: higher order correlation functions
Xiao-Jun Zhu; Jun Pan
2012-01-01
Physical processes involving baryons could leave a non-negligible imprint on the distribution of cosmic matter. A series of simulated data sets at high resolution with identical initial conditions are employed for count-in-cell analysis, including one N-body pure dark matter run, one with only adiabatic gas, and one with dissipative processes. Variances and higher order cumulants Sn of dark matter and gas are estimated. It is found that physical processes with baryons mainly affect distributions of dark matter at scales less than 1 h^-1 Mpc. In comparison with the pure dark matter run, adiabatic processes alone strengthen the variance of dark matter by ~10% at a scale of 0.1 h^-1 Mpc, while the Sn parameters of dark matter only mildly deviate by a few percent. The dissipative gas run does not differ much from the adiabatic run in terms of variance for dark matter, but renders significantly different Sn parameters describing the dark matter, bringing about a more than 10% enhancement to S3 at 0.1 h^-1 Mpc and z = 0, and an even larger one at higher redshift. Distribution patterns of gas in the two hydrodynamical simulations are quite different. Variance of gas at z = 0 decreases by ~30% in the adiabatic simulation but by ~60% in the nonadiabatic simulation at 0.1 h^-1 Mpc. The attenuation is weaker at larger scales but is still obvious at ~10 h^-1 Mpc. Sn parameters of gas are biased upward at scales <~ 4 h^-1 Mpc, and dissipative processes show an ~84% promotion at z = 0 to S3 at 0.1 h^-1 Mpc, in contrast with the ~7% change in the adiabatic run. The segregation in clustering between gas and dark matter could have dramatic implications for modeling distributions of galaxies and relevant cosmological applications demanding fine details of matter distribution in the strongly nonlinear regime.
Drakos, Nicole E; Wahl, Lindi M
2015-12-01
Theoretical approaches are essential to our understanding of the complex dynamics of mobile genetic elements (MGEs) within genomes. Recently, the birth-death-diversification model was developed to describe the dynamics of mobile promoters (MPs), a particular class of MGEs in prokaryotes. A unique feature of this model is that genetic diversification of elements was included. To explore the implications of diversification on the long-term fate of MGE lineages, in this contribution we analyze the extinction probabilities, extinction times and equilibrium solutions of the birth-death-diversification model. We find that diversification increases both the survival and growth rate of MGE families, but the strength of this effect depends on the rate of horizontal gene transfer (HGT). We also find that the distribution of MGE families per genome is not necessarily monotonically decreasing, as observed for MPs, but may have a peak in the distribution that is related to the HGT rate. For MPs specifically, we find that new families have a high extinction probability, and predict that the number of MPs is increasing, albeit at a very slow rate. Additionally, we develop an extension of the birth-death-diversification model which allows MGEs in different regions of the genome, for example coding and non-coding, to be described by different rates. This extension may offer a potential explanation as to why the majority of MPs are located in non-promoter regions of the genome.
Lisitzin, A. P.; Klyuvitkin, A. A.; Burenkov, V. I.; Kravchishina, M. D.; Politova, N. V.; Novigatsky, A. N.; Shevchenko, V. P.; Klyuvitkina, T. S.
2016-01-01
The main purpose of this work is to study the real distribution and spatial-temporal variations of suspended particulate matter and its main components in surface waters of the Atlantic Ocean on the basis of direct and satellite measurements, with the aim of developing new algorithms for converting satellite data and refining existing ones. The distribution fields of suspended particulate matter were calculated and plotted for the entire Atlantic Ocean. It is established that its distribution in the open ocean is subordinate to the latitudinal climatic zonality. The areas with maximum concentrations form latitudinal belts corresponding to high-productivity eutrophic and mesotrophic waters of the northern and southern temperate humid belts and with the equatorial humid zone. Phytoplankton, the productivity of which depends primarily on the climatic zonality, is the main producer of suspended particulate matter in the surface water layer.
S Varadhan, S R
2001-01-01
This volume presents topics in probability theory covered during a first-year graduate course given at the Courant Institute of Mathematical Sciences. The necessary background material in measure theory is developed, including the standard topics, such as extension theorem, construction of measures, integration, product spaces, Radon-Nikodym theorem, and conditional expectation. In the first part of the book, characteristic functions are introduced, followed by the study of weak convergence of probability distributions. Then both the weak and strong limit theorems for sums of independent random variables ...
Shi, Qiong-bin; Zhao, Xiu-lan; Chang, Tong-ju; Lu, Ji-wen
2016-05-15
A long-term experiment was utilized to study the effects of tillage methods on the contents and distribution characteristics of organic matter and heavy metals (Cu, Zn, Pb, Cd, Fe and Mn) in aggregates of different sizes (including 1-2, 0.25-1, and 0.05-0.25 mm) under two tillage methods: flooded paddy field (FPF) and paddy-upland rotation (PR). The relationship between heavy metals and organic matter in soil aggregates was also analyzed. The results showed that the aggregates of both tillage methods were dominated by the 0.05-0.25 mm fraction and that tillage methods did not significantly affect the contents of heavy metals in soils, but FPF could enhance the accumulation and distribution of aggregates, organic matter and heavy metals in aggregates with diameters of 1-2 mm and 0.25-1 mm. Correlation analysis found that there was a negative correlation between the contents of heavy metals and organic matter in soil aggregates, but a positive correlation between the amounts of heavy metal and organic matter accumulated in soil aggregates. From the slopes of the correlation equations, we found that the sensitivities of heavy metals to changes in soil organic matter followed the order Mn > Zn > Pb > Cu > Fe > Cd under the same tillage. For a given heavy metal, the sensitivity was higher in PR than in FPF.
Lum, Daniel J; Knarr, Samuel H; Howell, John C
2015-10-19
We demonstrate how to efficiently implement extremely high-dimensional compressive imaging of a bi-photon probability distribution. Our method uses fast-Hadamard-transform Kronecker-based compressive sensing to acquire the joint space distribution. We list, in detail, the operations necessary to enable fast-transform-based matrix-vector operations in the joint space to reconstruct a 16.8 million-dimensional image in less than 10 minutes. Within a subspace of that image exists a 3.2 million-dimensional bi-photon probability distribution. In addition, we demonstrate how the marginal distributions can aid in the accuracy of joint space distribution reconstructions.
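The fast-transform step that makes such huge matrix-vector products feasible can be illustrated with a textbook in-place Walsh-Hadamard butterfly (a generic sketch; the paper's Kronecker-structured sensing matrices build on transforms of this kind): it replaces an O(n²) Hadamard matrix product with O(n log n) additions and subtractions.

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform of a list whose length is a
    power of two. Applying the unnormalized transform twice returns the
    input scaled by n, since H*H = n*I."""
    h = 1
    n = len(a)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

At the 16.8-million-dimensional scale quoted in the abstract, the gap between n log n and n² operations is what turns an intractable reconstruction into a ten-minute one.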
Baryonic Distributions in the Dark Matter Halo of NGC5005
Richards, Emily E; Barnes, K L; Staudaher, S; Dale, D A; Braun, T T; Wavle, D C; Calzetti, D; Dalcanton, J J; Bullock, J S; Chandar, R
2015-01-01
We present results from multiwavelength observations of the galaxy NGC5005. We use new neutral hydrogen (HI) observations from the Very Large Array to examine the neutral gas morphology and kinematics. We find an HI disk with a well-behaved flat rotation curve in the radial range 20\\arcsec-140\\arcsec. Ionized gas observations from the SparsePak integral field unit on the WIYN 3.5m telescope provide kinematics for the central 70\\arcsec. We use both the SparsePak and HI velocity fields to derive a rotation curve for NGC5005. Deep 3.6{\\mu}m observations from the Spitzer Space Telescope probe the faint extended stellar population of NGC5005. The images reveal a large stellar disk with a high surface brightness component that transitions to a low surface brightness component at a radius nearly 1.6 times farther than the extent of the gas disk detected in HI. The 3.6{\\mu}m image is also decomposed into bulge and disk components to account for the stellar light distribution. Optical broadband B and R and narrowband ...
Calculation of momentum distribution function of a non-thermal fermionic dark matter
Biswas, Anirban; Gupta, Aritra
2017-03-01
The most widely studied scenario in dark matter phenomenology is the thermal WIMP scenario. In spite of numerous efforts to detect WIMPs, we have so far no direct evidence for them. A possible explanation for this non-observation of dark matter could be its very feeble interaction strength, because of which it fails to thermalise with the rest of the cosmic soup. In other words, the dark matter might be of non-thermal origin, where the relic density is obtained by the so-called freeze-in mechanism. Furthermore, if this non-thermal dark matter is itself produced substantially from the decay of another non-thermal mother particle, then their distribution functions may differ in both size and shape from the usual equilibrium distribution function. In this work, we have studied such a non-thermal (fermionic) dark matter scenario in the light of a new type of U(1)B-L model. The U(1)B-L model is interesting since, besides being anomaly free, it can give rise to neutrino mass by the Type II see-saw mechanism. Moreover, as we show, it can accommodate a non-thermal fermionic dark matter as well. Starting from the collision terms, we have calculated the momentum distribution function of the dark matter by solving a coupled system of Boltzmann equations. We then used it to calculate the final relic abundance, as well as other relevant physical quantities. We have also compared our result with that obtained from solving the usual Boltzmann (or rate) equations directly in terms of the comoving number density Y. Our findings suggest that the latter approximation is valid only in cases where the system under study is close to equilibrium, and hence should be used with caution.
Magnetization curves and probability angular distribution of the magnetization vector in Er2Fe14Si3
Sobh, Hala A.; Aly, Samy H.; Shabara, Reham M.; Yehia, Sherif
2016-01-01
Specific magnetic and magneto-thermal properties of Er2Fe14Si3, in the temperature range of 80-300 K, have been investigated using basic laws of classical statistical mechanics in a simple model. In this model, the constructed partition function was used to derive, and therefore calculate, the temperature and/or field dependence of a host of physical properties. Examples of these properties are: the magnetization, magnetic heat capacity, magnetic susceptibility, probability angular distribution of the magnetization vector, and the associated angular dependence of energy. We highlight a correlation between the energy of the system, its magnetization behavior and the angular location of the magnetization vector. Our results show that Er2Fe14Si3 is an easy-axis system in the temperature range 80-114 K, but switches to an easy-plane system at T≥114 K. This transition is also supported both by the temperature dependence of the magnetic heat capacity, which develops a peak at a temperature of ~114 K, and by the probability landscape, which shows, in zero magnetic field, a prominent peak in the basal plane at T=113.5 K.
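The partition-function approach can be sketched on a toy single-ion uniaxial-anisotropy model (an illustrative stand-in with a single anisotropy constant, not the multi-parameter energy used for Er2Fe14Si3): weighting each orientation of the magnetization vector by exp(-E(θ)/kT) gives the angular probability distribution, and its moments give thermal averages such as <|cos θ|>.

```python
import math

def mean_cos_theta(k1_over_kt, n=2000):
    """<|cos(theta)|> of the magnetization direction for a classical
    uniaxial-anisotropy energy E = K1 sin^2(theta) at temperature T,
    from the orientational partition function
    Z = integral of exp(-E/kT) sin(theta) dtheta over [0, pi]."""
    z = m = 0.0
    for i in range(n):
        th = (i + 0.5) * math.pi / n        # midpoint rule on [0, pi]
        w = math.exp(-k1_over_kt * math.sin(th) ** 2) * math.sin(th)
        z += w
        m += abs(math.cos(th)) * w
    return m / z
```

A positive K1/kT concentrates the distribution along the axis (easy-axis behavior, <|cos θ|> near 1), while a negative value pushes it into the basal plane (easy-plane behavior, <|cos θ|> near 0), which is the kind of switch the abstract reports at ~114 K.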
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-04
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
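The core of a binomial scoring scheme can be sketched as follows (a generic illustration of the idea, not ProVerB's exact scoring function): if each of n theoretical fragment peaks matches an experimental peak by chance with probability p, the significance of observing at least k matches is the binomial survival probability, conveniently reported as a -log10 score.

```python
import math
from math import comb

def match_score(n_peaks, k_matched, p_match):
    """-log10 of the binomial probability of at least k_matched chance
    matches among n_peaks theoretical fragment peaks, each matching with
    probability p_match. Higher scores mean less likely by chance."""
    p = sum(comb(n_peaks, i) * p_match ** i * (1 - p_match) ** (n_peaks - i)
            for i in range(k_matched, n_peaks + 1))
    return -math.log10(p)
```

Peak intensities, which the paper emphasizes, would enter by weighting matches rather than counting them equally; the binomial tail above is only the intensity-blind skeleton.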
Falandysz, Jerzy; Zhang, Ji; Wang, Yuanzhong; Krasińska, Grażyna; Kojta, Anna; Saba, Martyna; Shen, Tao; Li, Tao; Liu, Honggao
2015-12-15
This study investigated the accumulation and distribution of mercury (Hg) in mushrooms of the genus Leccinum that emerged on soils of totally different geochemical bedrock composition. Hg was measured in 6 species from geographically diverse regions of the mercuriferous belt areas in Yunnan, SW China, and in 8 species from the non-mercuriferous regions of Poland in Europe. Also assessed was the probable dietary intake of Hg from consumption of Leccinum spp., which are traditional organic food items in SW China and Poland. The results showed that L. chromapes, L. extremiorientale, L. griseum and L. rugosicepes are good accumulators of Hg, with Hg sequestered in caps of up to 4.8, 3.5, 3.6 and 4.7 mg Hg kg(-1) dry matter, respectively. Leccinum mushrooms from Poland also efficiently accumulated Hg, but their average Hg content was an order of magnitude lower due to the low concentrations of Hg in the forest topsoil of Poland compared to the elevated contents in Yunnan. Consumption of Leccinum mushrooms with elevated Hg contents in Yunnan at rates of up to 300 g fresh product per week during the foraging season would not result in Hg intake exceeding the provisional weekly tolerance limit of 0.004 mg kg(-1) body mass, assuming no Hg ingestion from other foods.
Porto, Markus; Roman, H Eduardo
2002-04-01
We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially, P(y) ≈ exp(−α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretch exponent β = 2/3, in much better agreement with the empirical data.
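A minimal simulation sketch of the linear-variance process described above (the parameter values are illustrative, not taken from the paper): the exponential, heavier-than-Gaussian tails show up as excess kurtosis of the simulated series.

```python
import random

def simulate_linear_arch(a, b, n, seed=0):
    """Simulate y_t = sigma(y_{t-1}) * eps_t with sigma^2(y) = a + b*|y|,
    i.e. the linear-variance ARCH process, eps_t standard normal."""
    rng = random.Random(seed)
    y, series = 0.0, []
    for _ in range(n):
        sigma = (a + b * abs(y)) ** 0.5
        y = sigma * rng.gauss(0.0, 1.0)
        series.append(y)
    return series

ys = simulate_linear_arch(a=1.0, b=2.0, n=200_000)

# Sample kurtosis m4/m2^2; a Gaussian gives 3, exponential-tailed
# distributions give substantially more.
m2 = sum(v * v for v in ys) / len(ys)
m4 = sum(v ** 4 for v in ys) / len(ys)
kurtosis = m4 / m2 ** 2
```

A kurtosis well above 3 is the fingerprint of the non-Gaussian tails; a histogram of |y| on a log scale would show the roughly linear (exponential) decay.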
HAN Li-Bo; GONG Xiao-Long; CAO Li; WU Da-Jin
2007-01-01
An approximate Fokker-Planck equation for the logistic growth model driven by coloured correlated noises is derived by applying the Novikov theorem and the Fox approximation. The steady-state probability distribution (SPD) and the mean of the tumour cell number are analysed. It is found that the SPD is a single-extremum configuration when the degree of correlation between the multiplicative and additive noises, λ, satisfies -1 < λ ≤ 0, and can be a double-extrema configuration for 0 < λ < 1. A configuration transition occurs as the noise parameters vary. A minimum appears in the curve of the mean steady-state tumour cell number, 〈x〉, versus λ; the position and the value of the minimum are controlled by the noise correlation times.
Han, Seok-Jung; Yang, Joon-Eon; Lee, Won-Jea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2008-05-15
A safety issue of a Very High Temperature Reactor (VHTR) is to estimate the Reliability of a Passive safety System (RoPS). The Stress-Strength Interference (SSI) approach is widely adopted to estimate the RoPS. Major efforts on the RoPS have addressed the quantification of the operational uncertainty of a passive safety system given a postulated accident scenario. However, another important problem is determining the failure criteria of a passive safety system, because the failure criteria for a VHTR are ambiguous due to its inherent safety characteristics. This paper focuses on an investigation of the reliability characteristics due to a change of the probability distribution in a failure criterion for the quantification of the RoPS.
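The stress-strength interference approach has a well-known closed form when both load (stress) and strength are modeled as independent normal random variables: R = P(strength > load) = Φ((μ_S − μ_L)/√(σ_S² + σ_L²)). A hedged sketch with illustrative numbers (not values from the paper):

```python
from math import erf, sqrt

def ssi_reliability(mu_s, sd_s, mu_l, sd_l):
    """Stress-strength interference reliability P(strength > load)
    for independent normal strength S ~ N(mu_s, sd_s^2) and
    load L ~ N(mu_l, sd_l^2):
        R = Phi((mu_s - mu_l) / sqrt(sd_s^2 + sd_l^2))."""
    z = (mu_s - mu_l) / sqrt(sd_s**2 + sd_l**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Illustrative numbers: mean strength 120, mean load 100, both sd 10.
r = ssi_reliability(120.0, 10.0, 100.0, 10.0)
```

Shifting the failure criterion (here, the strength distribution) directly shifts R, which is the sensitivity the abstract sets out to investigate.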
Chung-Ho Su
2010-12-01
To forecast a complex and non-linear system such as a stock market, advanced artificial intelligence algorithms like neural networks (NNs) and genetic algorithms (GAs) have been proposed as new approaches. However, for the average stock investor, two major disadvantages are argued against these advanced algorithms: (1) the rules generated by NNs and GAs are difficult to apply in investment decisions; and (2) the time complexity of the algorithms to produce forecasting outcomes is very high. Therefore, to provide understandable rules for investors and to reduce the time complexity of forecasting algorithms, this paper proposes a novel model for the forecasting process, which combines two granulating methods (the minimize-entropy-principle approach and the cumulative-probability-distribution approach) and a rough set algorithm. Model verification demonstrates that the proposed model surpasses three conventional fuzzy time-series models and a multiple linear regression model (MLR) in forecast accuracy.
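The cumulative-probability-distribution granulation step can be sketched as equal-probability partitioning of the observed series; this is a simplified stand-in for the paper's procedure, and the price series below is invented.

```python
def cdf_cut_points(values, n_intervals):
    """Granulate a series using its empirical cumulative distribution:
    cut points are placed at equal-probability quantiles, so each
    linguistic interval holds roughly the same number of observations."""
    xs = sorted(values)
    cuts = []
    for k in range(1, n_intervals):
        idx = round(k * (len(xs) - 1) / n_intervals)
        cuts.append(xs[idx])
    return cuts

prices = [10, 12, 11, 15, 14, 13, 18, 17, 20, 16, 19, 21]
cuts = cdf_cut_points(prices, 3)   # two cut points -> three intervals
```

Intervals defined this way adapt to where the data actually concentrate, unlike fixed-width partitions, which is the motivation for CDF-based granulation in fuzzy time-series models.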
Silva, A. Christian; Yakovenko, Victor M.
2003-06-01
We compare the probability distribution of returns for the three major stock-market indexes (Nasdaq, S&P500, and Dow-Jones) with an analytical formula recently derived by Drăgulescu and Yakovenko for the Heston model with stochastic variance. For the period 1982-1999, we find very good agreement between the theory and the data for a wide range of time lags from 1 to 250 days. On the other hand, deviations start to appear when the data for 2000-2002 are included. We interpret this as statistical evidence of a major change in the market from a positive growth rate in the 1980s and 1990s to a negative rate in the 2000s.
Cai, H.; Wang, M.; Elgowainy, A.; Han, J. (Energy Systems)
2012-07-06
Greenhouse gas (CO{sub 2}, CH{sub 4} and N{sub 2}O, hereinafter GHG) and criteria air pollutant (CO, NO{sub x}, VOC, PM{sub 10}, PM{sub 2.5} and SO{sub x}, hereinafter CAP) emission factors for various types of power plants burning various fuels with different technologies are important upstream parameters for estimating life-cycle emissions associated with alternative vehicle/fuel systems in the transportation sector, especially electric vehicles. The emission factors are typically expressed in grams of GHG or CAP per kWh of electricity generated by a specific power generation technology. This document describes our approach for updating and expanding GHG and CAP emission factors in the GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model developed at Argonne National Laboratory (see Wang 1999 and the GREET website at http://greet.es.anl.gov/main) for various power generation technologies. These GHG and CAP emissions are used to estimate the impact of electricity use by stationary and transportation applications on their fuel-cycle emissions. The electricity generation mixes and the fuel shares attributable to various combustion technologies at the national, regional and state levels are also updated in this document. The energy conversion efficiencies of electric generating units (EGUs) by fuel type and combustion technology are calculated on the basis of the lower heating values of each fuel, to be consistent with the basis used in GREET for transportation fuels. On the basis of the updated GHG and CAP emission factors and energy efficiencies of EGUs, the probability distribution functions (PDFs), which describe the relative likelihood that the emission factors and energy efficiencies, treated as random variables, take on given values, are updated using best-fit statistical curves to characterize the uncertainties associated with GHG and CAP emissions in life-cycle analyses.
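As a hedged illustration of fitting a PDF to emission-factor samples (GREET uses best-fit statistical curves; the lognormal choice and the sample values below are assumptions made for this sketch, not GREET data):

```python
from math import log, exp

def fit_lognormal(samples):
    """Method-of-moments fit of a lognormal PDF on the log scale:
    mu and sigma are the mean and standard deviation of ln(x).
    Positive, right-skewed quantities such as emission factors are
    often summarized this way before Monte Carlo uncertainty sampling."""
    logs = [log(x) for x in samples]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)
    return mu, var ** 0.5

# Hypothetical NOx emission factors in g/kWh for one plant type.
factors = [0.8, 1.1, 0.9, 1.4, 1.0, 1.2]
mu, sigma = fit_lognormal(factors)
median = exp(mu)   # the fitted distribution's median in original units
```

Once mu and sigma are fixed, the fitted PDF can be sampled in a Monte Carlo run to propagate emission-factor uncertainty into life-cycle results.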
Soil organic matter distribution as influenced by enchytraeid and earthworm activity
Koutika, L.S.; Didden, W.A.M.; Marinissen, J.C.Y.
2001-01-01
Loam and sandy soils, and the earthworm casts produced with 14C-labelled plant material in both soils, were incubated in airtight glass vessels with and without enchytraeids to evaluate the effects of soil fauna on the distribution and fragmentation of organic matter. After 1, 3, and 6 weeks, the
von Schmid, M.; Bagchi, S.; Bonig, S.; Csatlos, M.; Dillmann, I.; Dimopoulou, C.; Egelhof, P.; Eremin, V.; Furuno, T.; Geissel, H.; Gernhaeuser, R.; Harakeh, M. N.; Hartig, A-L; Ilieva, S.; Kalantar-Nayestanaki, N.; Kiselev, O.; Kollmus, H.; Kozhuharov, C.; Krasznahorkay, A.; Kroell, T.; Kuilman, M.; Litvinov, S.; Litvinov, Yu A.; Mahjour-Shafiei, M.; Mutterer, M.; Nagae, D.; Najafi, M. A.; Nociforo, C.; Nolden, F.; Popp, U.; Rigollet, C.; Roy, S.; Scheidenberger, C.; Steck, M.; Streicher, B.; Stuhl, L.; Thuerauf, M.; Uesaka, T.; Weick, H.; Winfield, J. S.; Winters, D.; Woods, P. J.; Yamaguchi, T.; Yue, K.; Zamora, J. C.; Zenihiro, J.
2015-01-01
We have measured the nuclear-matter distribution of the doubly-magic N = Z nucleus Ni-56 by investigating elastic proton scattering in inverse kinematics. The radioactive beam of Ni-56 was injected and stored in the experimental storage ring (ESR, GSI) and interacted with an internal hydrogen gas-jet
Bai, Junhong; Deng, Wei; Zhang, Yuxia; Wang, Guoping
2002-03-01
The spatial distribution characteristics of soil organic matter and nitrogen in a natural floodplain wetland were studied in this paper. The results showed that the vertical distributions of nutrients in the floodplain wetland were very similar, while their horizontal distributions in surface soil were distinctly different. The highest concentrations of nutrients were found not in the frequently flooded floodplain wetland, where the concentrations of soil organic matter and total nitrogen were 2.36% and 2605.4 mg/kg, but in floodplain flooded at a certain frequency: the concentrations of soil organic matter in the one-year and five-year floodplain wetlands were 3.70% and 3.92%, respectively, and the concentrations of total nitrogen were 3666.4 mg/kg and 3125.6 mg/kg, respectively. The carbon-to-nitrogen ratios were relatively low. Factors such as wet-dry cycling, groundwater, vegetation growth and pH values all influenced the distribution of soil organic matter and nitrogen in the wetland.
Pulleman, M.M.; Six, J.; Breemen, van N.; Jongmans, A.G.
2005-01-01
Stable microaggregates can physically protect occluded soil organic matter (SOM) against decomposition. We studied the effects of agricultural management on the amount and characteristics of microaggregates and on SOM distribution in a marine loam soil in the Netherlands. Three long-term farming systems
Distribution of suspended particulate matter in the waters of eastern continental margin of India
Rao, Ch.M.
Distribution of total suspended matter (TSM) in surface and near-bottom (approximately 5 m above the sea bed) waters reveals a wide variation in concentration and composition. TSM varies from 0.05 to 122 mg/l in surface waters, and from 0.25 to 231...
The distribution of particulate matter (PM) concentrations has an impact on human health effects and the setting of PM regulations. Since PM is commonly sampled on less than daily schedules, the magnitude of sampling errors needs to be determined. Daily PM data from Spokane, W...
Distribution of particulate organic matter in Rajapur and Vagothan estuarines (west coast of India)
Tulaskar, A.S.; Sawant, S.S.; Wagh, A.B.
The distribution of particulate organic carbon (POC), particulate carbohydrates (PCHO) and particulate proteins (PP) in suspended particulate matter was studied. The POC, PCHO and PP concentrations ranged from 176 to 883 μg/l, 115 to 647 μ...
B. Kutlu; M. Civi
2006-01-01
We study the order parameter probability distribution at the critical point for the three-dimensional spin-1/2 and spin-1 Ising models on the simple cubic lattice under periodic boundary conditions.
Cyr-Racine, Francis-Yan; Zavala, Jesus; Bringmann, Torsten; Vogelsberger, Mark; Pfrommer, Christoph
2015-01-01
We formulate an effective theory of structure formation (ETHOS) that enables cosmological structure formation to be computed in almost any microphysical model of dark matter physics. This framework maps the detailed microphysical theories of particle dark matter interactions into the physical effective parameters that shape the linear matter power spectrum and the self-interaction transfer cross section of non-relativistic dark matter. These are the input to structure formation simulations, which follow the evolution of the cosmological and galactic dark matter distributions. Models with similar effective parameters in ETHOS but with different dark particle physics would nevertheless result in similar dark matter distributions. We present a general method to map an ultraviolet complete or effective field theory of low energy dark matter physics into parameters that affect the linear matter power spectrum and carry out this mapping for several representative particle models. We further propose a simple but use...
IGM Constraints from the SDSS-III/BOSS DR9 Ly-alpha Forest Flux Probability Distribution Function
Lee, Khee-Gan; Spergel, David N; Weinberg, David H; Hogg, David W; Viel, Matteo; Bolton, James S; Bailey, Stephen; Pieri, Matthew M; Carithers, William; Schlegel, David J; Lundgren, Britt; Palanque-Delabrouille, Nathalie; Suzuki, Nao; Schneider, Donald P; Yeche, Christophe
2014-01-01
The Lyα forest flux probability distribution function (PDF) is an established probe of intergalactic medium (IGM) astrophysics, especially the temperature-density relationship of the IGM. We measure the flux PDF from 3393 Baryon Oscillation Spectroscopic Survey (BOSS) quasars from SDSS Data Release 9, and compare with mock spectra that include careful modeling of the noise, continuum, and astrophysical uncertainties. The BOSS flux PDFs, measured at ⟨z⟩ = [2.3, 2.6, 3.0], are compared with PDFs created from mock spectra drawn from a suite of hydrodynamical simulations that sample the IGM temperature-density relationship, γ, and temperature at mean density, T0, where T(Δ) = T0 Δ^(γ-1). We find that a significant population of partial Lyman-limit systems with a column-density distribution slope of β_pLLS ~ -2 is required to explain the data at the low-flux end of the flux PDF, while uncertainties in the mean Lyα forest transmission affect the...
Shuji, OBATA; Shigeru, OHKURO; Physics Laboratory, Faculty of Science and Engineering, Tokyo Denki University; Laboratory of Information and System Engineering, Hachinohe Institute of Technology
1999-01-01
We have been studying chaotic behavior and chaos-like behavior in continued fractions. In this paper, such chaos-like behavior is investigated in detail. This behavior originates in the complex numbers that determine the Cauchy distributions, where cyclic terms discretely appear at isolated parameter values. The distributions are formed along with alternate tangent functions that are dominated by the cyclic terms characterized by double-Markov processes. Finally, the probability densities of ...
Dark Matter distribution in the Milky Way: microlensing and dynamical constraints
Iocco, Fabio; Bertone, Gianfranco [Institut d' Astrophysique de Paris, UMR 7095-CNRS, Univ. Pierre and Marie Curie, 98bis Bd Arago 75014 Paris (France); Pato, Miguel; Jetzer, Philippe, E-mail: iocco@iap.fr, E-mail: migpato@physik.uzh.ch, E-mail: gf.bertone@gmail.com, E-mail: jetzer@physik.uzh.ch [Institute for Theoretical Physics, University of Zürich, Winterthurerstrasse 190, 8057 Zürich (Switzerland)
2011-11-01
We show that current microlensing and dynamical observations of the Galaxy make it possible to set interesting constraints on the local Dark Matter density and profile slope towards the galactic centre. Assuming state-of-the-art models for the distribution of baryons in the Galaxy, we find that the most commonly discussed Dark Matter profiles (viz. Navarro-Frenk-White and Einasto) are consistent with microlensing and dynamical observations, while extreme adiabatically compressed profiles are robustly ruled out. When a baryonic model that also includes a description of the gas is adopted, our analysis provides a determination of the local Dark Matter density, ρ{sub 0} = 0.20-0.56 GeV/cm{sup 3} at 1σ, which is found to be compatible with estimates in the literature based on different techniques.
Oguri, Masamune; Falco, Emilio E
2013-01-01
We derive the average mass profile of elliptical galaxies from an ensemble of 161 strong gravitational lens systems selected from several surveys, assuming that the mass profile scales with the stellar mass and effective radius of each lensing galaxy. The total mass profile is well fitted by a power law ρ(r) ∝ r^γ with best-fit slope γ = -2.11 ± 0.05. The decomposition of the total mass profile into stellar and dark matter distributions is difficult due to a fundamental degeneracy between the stellar initial mass function (IMF) and the dark matter fraction f_DM. We demonstrate that this IMF-f_DM degeneracy can be broken by adding direct stellar mass fraction measurements from quasar microlensing observations. Our best-fit model prefers the Salpeter IMF over the Chabrier IMF, and a smaller central dark matter fraction than that predicted by adiabatic contraction models.
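For a power-law density ρ(r) ∝ r^γ such as the best-fit slope quoted above, the enclosed mass scales analytically as M(<r) ∝ r^(3+γ); a short numerical check (the normalization and radii below are arbitrary illustrative values):

```python
from math import pi

def enclosed_mass(gamma, r_max, rho0=1.0, r0=1.0, steps=100_000):
    """Numerically integrate M(<r) = int_0^r 4*pi*r'^2 rho(r') dr'
    for a power-law density rho(r) = rho0 * (r/r0)**gamma,
    using the midpoint rule (convergent for gamma > -3)."""
    dr = r_max / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr
        total += 4.0 * pi * r * r * rho0 * (r / r0) ** gamma * dr
    return total

gamma = -2.11                 # best-fit slope from the lens ensemble
m1 = enclosed_mass(gamma, 1.0)
m2 = enclosed_mass(gamma, 2.0)
ratio = m2 / m1               # analytic prediction: 2**(3 + gamma)
```

Because 3 + γ ≈ 0.89 is close to 1, the enclosed mass grows nearly linearly with radius, the behaviour of a flat rotation curve, which is why γ ≈ −2 is often called a near-isothermal slope.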
Massicotte, Philippe; Asmala, Eero; Stedmon, Colin
2017-01-01
Based on an extensive literature survey containing more than 12,000 paired measurements of dissolved organic carbon (DOC) concentrations and absorption of chromophoric dissolved organic matter (CDOM) distributed over four continents and seven oceans, we described the global distribution and transformation of dissolved organic matter (DOM) along the aquatic continuum across rivers and lakes to oceans. A strong log-linear relationship (R2 = 0.92) between DOC concentration and CDOM absorption at 350 nm was observed at a global scale, but was found to be ecosystem-dependent at local and regional scales. Our results reveal that as DOM is transported towards the oceans, the robustness of the observed relation decreases rapidly (R2 from 0.94 to 0.44), indicating a gradual decoupling between DOC and CDOM. This likely reflects the decreased connectivity between the landscape and DOM along the aquatic continuum.
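The log-linear DOC-CDOM relationship reported above amounts to an ordinary least-squares fit on log-transformed variables; a sketch with invented paired samples (not data from the survey):

```python
from math import log

def r_squared(xs, ys):
    """Coefficient of determination for the least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Invented paired samples: CDOM absorption at 350 nm (1/m) and DOC (mg/l).
a350 = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
doc = [1.1, 1.9, 4.2, 7.8, 16.5, 31.0]
r2 = r_squared([log(v) for v in a350], [log(v) for v in doc])
```

A high R2 on the log-log scale, as in this toy fit, is what "strong log-linear relationship" means; the decoupling described above corresponds to this statistic falling as samples move seaward.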
Quattrocchi, Carlo Cosimo; Errante, Yuri; Mallio, Carlo Augusto; Carideo, Luciano; Scarciolla, Laura; Santini, Daniele; Tonini, Giuseppe; Zobel, Bruno Beomonte
2014-11-01
The aim of this study was to test, by means of a voxel-based approach, the hypothesis that there is a different spatial distribution of brain metastases (BM) and white matter hyperintensities (WMH) and that the presence of WMH affects the location of BM in lung and non-lung cancer patients. Two hundred consecutive cancer patients at first diagnosis of BM were included. Images were acquired using a 1.5 Tesla MRI system (Magnetom Avanto B13, Siemens, Erlangen, Germany). Axial FLAIR T2-weighted images and gadolinium-enhanced T1-weighted images were post-processed for segmentation, co-registration and analysis. Binary lesion masks were created for WMH and BM, using Volumes of Interest. Lesion probability maps were generated and the voxel-based lesion-symptom mapping approach was used to model each voxel and to calculate a non-parametric statistic (Brunner-Munzel test) describing the differences between the groups. In the lung cancer group we found a higher frequency of BM in WMH- than in WMH+ patients in the occipital lobe and the cerebellum. In contrast, BM were more frequent in the right frontal lobe in WMH+ than in WMH- patients. We suggest that there exists an inverse spatial distribution in the brain between WMH and BM. In lung cancer patients, the presence of WMH seems to shift the distribution of BM toward locations different from what is expected based on the primary tumor.
IGM CONSTRAINTS FROM THE SDSS-III/BOSS DR9 Lyα FOREST TRANSMISSION PROBABILITY DISTRIBUTION FUNCTION
Lee, Khee-Gan; Hennawi, Joseph F. [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Spergel, David N. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Weinberg, David H. [Department of Astronomy and Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210 (United States); Hogg, David W. [Center for Cosmology and Particle Physics, New York University, 4 Washington Place, Meyer Hall of Physics, New York, NY 10003 (United States); Viel, Matteo [INAF, Osservatorio Astronomico di Trieste, Via G. B. Tiepolo 11, I-34131 Trieste (Italy); Bolton, James S. [School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD (United Kingdom); Bailey, Stephen; Carithers, William; Schlegel, David J. [E.O. Lawrence Berkeley National Lab, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Pieri, Matthew M. [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Portsmouth PO1 3FX (United Kingdom); Lundgren, Britt [Department of Astronomy, University of Wisconsin, Madison, WI 53706 (United States); Palanque-Delabrouille, Nathalie; Yèche, Christophe [CEA, Centre de Saclay, Irfu/SPP, F-91191 Gif-sur-Yvette (France); Suzuki, Nao [Kavli Institute for the Physics and Mathematics of the Universe (IPMU), The University of Tokyo, Kashiwano-ha 5-1-5, Kashiwa-shi, Chiba (Japan); Schneider, Donald P., E-mail: lee@mpia.de [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)
2015-02-01
The Lyα forest transmission probability distribution function (PDF) is an established probe of intergalactic medium (IGM) astrophysics, especially the temperature-density relationship of the IGM. We measure the transmission PDF from 3393 Baryon Oscillation Spectroscopic Survey (BOSS) quasars from Sloan Digital Sky Survey Data Release 9, and compare with mock spectra that include careful modeling of the noise, continuum, and astrophysical uncertainties. The BOSS transmission PDFs, measured at ⟨z⟩ = [2.3, 2.6, 3.0], are compared with PDFs created from mock spectra drawn from a suite of hydrodynamical simulations that sample the IGM temperature-density relationship, γ, and temperature at mean density, T0, where T(Δ) = T0 Δ^(γ-1). We find that a significant population of partial Lyman-limit systems (LLSs) with a column-density distribution slope of β_pLLS ~ -2 is required to explain the data at the low-transmission end of the transmission PDF, while uncertainties in the mean Lyα forest transmission affect the high-transmission end. After modeling the LLSs and marginalizing over mean-transmission uncertainties, we find that γ = 1.6 best describes the data over our entire redshift range, although constraints on T0 are affected by systematic uncertainties. Within our model framework, isothermal or inverted temperature-density relationships (γ ≤ 1) are disfavored at a significance of over 4σ, although this could be somewhat weakened by cosmological and astrophysical uncertainties that we did not model.
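The temperature-density relation T(Δ) = T0 Δ^(γ-1) used above is straightforward to evaluate; the sketch below (the T0 value is illustrative, not the paper's constraint) shows how γ = 1 corresponds to an isothermal IGM while γ > 1 makes underdense gas cooler.

```python
def igm_temperature(delta, t0, gamma):
    """Power-law IGM temperature-density relation
    T(Delta) = T0 * Delta**(gamma - 1), where Delta is the
    overdensity and T0 the temperature at mean density."""
    return t0 * delta ** (gamma - 1.0)

t0 = 15000.0                              # illustrative value, in K
t_void = igm_temperature(0.3, t0, 1.6)    # underdense gas: cooler for gamma > 1
t_mean = igm_temperature(1.0, t0, 1.6)    # equals T0 by construction
t_iso = igm_temperature(0.3, t0, 1.0)     # gamma = 1: isothermal, T = T0
```

An inverted relation (γ < 1) would instead make voids hotter than the mean, which is the scenario disfavored at over 4σ in the analysis above.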
Dynamical constraints on the dark matter distribution in the Milky Way
Pato, Miguel; Bertone, Gianfranco
2015-01-01
An accurate knowledge of the dark matter distribution in the Milky Way is of crucial importance for galaxy formation studies and current searches for particle dark matter. In this paper we set new dynamical constraints on the Galactic dark matter profile by comparing the observed rotation curve, updated with a comprehensive compilation of kinematic tracers, with that inferred from a wide range of observation-based morphologies of the bulge, disc and gas. The generalised Navarro-Frenk-White (NFW) and Einasto dark matter profiles are fitted to the data in order to determine the favoured ranges of local density, slope and scale radius. For a representative baryonic model, we find a local dark matter density of 0.420 +0.021/-0.018 (2σ) ± 0.025 GeV/cm³ (0.420 +0.019/-0.021 (2σ) ± 0.026 GeV/cm³) for NFW (Einasto), where the second error is an estimate of the systematic due to baryonic modelling. The main sources of uncertainty inside and outside the solar circle are baryonic modelling and rotation curve meas...
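The generalised NFW profile fitted above can be written ρ(r) = ρ_s / ((r/r_s)^γ (1 + r/r_s)^(3-γ)); a sketch normalising it to a given local density (the scale radius and solar distance below are illustrative round numbers, not the paper's fit values):

```python
def gnfw_density(r, rho_s, r_s, gamma):
    """Generalised NFW profile
    rho(r) = rho_s / ((r/r_s)**gamma * (1 + r/r_s)**(3 - gamma));
    gamma = 1 recovers the standard NFW form."""
    x = r / r_s
    return rho_s / (x ** gamma * (1.0 + x) ** (3.0 - gamma))

# Normalise rho_s (illustrative scale radius 20 kpc, gamma = 1) so the
# density at the solar circle R0 = 8 kpc matches 0.42 GeV/cm^3.
r_s, gamma, r_sun, rho_local = 20.0, 1.0, 8.0, 0.42
rho_s = rho_local * (r_sun / r_s) * (1.0 + r_sun / r_s) ** 2
check = gnfw_density(r_sun, rho_s, r_s, gamma)
```

With the profile anchored to a local density, the free parameters left for the rotation-curve fit are the slope γ and the scale radius r_s, the quantities constrained in the analysis above.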
Baryonic impact on the dark matter distribution in Milky Way-size galaxies and their satellites
Zhu, Qirong; Maji, Moupiya; Li, Yuexing; Springel, Volker; Hernquist, Lars
2015-01-01
We study the impact of baryons on the distribution of dark matter in a Milky Way-size halo by comparing a high-resolution, moving-mesh cosmological simulation with its dark matter-only counterpart. We identify three main processes related to baryons -- adiabatic contraction, tidal disruption and reionization -- which jointly shape the dark matter distribution in both the main halo and its subhalos. The relative effect of each baryonic process depends strongly on the subhalo mass. For massive subhalos with maximum circular velocity v_max > 35 km/s, adiabatic contraction increases the dark matter concentration, making these halos less susceptible to tidal disruption. For low-mass subhalos with v_max < 20 km/s, reionization effectively reduces their mass on average by ≈30% and v_max by ≈20%. For intermediate subhalos with 20 km/s < v_max < 35 km/s, which share a similar mass range as the classical dwarf spheroidals, strong tidal truncation induced by the...
On the Impact of Particulate Matter Distribution on Pressure Drop of Wall-Flow Particulate Filters
Vicente Bermúdez
2017-03-01
Wall-flow particulate filters are a required exhaust aftertreatment system to abate particulate matter emissions and meet current and incoming regulations applying worldwide to new generations of diesel and gasoline internal combustion engines. Despite their high filtration efficiency covering the whole range of emitted particle sizes, the porous substrate constitutes a flow restriction that becomes especially relevant as particulate matter, both soot and ash, is collected. The dependence of the resulting pressure drop, and hence the fuel consumption penalty, on the particulate matter distribution along the inlet channels is discussed in this paper, taking as reference experimental data obtained in water injection tests upstream of the particulate filter. This technique is demonstrated to reduce the particulate filter pressure drop without negative effects on filtration performance. To explain these experimental data, the characteristics of the particulate layer are diagnosed by applying modeling techniques. Different soot mass distributions along the inlet channels are analyzed, combined with porosity changes, to assess the new properties after water injection, and their influence on the subsequent soot loading process and regeneration is assessed. The results evidence the main mechanisms by which water injection at the filter inlet reduces pressure drop and boost interest in control strategies able to force the re-entrainment of most of the particulate matter towards the inlet channels' end.
Liu, Jia; Hill, J. Colin; Sherwin, Blake D.; Petri, Andrea; Böhm, Vanessa; Haiman, Zoltán
2016-11-01
Unprecedentedly precise cosmic microwave background (CMB) data are expected from ongoing and near-future CMB stage III and IV surveys, which will yield reconstructed CMB lensing maps with effective resolution approaching several arcminutes. The small-scale CMB lensing fluctuations receive non-negligible contributions from nonlinear structure in the late-time density field. These fluctuations are not fully characterized by traditional two-point statistics, such as the power spectrum. Here, we use N -body ray-tracing simulations of CMB lensing maps to examine two higher-order statistics: the lensing convergence one-point probability distribution function (PDF) and peak counts. We show that these statistics contain significant information not captured by the two-point function and provide specific forecasts for the ongoing stage III Advanced Atacama Cosmology Telescope (AdvACT) experiment. Considering only the temperature-based reconstruction estimator, we forecast 9 σ (PDF) and 6 σ (peaks) detections of these statistics with AdvACT. Our simulation pipeline fully accounts for the non-Gaussianity of the lensing reconstruction noise, which is significant and cannot be neglected. Combining the power spectrum, PDF, and peak counts for AdvACT will tighten cosmological constraints in the Ωm-σ8 plane by ≈30 %, compared to using the power spectrum alone.
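On a pixelised convergence map, the two higher-order statistics discussed above reduce to a histogram of pixel values (the one-point PDF) and a count of local maxima (peaks); a toy sketch, with an invented 5×5 map standing in for a reconstructed lensing map:

```python
def one_point_pdf(kappa_map, bins, lo, hi):
    """Histogram of pixel values: the one-point PDF of a convergence map."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for row in kappa_map:
        for v in row:
            i = min(int((v - lo) / width), bins - 1)
            if i >= 0:
                counts[i] += 1
    return counts

def count_peaks(kappa_map):
    """A peak is a pixel strictly larger than all 8 of its neighbours."""
    n, peaks = len(kappa_map), 0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            v = kappa_map[i][j]
            if all(v > kappa_map[i + di][j + dj]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0)):
                peaks += 1
    return peaks

# Tiny invented map with two local maxima (0.9 and 0.8).
toy = [[0.0, 0.1, 0.0, 0.0, 0.0],
       [0.1, 0.9, 0.1, 0.0, 0.0],
       [0.0, 0.1, 0.0, 0.1, 0.0],
       [0.0, 0.0, 0.1, 0.8, 0.1],
       [0.0, 0.0, 0.0, 0.1, 0.0]]
n_peaks = count_peaks(toy)
pdf = one_point_pdf(toy, 10, 0.0, 1.0)
```

Both statistics are sensitive to the non-Gaussian tails of the convergence field, which is the information the power spectrum alone misses; in practice the map would first be smoothed and the noise properties modeled, as the abstract emphasises.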
Liu, Guangliang; Cai, Yong
2010-11-01
The complexation of arsenic (As) with dissolved organic matter (DOM), although playing an important role in regulating As mobility and transformation, is poorly characterized, as evidenced by the scarce reporting of fundamental parameters of As-DOM complexes. The complexation of arsenite (AsIII) with Aldrich humic acid (HA) at different pHs was characterized using a recently developed analytical technique to measure both free and DOM-bound As. Conditional distribution coefficients (KD), describing the capacity of DOM to bind AsIII from the mass perspective, and apparent stability constants (Ks), describing the stability of the resulting AsIII-DOM complexes, were calculated to characterize AsIII-DOM complexation. LogKD of AsIII ranged from 3.7 to 2.2 (decreasing with increasing As/DOM ratio) at pH 5.2, from 3.6 to 2.6 at pH 7, and from 4.3 to 3.2 at pH 9.3. Two-site ligand binding models can capture the heterogeneity of binding sites and be used to calculate Ks by classifying the binding sites into strong (S1) and weak (S2) groups. LogKs for S1 sites is 7.0, 6.5, and 5.9 at pH 5.2, 7, and 9.3, respectively, approximately 1-2 orders of magnitude higher than for the weak S2 sites. The results suggest that AsIII complexation with DOM increases with pH, as evidenced by significant spikes in the concentrations of DOM-bound AsIII and in KD values at pH 9.3. In contrast to KD, logKs decreased with pH, in particular for S1 sites, probably due to the presence of negatively charged H2AsO3- and the involvement of metal-bridged AsIII-DOM complexation at pH 9.3.
Jørgensen, Linda; Stedmon, Colin; Kragh, Theis
2011-01-01
A fraction of dissolved organic matter (DOM) is able to fluoresce. This ability has been used in the present study to investigate the characteristics and distribution of different DOM fractions. A unique global dataset revealed seven different fluorescent fractions of DOM: two humic-like, four … in the surface layer indicate the quantitative importance of photochemical degradation as a sink of the humic-like compounds. In the dark ocean (below 200 m), significant linear relationships between humic-like DOM fluorescence and microbial activity (apparent oxygen utilization, NO3- and PO43-) were found. These observations imply a link to dark ocean microbial remineralization and indicate that the major source of humic-like compounds is microbial turnover of organic matter. The results of the present study show that the distribution of the humic-like DOM fractions is a balance between supply from continental run-off …
The Cross Correlation between the Gravitational Potential and the Large Scale Matter Distribution
Madsen, S; Gottlöber, S; Müller, V; Madsen, Soeren; Doroshkevich, Andrei G.; Gottloeber, Stefan; Müller, Volker
1997-01-01
The large-scale gravitational potential distribution and its influence on large-scale matter clustering is considered on the basis of six simulations. It is found that the mean separation between zero levels of the potential along random straight lines coincides with the theoretical expectations, but with a large scatter. A strong link between the initial potential and the evolution of structure is shown. It is found that the under-dense and over-dense regions correlate with regions of positive and negative gravitational potential at large redshifts. The over-dense regions arise due to a slow matter flow into the negative potential regions, where more pronounced non-linear structures appear. Such regions are related to the formation of the huge super-large scale structures seen in the galaxy distribution.
Tierney, M.S.
1991-11-01
The Waste Isolation Pilot Plant (WIPP), in southeastern New Mexico, is a research and development facility to demonstrate safe disposal of defense-generated transuranic waste. The US Department of Energy will designate WIPP as a disposal facility if it meets the US Environmental Protection Agency's standard for disposal of such waste; the standard includes a requirement that estimates of cumulative releases of radioactivity to the accessible environment be incorporated in an overall probability distribution. The WIPP Project has chosen an approach to calculation of an overall probability distribution that employs the concept of scenarios for release and transport of radioactivity to the accessible environment. This report reviews the use of Monte Carlo methods in the calculation of an overall probability distribution and presents a logical and mathematical foundation for use of the scenario concept in such calculations. The report also draws preliminary conclusions regarding the shape of the probability distribution for the WIPP system; the preliminary conclusions are based on the possible occurrence of three events and the presence of one feature: namely, the events "attempted boreholes over rooms and drifts," "mining alters ground-water regime," and "water-withdrawal wells provide alternate pathways," and the feature "brine pocket below room or drift." The WIPP system's overall probability distribution is calculated for only five of the sixteen possible scenario classes that can be obtained by combining the four postulated events and features.
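The Monte Carlo construction of an overall probability distribution from scenario events can be sketched as follows. The occurrence probabilities and log-normal release magnitudes below are invented for illustration, not WIPP values; each trial samples which events occur, sums their releases, and the complementary cumulative distribution (CCDF) of total release is tabulated:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Hypothetical scenario events with assumed annual occurrence probabilities.
p_event = {"borehole": 0.3, "mining": 0.2, "well": 0.1}

# Assumed log-normal release magnitude (mu, sigma of ln release) when an event occurs.
release_params = {"borehole": (0.0, 1.0), "mining": (-1.0, 0.5), "well": (-0.5, 0.8)}

totals = np.zeros(n_trials)
for name, p in p_event.items():
    occurs = rng.random(n_trials) < p
    mu, sigma = release_params[name]
    totals += np.where(occurs, rng.lognormal(mu, sigma, n_trials), 0.0)

# Complementary cumulative distribution, the form compared against regulatory limits.
levels = np.logspace(-2, 2, 9)
ccdf = [(totals > lv).mean() for lv in levels]
```

The fraction of trials with zero release corresponds to the "no disruptive event" scenario class; the remaining probability mass is spread over the event combinations, which is how the scenario classes partition the overall distribution.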
Portail, Matthieu; Gerhard, Ortwin; Wegg, Christopher; Ness, Melissa
2017-02-01
We construct a large set of dynamical models of the galactic bulge, bar and inner disc using the made-to-measure method. Our models are constrained to match the red clump giant density from a combination of the VVV, UKIDSS and 2MASS infrared surveys together with stellar kinematics in the bulge from the BRAVA and OGLE surveys, and in the entire bar region from the ARGOS survey. We are able to recover the bar pattern speed and the stellar and dark matter mass distributions in the bar region, thus recovering the entire galactic effective potential. We find a bar pattern speed of 39.0 ± 3.5 km s^-1 kpc^-1, placing the bar corotation radius at 6.1 ± 0.5 kpc and making the Milky Way bar a typical fast rotator. We evaluate the stellar mass of the long bar and bulge structure to be Mbar/bulge = (1.88 ± 0.12) × 10^10 M⊙, larger than the mass of the disc in the bar region, Minner disc = (1.29 ± 0.12) × 10^10 M⊙. The total dynamical mass in the bulge volume is (1.85 ± 0.05) × 10^10 M⊙. Thanks to more extended kinematic data sets and recent measurements of the bulge initial mass function, our models have a low dark matter fraction in the bulge of 17 ± 2 per cent. We find a dark matter density profile which flattens to a shallow cusp or core in the bulge region. Finally, we find dynamical evidence for an extra central mass of ∼0.2 × 10^10 M⊙, probably in a nuclear disc or discy pseudo-bulge.
Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene
2017-03-01
Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity Lν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.
Phase-space constraints on visible and dark matter distributions in elliptical galaxies
Ciotti, L
1999-01-01
There are observational and theoretical indications that both the visible (stars) and the dark matter density distributions in elliptical galaxies increase significantly toward the galactic center. I present here some analytical results obtained with the aid of self-consistent, spherically symmetric two-component galaxy models. These results suggest the possibility that this similar behavior could be a direct consequence of the structural and dynamical constraints imposed by requiring positivity of the phase-space distribution function of each density component.
Governato, F.; Zolotov, A.; Pontzen, A.; Christensen, C.; Oh, S. H.; Brooks, A. M.; Quinn, T.; Shen, S.; Wadsley, J.
2012-05-01
We examine the evolution of the inner dark matter (DM) and baryonic density profiles of a new sample of simulated field galaxies using fully cosmological, Λ cold dark matter (ΛCDM), high-resolution SPH+N-body simulations. These simulations include explicit H2 and metal cooling, star formation (SF) and supernova-driven gas outflows. Starting at high redshift, rapid, repeated gas outflows following bursty SF transfer energy to the DM component and significantly flatten the originally 'cuspy' central DM mass profile of galaxies with present-day stellar masses in the 10^4.5-10^9.8 M⊙ range. At z = 0, the central slope of the DM density profile of our galaxies (measured between 0.3 and 0.7 kpc from their centre) is well fitted by ρDM ∝ r^α with α ≃ -0.5 + 0.35 log10(M★/10^8 M⊙), where M★ is the stellar mass of the galaxy and 4 < log M★ < 9.4. These values imply DM profiles flatter than those obtained in DM-only simulations and in close agreement with those inferred in galaxies from the THINGS and LITTLE THINGS surveys. Only in very small haloes, where by z = 0 SF has converted less than ∼0.03 per cent of the original baryon abundance into stars, do outflows fail to flatten the original cuspy DM profile out to radii resolved by our simulations. The mass (DM and baryonic) measured within the inner 500 pc of each simulated galaxy remains nearly constant over four orders of magnitude in stellar mass for M★ < 10^9 M⊙. This finding is consistent with estimates for faint Local Group dwarfs and field galaxies. These results address one of the outstanding problems faced by the CDM model, namely the strong discrepancy between the original predictions of cuspy DM profiles and the shallower central DM distribution observed in galaxies.
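The fitted central-slope relation quoted above, α ≃ -0.5 + 0.35 log10(M★/10^8 M⊙), can be evaluated directly; a quick sketch:

```python
import numpy as np

def dm_inner_slope(m_star):
    """Central DM density slope from the quoted fit: rho_DM ∝ r^alpha with
    alpha = -0.5 + 0.35*log10(M*/1e8 Msun), valid for 4 < log10(M*) < 9.4."""
    return -0.5 + 0.35 * np.log10(m_star / 1e8)

for m_star in (1e5, 1e7, 1e8, 1e9):
    print(f"M* = {m_star:.0e} Msun -> alpha = {dm_inner_slope(m_star):.2f}")
```

At M★ = 10^8 M⊙ the fit gives α = -0.5; toward 10^5 M⊙ it steepens to about -1.55, approaching the cuspy DM-only behaviour, consistent with the statement that outflows fail to flatten the profile in the smallest haloes.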
Thorsteinsson, T.; Mockford, T.; Bullard, J. E.
2015-12-01
Dust storms are the source of particulate matter in 20%-25% of the cases in which the PM10 health limit is exceeded in Reykjavik, which occurred approximately 20 times a year in 2005-2010. Some of the most active source areas for dust storms in Iceland, contributing to the particulate matter load in Reykjavik, are on the south coast of Iceland, with more than 20 dust storm days per year (in 2002-2011). Measurements of particulate matter concentration and size distribution were recorded at Markarfljot in May and June 2015. Markarfljot is a glacial river that is fed by Eyjafjallajokull and Myrdalsjokull, and the downstream sandur areas have been shown to be significant dust sources. Particulate matter concentration during dust storms was recorded on the sandur area using a TSI DustTrak DRX Aerosol Monitor 8533, and particle size data were recorded using a TSI Optical Particle Sizer 3330 (OPS). Wind speed was measured using cup anemometers at five heights. Particle size measured at the source area shows the creation of extremely fine dust, with PM1 concentration reaching over 5000 μg/m³ and accounting for most of the mass. This is potentially due to sand particles chipping during saltation instead of breaking uniformly. Dust events occurring during easterly winds were captured by two permanent PM10 aerosol monitoring stations in Reykjavik (140 km west of Markarfljot), suggesting the regional nature of these events. OPS measurements from Reykjavik also provide an interesting comparison of particle size distribution from source to city. Dust storms contribute to the particulate matter pollution in Reykjavik, and their small particle size, at least from this source area, might be a serious health concern.
Goñi, Miguel A.; O'Connor, Alison E.; Kuzyk, Zou Zou; Yunker, Mark B.; Gobeil, Charles; Macdonald, Robie W.
2013-09-01
As part of the International Polar Year research program, we conducted a survey of surface marine sediments from box cores along a section extending from the Bering Sea to Davis Strait via the Canadian Archipelago. We used bulk elemental and isotopic compositions, together with biomarkers and principal components analysis, to elucidate the distribution of marine and terrestrial organic matter in different regions of the North American Arctic margin. Marked regional contrasts were observed in organic carbon loadings, with the highest values (≥1 mg C m⁻² sediment) found in sites along Barrow Canyon and the Chukchi and Bering shelves, all of which were characterized by sediments with low oxygen exposure, as inferred from thin layers (cutin acids) all indicate marked regional differences in the proportions of marine and terrigenous organic matter present in surface sediments. Regions such as Barrow Canyon and the Mackenzie River shelf were characterized by the highest contributions of land-derived organic matter, with compositional characteristics that suggested distinct sources and provenance. In contrast, sediments from the Canadian Archipelago and Davis Strait had the smallest contributions of terrigenous organic matter and the lowest organic carbon loadings, indicative of a high degree of post-depositional oxidation.
Shimada, Mitsuhiro; Tagami, Shingo; Matsumoto, Takuma; Shimizu, Yoshifumi R; Yahiro, Masanobu
2016-01-01
We perform simultaneous analysis of (1) matter radii, (2) $B(E2; 0^+ \\rightarrow 2^+ )$ transition probabilities, and (3) excitation energies, $E(2^+)$ and $E(4^+)$, for $^{24-40}$Mg by using the beyond mean-field (BMF) framework with angular-momentum-projected configuration mixing with respect to the axially symmetric $\\beta_2$ deformation with infinitesimal cranking. The BMF calculations successfully reproduce all of the data for $r_{\\rm m}$, $B(E2)$, and $E(2^+)$ and $E(4^+)$, indicating that it is quite useful for data analysis, particularly for low-lying states. We also discuss the absolute value of the deformation parameter $\\beta_2$ deduced from measured values of $B(E2)$ and $r_{\\rm m}$. This framework makes it possible to investigate the effects of $\\beta_2$ deformation, the change in $\\beta_2$ due to restoration of rotational symmetry, $\\beta_2$ configuration mixing, and the inclusion of time-odd components by infinitesimal cranking. Under the assumption of axial deformation and parity conservation,...
Shimada, Mitsuhiro; Watanabe, Shin; Tagami, Shingo; Matsumoto, Takuma; Shimizu, Yoshifumi R.; Yahiro, Masanobu
2016-06-01
We perform simultaneous analysis of (1) matter radii, (2) B(E2; 0+ → 2+) transition probabilities, and (3) excitation energies, E(2+) and E(4+), for 24-40Mg by using the beyond-mean-field (BMF) framework with angular-momentum-projected configuration mixing with respect to the axially symmetric β2 deformation with infinitesimal cranking. The BMF calculations successfully reproduce all of the data for r_m, B(E2), and E(2+) and E(4+), indicating that it is quite useful for data analysis, particularly for low-lying states. We also discuss the absolute value of the deformation parameter β2 deduced from measured values of B(E2) and r_m. This framework makes it possible to investigate the effects of β2 deformation, the change in β2 due to restoration of rotational symmetry, β2 configuration mixing, and the inclusion of time-odd components by infinitesimal cranking. Under the assumption of axial deformation and parity conservation, we clarify which effect is important for each of the three measurements and propose the kinds of BMF calculations that are practical for each of the three kinds of observables.
Dark matter distribution in X-ray luminous galaxy clusters with Emergent Gravity
Ettori, S.; Ghirardini, V.; Eckert, D.; Dubath, F.; Pointecouteau, E.
2017-09-01
We present the radial distribution of the dark matter in two massive, X-ray luminous galaxy clusters, Abell 2142 and Abell 2319, and compare it with the quantity predicted as an apparent manifestation of the baryonic mass in the context of the 'Emergent Gravity' scenario recently suggested by Verlinde. Thanks to the observational strategy of the XMM-Newton Cluster Outskirt Programme (X-COP), using the X-ray emission mapped with XMM-Newton and the Sunyaev-Zel'dovich signal in the Planck survey, we recover the gas density, temperature and thermal pressure profiles up to ∼R200, allowing us to constrain the total mass at an unprecedented level through the hydrostatic equilibrium equation. We show that, even including systematic uncertainties related to the X-ray-based mass modelling, the apparent 'dark' matter shows a radial profile whose shape differs from the traditional dark matter distribution, with larger discrepancies (by a factor of 2-3) in the inner (r < 200 kpc) cluster regions and a remarkable agreement only at around R500.
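The hydrostatic-equilibrium mass estimate used above can be sketched for a toy cluster. Assuming spherical symmetry and an isothermal beta-model gas density (all profile parameters below are illustrative placeholders, not the X-COP measurements), the enclosed mass is M(<r) = -(kT r)/(G μ m_p) (dln n/dln r + dln T/dln r):

```python
import numpy as np

G = 6.674e-8     # gravitational constant, cgs (cm^3 g^-1 s^-2)
K_B = 1.381e-16  # Boltzmann constant, erg/K
M_P = 1.673e-24  # proton mass, g
MU = 0.6         # assumed mean molecular weight

def hse_mass(r_cm, n_gas, t_kelvin):
    """Hydrostatic-equilibrium mass profile from gas density and temperature:
    M(<r) = -(k T r)/(G mu m_p) * (dln n/dln r + dln T/dln r)."""
    lnr = np.log(r_cm)
    dln_n = np.gradient(np.log(n_gas), lnr)
    dln_t = np.gradient(np.log(t_kelvin), lnr)
    return -(K_B * t_kelvin * r_cm) / (G * MU * M_P) * (dln_n + dln_t)

# Toy isothermal beta-model: n ∝ (1 + (r/rc)^2)^(-3*beta/2), T constant.
r = np.logspace(22.5, 24.8, 200)   # ~0.01-2 Mpc in cm
rc, beta, t0 = 3e23, 0.7, 8e7      # assumed core radius, slope, temperature (K)
n = (1 + (r / rc) ** 2) ** (-1.5 * beta)
m = hse_mass(r, n, np.full_like(r, t0))
```

For these toy parameters the enclosed mass rises monotonically to a few × 10^48 g (∼10^15 M⊙) at ∼2 Mpc, the right order for a massive cluster; the real analysis additionally propagates measurement and modelling uncertainties in the density and temperature profiles.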
Baryonic and dark matter distribution in cosmological simulations of spiral galaxies
Mollitor, Pol; Teyssier, Romain
2014-01-01
We study three high-resolution cosmological hydrodynamical simulations of Milky Way-sized halos, including a comparison with the corresponding DM-only counterparts, performed with the adaptive mesh refinement code RAMSES. We analyse the stellar and gas distributions and find that one of our simulated galaxies exhibits interesting Milky Way-like features with regard to several observational tests. Thanks to consistently tuned star formation rate and supernova feedback, we manage to obtain an extended disk and a flat rotation curve, with a circular velocity and a dark matter density in the solar neighbourhood in agreement with observations. With a careful look at the derivation of the stellar-to-halo mass ratio, we also obtain competitive values for this criterion. Concerning the dark matter distribution, we explicitly show the interaction with the baryons and show how the dark matter is first contracted by star formation and then cored by feedback processes. Analysing the clump spectrum, we find a shift in mass with r...
On the core-halo distribution of dark matter in galaxies
Ruffini, Remo; Rueda, Jorge Armando
2014-01-01
We investigate the distribution of dark matter in galaxies by solving the equations of equilibrium of a self-gravitating system of massive fermions (`inos') at selected temperatures and degeneracy parameters within general relativity. The most general solutions present, as a function of the radius, a segregation of three physical regimes: 1) an inner core of almost constant density governed by degenerate quantum statistics; 2) an intermediate region with a sharply decreasing density distribution followed by an extended plateau, implying quantum corrections; 3) a decreasing density distribution $\\rho\\propto r^{-2}$ leading to flat rotation curves fulfilling the classical Boltzmann statistics. The mass of the inos is determined as an eigenfunction of the mass of the inner quantum cores. We compare and contrast this mass value with the lower limit on the particle mass by Tremaine and Gunn (1979), and show that the latter is approached for the less degenerate quantum cores in agreement with the fixed halo observa...
Bourgeois, Solveig; Kerhervé, Philippe; Calleja, Maria Ll.; Many, Gaël; Morata, Nathalie
2016-12-01
With climate change, the strong seasonality and tight pelagic-benthic coupling in the Arctic is expected to change in the next few decades. It is currently unclear how the benthos will be affected by changes of environmental conditions such as supplies of organic matter (OM) from the water column. In the last decade, Kongsfjorden (79°N), a high Arctic fjord in Svalbard influenced by several glaciers and Atlantic water inflow, has been a site of great interest owing to its high sensitivity to climate change, evidenced by a reduction in ice cover and an increase in melting freshwater. To investigate how spatial and seasonal changes in vertical fluxes can impact the benthic compartment of Kongsfjorden, we studied the organic matter characteristics (in terms of quantity and quality) and prokaryotic distribution in sediments from 3 stations along a transect extending from the glacier into the outer fjord in 4 different seasons (spring, summer, autumn and winter) in 2012-2013. The biochemical parameters used to describe the sedimentary organic matter were organic carbon (OC), total nitrogen, bulk stable isotope ratios, pigments (chlorophyll-a and phaeopigments) and biopolymeric carbon (BPC), which is the sum of the main macromolecules, i.e. lipids, proteins and carbohydrates. Prokaryotic abundance and distribution were estimated by 4′,6-diamidino-2-phenylindole (DAPI) staining. This study identifies a well-marked quantitative gradient of biogenic compounds throughout all seasons and also highlights a discrepancy between the quantity and quality of sedimentary organic matter within the fjord. The sediments near the glacier were organic-poor (Bacterial total cell numbers in sediments of Kongsfjorden were < 2 × 10^8 cells ml⁻¹ and the prokaryotic community structure was strongly influenced by the marked environmental biogenic gradients. Overall, spatial variability prevailed over seasonal variability in the sediments of Kongsfjorden, suggesting that glacier inputs
The distribution of dark and luminous matter inferred from extended rotation curves
Bottema, Roelof; Pestaña, José Luis G.
2015-04-01
A better understanding of the formation of mass structures in the Universe can be obtained by determining the amount and distribution of dark and luminous matter in spiral galaxies. To investigate such matters, a sample of 12 galaxies, most with accurate distances, has been composed, whose luminosities are distributed regularly over a range spanning two and a half orders of magnitude. Decompositions of the observed high-quality and extended rotation curves of these galaxies have been made for four different schemes, each with two free parameters. For a 'maximum disc fit' the rotation curves can be well matched, yet a large range of mass-to-light (M/L) ratios for the individual galaxies is required. For the alternative gravitational theory of MOND (Modified Newtonian Dynamics) the rotation curves can be explained if the fundamental parameter associated with MOND is allowed as a free parameter. Fixing that parameter leads to a disagreement between the predicted and observed rotation curves for a few galaxies. When cosmologically motivated NFW dark matter haloes are assumed, the rotation curves for the least massive galaxies can by no means be reproduced; cores are definitively preferred over cusps. Finally, decompositions have been made for a pseudo-isothermal halo combined with a universal M/L ratio. For the latter, the light of each galactic disc and bulge has been corrected for extinction and scaled by the effect of stellar population. This scheme can successfully explain the observed rotations and leads to submaximum disc mass contributions. Properties of the resulting dark matter haloes are described, and a ratio between dark and baryonic mass of ∼9 for the least, and ∼5 for the most, luminous galaxies has been determined at the outermost measured rotation.
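As an illustration of the MOND scheme mentioned above: with the commonly used 'simple' interpolating function μ(x) = x/(1+x), the MOND acceleration follows in closed form from the Newtonian one, a = (gN + sqrt(gN² + 4 gN a0))/2. The sketch below uses a point-mass baryonic distribution of assumed mass (not one of the paper's 12 galaxies, and real decompositions use the full disc/bulge/gas profiles) to show the resulting asymptotically flat rotation curve:

```python
import numpy as np

A0 = 1.2e-10            # MOND acceleration scale, m/s^2
G = 6.674e-11           # gravitational constant, SI
M_SUN = 1.989e30        # kg
KPC = 3.086e19          # m

def mond_velocity(r_m, m_baryon_kg):
    """Circular velocity under MOND with the 'simple' interpolating function
    mu(x) = x/(1+x): solving a*mu(a/a0) = gN gives
    a = (gN + sqrt(gN^2 + 4*gN*a0)) / 2 for a point-mass baryonic source."""
    g_n = G * m_baryon_kg / r_m ** 2
    a = 0.5 * (g_n + np.sqrt(g_n ** 2 + 4 * g_n * A0))
    return np.sqrt(a * r_m)

r = np.linspace(1, 30, 100) * KPC               # 1-30 kpc
v = mond_velocity(r, 5e10 * M_SUN) / 1e3        # km/s, assumed baryonic mass
```

At large radii the curve flattens toward the deep-MOND asymptote v_flat = (G M a0)^(1/4), which is the behaviour that lets MOND fit extended rotation curves without a dark halo.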
Amri, Chahrazade El; Maurel, Marie-Christine; Sagon, Gérard; Baron, Marie-Hélène
2005-07-01
The carbonaceous Murchison chondrite is one of the most studied meteorites. It is considered to be an astrobiology standard for detection of extraterrestrial organic matter. Considerable work has been done to resolve the elemental composition of this meteorite. Raman spectroscopy is a very suitable technique for non-destructive rapid in situ analyses to establish the spatial distribution of carbonaceous matter. This report demonstrates that Raman cartography at a resolution of 1 μm² can be performed. Two-dimensional distributions of graphitised carbon, amorphous carbonaceous matter and minerals were obtained on 100 μm² maps. Maps of the surface of native stones and of a powdered sample are compared. Graphitic and amorphous carbonaceous domains are found to be highly overlapping in all tested areas at the surface of the meteorite and in its interior as well. Pyroxene, olivine and iron oxide grains are embedded into this mixed carbonaceous material. The results show that every mineral grain with a size of less than a few μm² is encased in a thin carbonaceous matrix, which accounts for only 2.5 wt.%. This interstitial matter sticks together isolated mineral crystallites or concretions, including only very few individualized graphitised grains. Grinding separates the mineral particles but most of them retain their carbonaceous coating. This Raman study complements recent findings deduced from other spatial analyses performed by microprobe laser-desorption laser-ionisation mass spectrometry (μL²MS), transmission electron microscopy (TEM) and scanning transmission X-ray microscopy (STXM).
Baiamonte, Giorgio; Singh, Vijay P.
2016-04-01
extended to the case of pervious hillslopes, accounting for infiltration. In particular, an analytical solution for the time of concentration for overland flow on a rectangular plane surface was derived using the kinematic wave equation under the Green-Ampt infiltration (Baiamonte and Singh, 2015). The objective of this work is to apply the latter solution to determine the probability distribution of hillslope peak discharge by combining it with the familiar rainfall duration-intensity-frequency approach. References Agnese, C., Baiamonte, G., and Corrao, C. (2001). "A simple model of hillslope response for overland flow generation". Hydrol. Process., 15, 3225-3238, ISSN: 0885-6087, doi: 10.1002/hyp.182. Baiamonte, G., and Agnese, C. (2010). "An analytical solution of kinematic wave equations for overland flow under Green-Ampt infiltration". J. Agr. Eng., vol. 1, p. 41-49, ISSN: 1974-7071. Baiamonte, G., and Singh, V.P., (2015). "Analytical solution of kinematic wave time of concentration for overland flow under Green-Ampt Infiltration." J Hydrol E - ASCE, DOI: 10.1061/(ASCE)HE.1943-5584.0001266. Robinson, J.S., and Sivapalan, M. (1996). "Instantaneous response functions of overland flow and subsurface stormflow for catchment models". Hydrol. Process., 10, 845-862. Singh, V.P. (1976). "Derivation of time of concentration". J. of Hydrol., 30, 147-165. Singh, V.P., (1996). Kinematic-Wave Modeling in Water Resources: Surface-Water Hydrology. John Wiley & Sons, Inc., New York, 1399 pp.
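The combination of a time-of-concentration estimate with a rainfall intensity-duration-frequency (IDF) law to obtain a peak-discharge distribution can be sketched as below. The IDF coefficients, catchment area, runoff coefficient, and time of concentration are all hypothetical placeholders (a fixed tc stands in for the paper's Green-Ampt-based analytical solution, and a rational-method discharge stands in for the kinematic-wave result):

```python
import numpy as np

rng = np.random.default_rng(1)

def idf_intensity(t_return, duration, a=30.0, n=0.6):
    """Hypothetical IDF law: intensity i (mm/h) = a * T^0.25 / d^n,
    for return period T (years) and duration d (hours)."""
    return a * t_return ** 0.25 / duration ** n

def peak_discharge(i_mmh, area_km2=0.5, runoff_coeff=0.4):
    """Rational-method style peak discharge Q = C i A, converted to m^3/s."""
    return runoff_coeff * (i_mmh / 1000 / 3600) * (area_km2 * 1e6)

# Monte Carlo over return periods: T = 1/P_exceedance from uniform samples,
# evaluated at the critical duration = time of concentration.
u = rng.random(10_000)
t_return = 1.0 / (1.0 - u)
tc = 0.8                                   # assumed time of concentration (h)
q = peak_discharge(idf_intensity(t_return, tc))
q_quantiles = np.percentile(q, [50, 90, 99])
```

The empirical distribution of `q` is the hillslope peak-discharge distribution induced by the rainfall frequency law; in the paper the critical duration itself depends on infiltration via the Green-Ampt-based time of concentration rather than being fixed.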
Wei, Xiao-Rong; Shao, Ming-An
2009-11-01
Soil chemical properties play important roles in soil ecological functioning. In this study, 207 surface soil (0-20 cm) samples were collected from different representative landscape units in a gully watershed of the Loess Plateau to examine the distribution characteristics of soil pH, cation exchange capacity (CEC) and organic matter, and their relations to land use type, landform, and soil type. The soil pH, CEC and organic matter content ranged from 7.7 to 8.6, 11.9 to 28.7 cmol kg⁻¹, and 3.0 to 27.9 g kg⁻¹, and followed normal, log-normal, and negative binomial distributions, respectively. These three properties were significantly affected by land use type, landform, and soil type. Soil CEC and organic matter content were higher in forestland, grassland and farmland than in orchard land, and soil pH was lower in forestland than in the other three land use types. Soil pH, CEC and organic matter content were higher in plateau land and sloping land than in gully bottom and terrace land. Soil CEC and organic matter content were higher in dark loessial soil and rubified soil, while soil pH was higher in yellow loessial soil. Across all three landscape factors, soil CEC and organic matter content showed similar distribution patterns, whereas an opposite pattern was observed for soil pH.
Meshalkina, Joulia; Belousova, Nataliya; Vasenev, Ivan
2015-04-01
Boreal forest ecosystems play a key role in responses to Global Change challenges. Their soil carbon stocks are principal regulators of their environmental functions. Boreal forest soil cover is characterized by high spatial variability in soil organic matter content (SOMC), which needs to be taken into account when assessing the current and future state of its environmental functions, including the potential for regional soil organic matter stocks to change due to Global Change and vice versa. Knowledge of the regional regularities in the vertical distribution of SOMC in soil profiles allows improved prediction of soil environmental functions and land quality evaluation. More than 900 profiles of SOMC distribution were studied using the database Boreal, which contains data on Russian boreal soils developed under drained conditions on loamy soil-forming rocks. These soil profiles belong to seven main types of forest soils in the Russian classification and six major regions of Russia. An accumulation-type profile predominated in all cases. The vertical distribution of OMC in boreal soil profiles can thus be described as follows: a layer of maximum OMC is succeeded by a layer of sharp OMC reduction, and then a layer of minimal OMC extends down to 2.5 m. The layer of maximal OMC accumulation is shallow, 5-15 cm deep. It occurs in different genetic horizons: A1, A1A2, A2, B, AB; sometimes it includes the A2B horizon or the upper part of the illuvial horizon. The OMC in this layer increases from the northern taiga to the southern taiga and from the European part of Russia to Siberia. The second layer is characterized by its depth and by the gradient of OMC decrease, and both parameters vary greatly. The layer of sharp OMC decline most often coincides with the eluvial horizons A2 or А2В, or even the upper part of the Вt (textural) or Bm (metamorphic) horizons. The layer of permanently low OMC may begin in any genetic horizon
Quirós-Collazos, Lucía; Pedrosa-Pàmies, Rut; Sanchez-Vidal, Anna; Guillén, Jorge; Duran, Ruth; Cabelloa, Patricia
2017-04-01
Continental shelves are recognized to play a key role in the biogeochemical cycle of carbon, linking terrestrial and marine carbon reservoirs. In this study we investigate the physical and biogeochemical processes that control the source, transport and fate of organic carbon (OC) in the continental shelf off Barcelona city, in the NW Mediterranean Sea. Surface sediment samples were collected from depths of 10-40 m during late summer and autumn 2012. Grain size and biogeochemical parameters such as OC, its stable isotope δ13C, total nitrogen (TN) and OC/TN ratios were analysed in size-fractionated sediments. The influence of environmental factors over the study area was determined using hydrological and oceanographic time series, together with video images of the Barcelona coastline and nearshore region. We found a wide range of OC contents, from 0.13 to 8.68%, depending on water depth and sediment particle size. The highest OC concentration was always found in the clay fraction (63 μm) that contained terrestrial plant debris. Wave activity, discharge of the Besòs River and the 'Espigó de Ginebra' outfall were the main mechanisms controlling the sorting of sediments by grain size and thus the distribution of OC in the inner shelf off Barcelona. In addition, we observed that the organic matter in clay particles was progressively degraded seawards, probably because these particles remain suspended in the water column for much longer than heavier particles and are therefore exposed to oxygenated conditions for longer periods. Both the OC/TN ratios and δ13C values suggest that the preserved organic matter was predominantly land-supplied.
Hewson, Alex C; Bauer, Johannes
2010-03-24
We show that information on the probability density of local fluctuations can be obtained from a numerical renormalization group calculation of a reduced density matrix. We apply this approach to the Anderson-Holstein impurity model to calculate the ground state probability density ρ(x) for the displacement x of the local oscillator. From this density we can deduce an effective local potential for the oscillator and compare its form with that obtained from a semiclassical approximation as a function of the coupling strength. The method is extended to the infinite dimensional Holstein-Hubbard model using dynamical mean field theory. We use this approach to compare the probability densities for the displacement of the local oscillator in the normal, antiferromagnetic and charge ordered phases.
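The step from a ground-state probability density to an effective local potential can be illustrated by inverting the Schrödinger equation, V_eff(x) = E0 + ψ''(x)/(2ψ(x)) with ħ = m = 1 and ψ = sqrt(ρ). The sketch below uses a plain harmonic-oscillator density as input (not the Anderson-Holstein ρ(x) from the NRG reduced density matrix) and numerically recovers V_eff = x²/2:

```python
import numpy as np

# Ground-state probability density of a unit harmonic oscillator (hbar = m = omega = 1).
x = np.linspace(-4, 4, 801)
rho = np.exp(-x ** 2) / np.sqrt(np.pi)

# Invert the Schrodinger equation: V_eff(x) = E0 + psi''(x) / (2 * psi(x)),
# with psi = sqrt(rho) (valid for a node-free ground state) and E0 = 1/2.
psi = np.sqrt(rho)
d2psi = np.gradient(np.gradient(psi, x), x)
e0 = 0.5
v_eff = e0 + d2psi / (2 * psi)
```

Away from the grid edges, `v_eff` matches the parabola x²/2 to numerical accuracy; applied to a density obtained from a reduced density matrix, the same inversion yields the anharmonic effective potential induced by the electron-phonon coupling.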
Lawson, Michael; Polya, David A.; Boyce, Adrian J.; Bryant, Charlotte; Ballentine, Christopher J.
2016-04-01
Biogeochemical processes that utilize dissolved organic carbon are widely thought to be responsible for the liberation of arsenic from sediments to shallow groundwater in south and southeast Asia. The accumulation of this known carcinogen to hazardously high concentrations has occurred in the primary source of drinking water in large parts of densely populated countries in this region. Both surface and sedimentary sources of organic matter have been suggested to contribute dissolved organic carbon in these aquifers. However, identification of the source of organic carbon responsible for driving arsenic release remains enigmatic and even controversial. Here, we provide the most extensive interrogation to date of the isotopic signature of ground and surface waters at a known arsenic hotspot in Cambodia. We present tritium and radiocarbon data that demonstrate that recharge through ponds and/or clay windows can transport young, surface-derived organic matter into groundwater to depths of 44 m under natural flow conditions. Young organic matter dominates the dissolved organic carbon pool in groundwater that is in close proximity to these surface water sources, and we suggest this is likely a regional relationship. In locations distal to surface water contact, dissolved organic carbon represents a mixture of both young surface and older sedimentary derived organic matter. Ground-surface water interaction therefore strongly influences the average dissolved organic carbon age and how this is distributed spatially across the field site. Arsenic mobilization rates appear to be controlled by the age of dissolved organic matter present in these groundwaters. Arsenic concentrations in shallow groundwaters (20 m) groundwaters. We suggest that, while the rate of arsenic release is greatest in shallow aquifer sediments, arsenic release also occurs in deeper aquifer sediments and as such remains an important process in controlling the spatial distribution of arsenic in the
Freund, John E
1993-01-01
Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.
Berman, S. L.; Frey, K. E.; Shake, K. L.; Cooper, L. W.; Grebmeier, J. M.
2014-12-01
Dissolved organic matter (DOM) plays an important role in marine ecosystems as both a carbon source for the microbial food web (and thus a source of CO2 to the atmosphere) and as a light inhibitor in marine environments. The presence of chromophoric dissolved organic matter (CDOM; the optically active portion of total DOM) can have significant controlling effects on transmittance of sunlight through the water column and therefore on primary production as well as the heat balance of the upper ocean. However, CDOM is also susceptible to photochemical degradation, which decreases the flux of solar radiation that is absorbed. Knowledge of the current spatial and temporal distribution of CDOM in marine environments is thus critical for understanding how ongoing and future changes in climate may impact these biological, biogeochemical, and physical processes. We describe the quantity and quality of CDOM along five key productive transects across a developing Distributed Biological Observatory (DBO) in the Pacific Arctic region. The samples were collected onboard the CCGS Sir Wilfred Laurier in July 2013 and 2014. Monitoring of the variability of CDOM along transects of high productivity can provide important insights into biological and biogeochemical cycling across the region. Our analyses include overall concentrations of CDOM, as well as proxy information such as molecular weight, lability, and source (i.e., autochthonous vs. allochthonous) of organic matter. We utilize these field observations to compare with satellite-derived CDOM concentrations determined from the Aqua MODIS satellite platform, which ultimately provides a spatially and temporally continuous synoptic view of CDOM concentrations throughout the region. Examining the current relationships among CDOM, sea ice variability, biological productivity, and biogeochemical cycling in the Pacific Arctic region will likely provide key insights for how ecosystems throughout the region will respond in future
Numerical Simulation of NOx and Particular Matter Distribution from Urban Street in Beijing China
Liu Xiang
2016-01-01
In recent years, air quality has been a nation-wide issue in Beijing, China, with the frequent appearance of haze. Disappointingly, most detectors and sensors are mounted in suburban regions over ten kilometers away from the center of Beijing. Additionally, most research focuses on general air flow at large scale rather than on a specific community. It is important to be aware of the air quality of living communities in urban areas due to the large population. In this study, computational fluid dynamics (CFD) technologies were used to analyze the distribution of NOx and particulate matter (PM) from an urban street in Beijing to evaluate the air quality at a certain building. As most air pollution in urban areas is caused by vehicle emissions containing NOx and particulate matter, traffic emissions were considered the only source of contaminants in this study. The commercial software ANSYS Fluent® was used to simulate a number of dispersion scenarios under different boundary conditions to quantify the pollution level for the selected living environment in Beijing, China. Mass fractions, isosurfaces and streamlines of contaminant were presented to analyze the pollution distribution around the area
Testing the radial dependence of dark matter distribution in M33
Fune, Ernesto Lopez; Corbelli, Edvige
2016-01-01
The stellar and gaseous mass distributions, as well as the extended rotation curve in the nearby galaxy M33, are used to derive the radial distribution of dark matter density in the halo and to test cosmological models of galaxy formation and evolution. Two methods are examined to constrain dark mass density profiles. The first one deals directly with the fitting of the rotation curve data in the range of galacto-centric distances $0.24\,\text{kpc}\leq r\leq22.72\,\text{kpc}$. As found in a previous paper (Corbelli et al. 2014) and using the results of recent collisionless $\Lambda$-CDM numerical simulations, we confirm that the Navarro-Frenk-White (NFW) dark matter profile provides a better fit to the rotation curve data than the cored Burkert (URC) profile. The second method relies on the local equation of centrifugal equilibrium and on the rotation curve slope. In the aforementioned range of distances, we fit an empirical velocity profile using a function which has a rational dependence on the radius. Foll...
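The first method described above, fitting an NFW circular-velocity profile to rotation curve data, can be sketched as follows. The synthetic data, noise level, and parameter values here are illustrative assumptions, not the M33 measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

G = 4.30091e-6  # Newton's constant in kpc (km/s)^2 / Msun


def v_nfw(r, log10_rho_s, r_s):
    """NFW circular velocity (km/s) at radius r (kpc).

    rho_s is the characteristic density (Msun/kpc^3), r_s the scale radius.
    """
    rho_s = 10.0 ** log10_rho_s
    x = r / r_s
    # Mass enclosed within r for an NFW density profile
    m_enc = 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))
    return np.sqrt(G * m_enc / r)


# Synthetic "rotation curve" over the radial range quoted in the abstract
r = np.linspace(0.24, 22.72, 40)
rng = np.random.default_rng(0)
v_obs = v_nfw(r, 7.0, 12.0) + rng.normal(0.0, 2.0, r.size)  # 2 km/s scatter

popt, _ = curve_fit(v_nfw, r, v_obs, p0=[6.5, 8.0])
log10_rho_s_fit, r_s_fit = popt
```

With real data one would weight the fit by the measurement errors and compare the NFW goodness-of-fit against a cored (Burkert) alternative, as the paper does.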
Effects of Soil Moisture on Dynamic Distribution of Dry Matter Between Winter Wheat Root and Shoot
CHEN Xiao-yuan; LIU Xiao-ying; LUO Yuan-pei
2003-01-01
The dynamic relationship of dry matter accumulation and distribution between winter wheat root and shoot was studied under different soil water conditions. The dry matter accumulation in root was greatly influenced by water stress, such that the final root weight of the treatment with 40% field moisture capacity (FMC) was less than 1/4 of that of the treatment with 80% FMC on average. Water stress during the 3-leaf stage to the tillering stage had the greatest influence on root, and the influence of water stress during the jointing stage to the booting stage on shoot was greater than on root. However, water stress during the tillering stage to the booting stage had a balanced effect on root and shoot, and the proportion of dry matter distributed to root and shoot was almost the same after rewatering. Water recovery during the jointing stage to the booting stage could promote R/S, but the degree of increase was related to the duration of water limitation. Soil water condition had the lowest effect on R/S during the flowering stage to the filling stage and the maximal effect on R/S during the jointing stage to the heading stage; R/S of the 40% FMC treatment was 20.93% and 126.09% higher than that of the 60% FMC and 80% FMC treatments respectively at this period.
The distribution of dark and luminous matter inferred from extended rotation curves
Bottema, Roelof
2015-01-01
A better understanding of the formation of mass structures in the universe can be obtained by determining the amount and distribution of dark and luminous matter in spiral galaxies. To investigate such matters, a sample of 12 galaxies, most with accurate distances, has been composed, of which the luminosities are distributed regularly over a range spanning 2.5 orders of magnitude. Decompositions of the observed high-quality, extended rotation curves of these galaxies have been made for four different schemes, each with two free parameters. For a "maximum disc fit" the rotation curves can be well matched, yet a large range of mass-to-light ratios for the individual galaxies is required. For the alternative gravitational theory of MOND the rotation curves can be explained if the fundamental parameter associated with MOND is allowed as a free parameter. Fixing that parameter leads to a disagreement between the predicted and observed rotation curves for a few galaxies. When cosmologically motivated NFW dark mat...
Kang, Xi
2015-01-01
The distribution of galaxies displays anisotropy on different scales, which is often referred to as galaxy alignment. To understand the origin of galaxy alignments on small scales, one must investigate how galaxies were accreted in the early universe and quantify their primordial anisotropy at the time of accretion. In this paper we use N-body simulations to investigate the accretion of dark matter subhaloes, focusing on their alignment with the host halo shape and the orientation of the mass distribution on large scales, defined using the Hessian matrix of the density field. The large/small (e1/e3) eigenvalues of the Hessian matrix define the fast/slow collapse direction of dark matter on large scales. We find that: 1) the halo major axis is well aligned with the e3 (slow collapse) direction, and the alignment is stronger for massive haloes; 2) subhaloes are predominantly accreted along the major axis of the host halo, and the alignment increases with the host halo mass. Most importantly, this alignment is universal; 3) accret...
Nielsen, Bjørn Gilbert; Jensen, Morten Østergaard; Bohr, Henrik
2003-01-01
The structure of enkephalin, a small neuropeptide with five amino acids, has been simulated on computers using molecular dynamics. Such simulations exhibit a few stable conformations, which also have been identified experimentally. The simulations provide the possibility to perform cluster analysis in the space defined by potentially pharmacophoric measures such as dihedral angles, side-chain orientation, etc. By analyzing the statistics of the resulting clusters, the probability distribution of the side-chain conformations may be determined. These probabilities allow us to predict the selectivity...
Ezure, Hideo
1988-09-01
Effective combination of measured data with theoretical analysis has permitted deriving a method for more accurately estimating the power distribution in BWRs. Use is made of the least squares method for the combination between the relationship of the power distribution with measured values and the model used in FLARE or in the three-dimensional two-group diffusion code. Trial application of the new method to estimating the power distribution in JPDR-1 has proved the method to provide reliable results.
Kroupa, Pavel
2016-01-01
The spatial arrangement of galaxies (of satellites on a scale of 100 kpc), their three-dimensional distribution in galaxy groups such as the Local Group (on a scale of 1 Mpc), and the distribution of galaxies in the nearby volume of galaxies (on a scale of 8 Mpc) and in the nearby Universe (on a scale of 1 Gpc) are considered. There is further evidence that the CMB shows irregularities and that cosmic expansion is anisotropic. The overall impression one obtains, given the best data we have, is that matter is arranged in ways not expected in the dark-matter based standard model of cosmology (SMoC). There appears to be too much structure, regularity and organisation. Dynamical friction on the dark matter halos is a strong direct test for the presence of dark matter particles, but this process does not appear to be operative in the real Universe. This evidence suggests strongly that dynamically relevant dark matter does not exist and therefore cosmology remains largely not understood theoretically. More-accepted awareness...
Luis Vicente Chamorro Marcillllo
2013-06-01
Engineering, in both its academic and applied forms, as well as any formal research work, requires the use of statistics, and every inferential statistical analysis requires values of probability distribution functions that are generally available in tables. Management of those tables generally presents physical problems (wasteful transport and wasteful consultation) and operational problems (incomplete lists and limited accuracy). The study, "Probability distribution function values in mobile phones", permitted determining, through a needs survey applied to students involved in statistics studies at Universidad de Nariño, that the best known and most used values correspond to the Chi-Square, Binomial, Student's t, and Standard Normal distributions. Similarly, it showed users' interest in having the values in question within an alternative means to correct, at least in part, the problems presented by "the famous tables". To try to contribute to the solution, we built software that allows the values of the most commonly used probability distribution functions to be obtained immediately and dynamically on mobile phones.
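The four distributions named in the survey can be evaluated directly with a statistics library, which is essentially what such software replaces the printed tables with. Here `scipy.stats` is used as an assumed stand-in for the mobile implementation:

```python
from scipy import stats

# Values traditionally read from printed tables, computed directly instead.

# Standard normal: P(Z <= 1.96), the classic two-sided 5% table entry
p_norm = stats.norm.cdf(1.96)  # ~0.975

# Student's t critical value: t such that P(T <= t) = 0.975 with 10 d.o.f.
t_crit = stats.t.ppf(0.975, df=10)  # ~2.228

# Chi-square critical value at the 95th percentile with 5 d.o.f.
chi2_crit = stats.chi2.ppf(0.95, df=5)  # ~11.07

# Binomial: P(X = 3) for n = 10 trials with success probability 0.5
p_binom = stats.binom.pmf(3, n=10, p=0.5)  # 120/1024
```

The `cdf`/`ppf`/`pmf` calls give arbitrary arguments at full floating-point accuracy, avoiding the incomplete lists and limited precision of the tables the abstract criticizes.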
Łokas, Ewa L.; Mamon, Gary A.
2003-08-01
We study velocity moments of elliptical galaxies in the Coma cluster using Jeans equations. The dark matter distribution in the cluster is modelled by a generalized formula based upon the results of cosmological N-body simulations. Its inner slope (cuspy or flat), concentration and mass within the virial radius are kept as free parameters, as well as the velocity anisotropy, assumed independent of position. We show that the study of line-of-sight velocity dispersion alone does not allow us to constrain the parameters. By a joint analysis of the observed profiles of velocity dispersion and kurtosis, we are able to break the degeneracy between the mass distribution and velocity anisotropy. We determine the dark matter distribution at radial distances larger than 3 per cent of the virial radius and we find that the galaxy orbits are close to isotropic. Due to limited resolution, different inner slopes are found to be consistent with the data and we observe a strong degeneracy between the inner slope α and concentration c; the best-fitting profiles have the two parameters related with c = 19 - 9.6α. Our best-fitting Navarro-Frenk-White profile has concentration c = 9, which is 50 per cent higher than standard values found in cosmological simulations for objects of similar mass. The total mass within the virial radius of 2.9 h70^-1 Mpc is 1.4 × 10^15 h70^-1 M_solar (with 30 per cent accuracy), 85 per cent of which is dark. At this distance from the cluster centre, the mass-to-light ratio in the blue band is 351 h70 solar units. The total mass within the virial radius leads to estimates of the density parameter of the Universe, assuming that clusters trace the mass-to-light ratio and baryonic fraction of the Universe, with Ω0 = 0.29 ± 0.1.
Monte Carlo transition probabilities
Lucy, L. B.
2001-01-01
Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...
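The basic machinery of such a code, choosing the fate of an energy packet by sampling a discrete set of transition probabilities, can be sketched as follows. The channels and probabilities here are invented placeholders for illustration, not Lucy's actual transition rates:

```python
import numpy as np

# Hypothetical interaction channels for an energy packet and their
# probabilities (placeholders, not values from the paper).
channels = ["scatter", "absorb-reemit", "escape"]
probs = np.array([0.6, 0.3, 0.1])

rng = np.random.default_rng(42)
draws = rng.choice(len(channels), size=100_000, p=probs)

# Empirical channel frequencies converge to the transition probabilities,
# mirroring how the Monte Carlo estimate asymptotically recovers the
# local emissivity of a gas in statistical equilibrium.
freq = np.bincount(draws, minlength=len(channels)) / draws.size
```

In a real transfer code the probabilities would be recomputed per cell from the statistical-equilibrium populations rather than fixed up front.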
Samejima, Masaki; Negoro, Keisuke; Mitsukuni, Koshichiro; Akiyoshi, Masanori
We propose a method of finding business risk factors on qualitative and quantitative hybrid simulation in time series. Effect ratios of qualitative arcs in the hybrid simulation vary the output values of the simulation, so we define effect ratios causing risk as business risk factors. Finding business risk factors over the entire ranges of effect ratios is time-consuming. Because the probability distributions of effect ratios in the present time step are considered similar to those in the previous time step, the distributions in the present time step can be estimated. Our method effectively finds business risk factors in only the estimated ranges. Experimental results show that the precision and recall rates are 86%, and search time is reduced by at least 20%.
ZHENG Guizhen; JIANG Xiulan; HAN Shuzong
2004-01-01
The joint distribution of wave heights and periods of individual waves is usually approximated by the joint distribution of apparent wave heights and periods. However, there is a difference between them. This difference is addressed, and the theoretical joint distributions of apparent wave heights and periods due to Longuet-Higgins and Sun are modified to give more reasonable representations of the joint distribution of wave heights and periods of individual waves. The modification overcomes an inherent drawback of these joint PDFs, namely that the mean wave period is infinite. A comparison is made between the modified formulae and the field data of Goda, which shows that the new formulae agree with the measurements better than their original counterparts.
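For the marginal height distribution underlying such joint PDFs, the narrow-band model of Longuet-Higgins gives the well-known Rayleigh closed form for the exceedance probability. This sketch uses an illustrative Hrms value, not the Goda field data:

```python
import numpy as np


def rayleigh_exceedance(h, hrms):
    """P(H > h) for individual wave height H under the Rayleigh model,
    where hrms is the root-mean-square wave height."""
    return np.exp(-(h / hrms) ** 2)


hrms = 2.0  # metres (illustrative value)
# Probability of exceeding the significant wave height Hs ~ 1.416 * Hrms
p_exceed = rayleigh_exceedance(1.416 * hrms, hrms)  # ~0.135
```

The modified joint PDFs discussed in the abstract refine the period dependence while the height marginal stays close to this Rayleigh form.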
Bourgeois, Solveig
2016-08-23
With climate change, the strong seasonality and tight pelagic-benthic coupling in the Arctic is expected to change in the next few decades. It is currently unclear how the benthos will be affected by changes of environmental conditions such as supplies of organic matter (OM) from the water column. In the last decade, Kongsfjorden (79°N), a high Arctic fjord in Svalbard influenced by several glaciers and Atlantic water inflow, has been a site of great interest owing to its high sensitivity to climate change, evidenced by a reduction in ice cover and an increase in melting freshwater. To investigate how spatial and seasonal changes in vertical fluxes can impact the benthic compartment of Kongsfjorden, we studied the organic matter characteristics (in terms of quantity and quality) and prokaryotic distribution in sediments from 3 stations along a transect extending from the glacier into the outer fjord in 4 different seasons (spring, summer, autumn and winter) in 2012–2013. The biochemical parameters used to describe the sedimentary organic matter were organic carbon (OC), total nitrogen, bulk stable isotope ratios, pigments (chlorophyll-a and phaeopigments) and biopolymeric carbon (BPC), which is the sum of the main macromolecules, i.e. lipids, proteins and carbohydrates. Prokaryotic abundance and distribution were estimated by 4′,6-diamidino-2-phenylindole (DAPI) staining. This study identifies a well-marked quantitative gradient of biogenic compounds throughout all seasons and also highlights a discrepancy between the quantity and quality of sedimentary organic matter within the fjord. The sediments near the glacier were organic-poor (< 0.3% OC); however, the high primary productivity in the water column displayed during spring was reflected in summer sediments, which exhibited higher freshness of material at the inner station compared to the outer basin (means C-chlorophyll-a/OC ~ 5 and 1.5%, respectively). However, sediments at the glacier front were depleted
Chen, Ching-Tai; Peng, Hung-Pin; Jian, Jhih-Wei; Tsai, Keng-Chang; Chang, Jeng-Yih; Yang, Ei-Wen; Chen, Jun-Bo; Ho, Shinn-Ying; Hsu, Wen-Lian; Yang, An-Suei
2012-01-01
Protein-protein interactions are key to many biological processes. Computational methodologies devised to predict protein-protein interaction (PPI) sites on protein surfaces are important tools in providing insights into the biological functions of proteins and in developing therapeutics targeting the protein-protein interaction sites. One of the general features of PPI sites is that the core regions from the two interacting protein surfaces are complementary to each other, similar to the interior of proteins in packing density and in the physicochemical nature of the amino acid composition. In this work, we simulated the physicochemical complementarities by constructing three-dimensional probability density maps of non-covalent interacting atoms on the protein surfaces. The interacting probabilities were derived from the interior of known structures. Machine learning algorithms were applied to learn the characteristic patterns of the probability density maps specific to the PPI sites. The trained predictors for PPI sites were cross-validated with the training cases (consisting of 432 proteins) and were tested on an independent dataset (consisting of 142 proteins). The residue-based Matthews correlation coefficient for the independent test set was 0.423; the accuracy, precision, sensitivity, specificity were 0.753, 0.519, 0.677, and 0.779 respectively. The benchmark results indicate that the optimized machine learning models are among the best predictors in identifying PPI sites on protein surfaces. In particular, the PPI site prediction accuracy increases with increasing size of the PPI site and with increasing hydrophobicity in amino acid composition of the PPI interface; the core interface regions are more likely to be recognized with high prediction confidence. The results indicate that the physicochemical complementarity patterns on protein surfaces are important determinants in PPIs, and a substantial portion of the PPI sites can be predicted correctly with
Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard
1988-01-01
The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.
Xiao Bingjia; Shinichiro Kado; Shin Kajita; Daisuge Yamasaki; Satoru Tanaka
2005-01-01
A novel fitting procedure is proposed for a better determination of the H2 rovibrational distribution from Fulcher-α band spectroscopy. We have recalculated the transition probabilities, and the results show that they deviate from the Franck-Condon approximation, especially for the non-diagonal transitions. We also calculated the complete sets of vibrationally resolved cross-sections for the electron-impact d³Πu–X³Σg transition based on the semi-classical Gryzinski theory. An example of experimental study confirms that the current approach provides a tool for better diagnostics of the H2 rovibrational distribution in the electronic ground state.
The prolate shape of the galactic dark-matter halo
Helmi, A; Spooner, NJC; Kudryavtsev,
2005-01-01
Knowledge of the distribution of dark-matter in our Galaxy plays a crucial role in the interpretation of dark-matter detection experiments. I will argue here that probably the best way of constraining the properties of the dark-matter halo is through astrophysical observations. These provide
Wang, Xiao; Bian, Changwei; Bi, Rong; Jiang, Wensheng; Zhang, Hua; Zhang, Xueqing
2017-02-01
Laser in situ scattering and transmissometry (LISST) significantly improves our ability to assess particle size distribution (PSD) in seawater, while wide-ranging measurements of the organic-inorganic composition of suspended particulate matter (SPM) remain difficult using traditional methods such as microscopy. In this study, PSD properties and SPM compositions around the Bohai Strait (China) were investigated based on measurements by LISST in combination with hydro-biological parameters collected from a field survey in summer 2014. Four typical PSD shapes were found in the region, namely right-peak, left-peak, double-peak and negative-skew shapes. The double-peak and negative-skew shapes may interconvert into each other under strong hydrodynamic variation. In the upper layer of the Bohai Sea, organic particles were in the majority, with inorganic particles rarely observed. In the bottom layer, SPM was a mixture of organic and inorganic matter. LISST provided valuable baseline information on size-resolved organic-inorganic composition of SPM: the size of organic particles mainly ranged from 4 to 20 μm and 40 to 100 μm, while most SPM ranging from 20 to 40 μm was composed of inorganic sediment.
Ali, Mohamed Yasreen Mohamed; Hanafiah, Marlia Mohd; Latif, Mohd Talib
2016-11-01
This study analyses the composition and distribution of particulate matter (PM10) in the Biology department building at UKM. PM10 was collected using a SENSIDYNE Gillian GilAir-5 Personal Air Sampling System, a low-volume sampler, whereas the concentration of heavy metals was determined using inductively coupled plasma-mass spectrometry (ICP-MS). The concentration of PM10 recorded in the mechanically ventilated building ranges from 89 µg m-3 to 910 µg m-3. The composition of the selected heavy metals in PM10 was dominated by zinc, followed by copper, lead and cadmium. It was found that the indoor-related particulate matter originated from the poorly maintained ventilation system, the activity of occupants and typical office equipment such as printers and photocopy machines. The haze event that occurred during the sampling periods also affected the PM10 concentration in the building. These results can serve as a starting point for assessing potential human health damage using life cycle impact assessment, expressed in terms of disability adjusted life years (DALY).
Ganeshan, Balaji [University of Sussex, Falmer, Clinical Imaging Sciences Centre, Brighton and Sussex Medical School, Brighton (United Kingdom); University of Sussex, Falmer, Department of Engineering and Design, Brighton (United Kingdom); Miles, Kenneth A.; Critchley, Hugo D. [University of Sussex, Falmer, Clinical Imaging Sciences Centre, Brighton and Sussex Medical School, Brighton (United Kingdom); Young, Rupert C.D.; Chatwin, Christopher R. [University of Sussex, Falmer, Department of Engineering and Design, Brighton (United Kingdom); Gurling, Hugh M.D. [University College London, Department of Mental Health Sciences, London (United Kingdom)
2010-04-15
Three-dimensional (3-D) selective- and relative-scale texture analysis (TA) was applied to structural magnetic resonance (MR) brain images to quantify the presence of grey-matter (GM) and white-matter (WM) textural abnormalities associated with schizophrenia. Brain TA comprised volume filtration using the Laplacian of Gaussian filter to highlight fine, medium and coarse textures within GM and WM, followed by texture quantification. Relative TA (e.g. ratio of fine to medium) was also computed. T1-weighted MR whole-brain images from 32 participants with diagnosis of schizophrenia (n = 10) and healthy controls (n = 22) were examined. Five patients possessed marker alleles (SZ8) associated with schizophrenia on chromosome 8 in the pericentriolar material 1 gene while the remaining five had not inherited any of the alleles (SZ0). Filtered fine GM texture (mean grey-level intensity; MGI) most significantly differentiated schizophrenic patients from controls (P = 0.0058; area under the receiver-operating characteristic curve = 0.809, sensitivity = 90%, specificity = 70%). WM measurements did not distinguish the two groups. Filtered GM and WM textures (MGI) correlated with total GM and WM volume respectively. Medium-to-coarse GM entropy distinguished SZ0 from controls (P = 0.0069) while measures from SZ8 were intermediate between the two. 3-D TA of brain MR enables detection of subtle distributed morphological features associated with schizophrenia, determined partly by susceptibility genes. (orig.)
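The volume-filtration step described above can be sketched with a Laplacian-of-Gaussian filter at three scales followed by a mean-intensity texture measure. The sigma values and the random stand-in volume are assumptions for illustration, not the study's actual filter settings or data:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(1)
volume = rng.random((64, 64, 64))  # stand-in for a T1-weighted MR volume

# Highlight fine, medium and coarse texture via LoG filtration at
# increasing sigma, then quantify each scale by the mean grey-level
# intensity (MGI) of the filtered volume.
mgi = {}
for name, sigma in [("fine", 1.0), ("medium", 2.0), ("coarse", 4.0)]:
    filtered = gaussian_laplace(volume, sigma=sigma)
    mgi[name] = float(np.abs(filtered).mean())

# Relative-scale texture, e.g. a fine-to-medium ratio of the kind the
# study computes alongside the selective-scale measures
fine_to_medium = mgi["fine"] / mgi["medium"]
```

On a noise volume like this one, the fine-scale response dominates; on real GM/WM segmentations the interesting signal is how these per-scale statistics differ between groups.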
Concentration and Size Distribution of Particulate Matter in a Broiler House Ambient Air
Ismael Rodrigues Amador
2016-07-01
Atmospheric particulate matter (PM) is an important constituent of ambient air. The determination of its concentration and size distribution in different environments is essential because of its ability to penetrate deeply into the animal and human respiratory tract. In this study, air sampling was performed in a broiler house to estimate the concentration and size distribution of PM emitted along with its activities. A low-vol impactor (< 10 µm), cyclones (< 2.5 and < 1.0 µm), and a Sioutas cascade impactor (> 2.5; 1.0–2.5; 0.50–1.0; 0.25–0.50; < 0.25 µm) connected to membrane pumps were used. PM10 showed high concentrations (209–533 µg m-3). PM2.5 and PM1.0 initially showed relatively low concentrations (20.8 and 16.0 µg m-3 respectively), with significantly increasing levels (412.9 and 344.8 µg m-3 respectively) during the samplings. It was also possible to observe the contribution of fine particles, as evidenced by the high correlation between PM2.5 and PM1.0 and by the profile of particle distribution in the Sioutas sampler. PM concentration levels are considered excessively high, with great potential to affect animal and human health. DOI: http://dx.doi.org/10.17807/orbital.v8i3.847
Richards, Emily E; Barnes, K L; Staudaher, S; Dale, D A; Braun, T T; Wavle, D C; Dalcanton, J J; Bullock, J S; Chandar, R
2016-01-01
We present a combination of new and archival neutral hydrogen (HI) observations and new ionized gas spectroscopic observations for sixteen galaxies in the statistically representative EDGES kinematic sample. HI rotation curves are derived from new and archival radio synthesis observations from the Very Large Array (VLA) as well as processed data products from the Westerbork Radio Synthesis Telescope (WSRT). The HI rotation curves are supplemented with optical spectroscopic integral field unit (IFU) observations using SparsePak on the WIYN 3.5 m telescope to constrain the central ionized gas kinematics in twelve galaxies. The full rotation curves of each galaxy are decomposed into baryonic and dark matter halo components using 3.6$\\mu$m images from the Spitzer Space Telescope for the stellar content, the neutral hydrogen data for the atomic gas component, and, when available, CO data from the literature for the molecular gas component. Differences in the inferred distribution of mass are illustrated under fixe...
The matter-energy intensity distribution in a quantum gravitational system
,
2016-01-01
In the framework of the method of constraint system quantization, a quantum gravitational system (QGS) with the maximally symmetric geometry is studied. The state vector of the QGS satisfies the set of wave equations which describes the time evolution of a quantum system in the space of quantum fields. It is shown that this state vector can be normalized to unity. The generalization of the wave equations to the domain of negative values of the cosmic scale factor is made. For the arrow of time from past to future, the state vector describes the QGS contracting for the negative values of the scale factor and expanding for its positive values. The intensity distributions of matter are calculated for two exactly solvable models of spatially closed and flat QGSs formed by dust and radiation. The analogies with the motion in time of minimum wave packet for spatially closed QGS and with the phenomenon of diffraction in optics for flat QGS are drawn.
Analysis of the Very Inner Milky Way Dark Matter Distribution and Gamma-Ray Signals
Gammaldi, V; Valenzuela, O; Gonzales-Morales, A X
2016-01-01
We study the Dark Matter (DM) distribution in the inner Galactic Center region (< 100 pc) considering the extrapolation of a compilation of density profiles appearing in state-of-the-art N-body + hydrodynamics simulations of Milky Way-like galaxies. We consider (i) the DM spike induced by the presence of the supermassive black hole and (ii) the effects of scattering of the DM particles in the spike off bulge stars. Some of these cases can provide the flux enhancement required to explain the cut-off in the HESS J1745-290 gamma-ray spectrum as TeV DM, as well as the spatial tail reported by HESS II at angular scales < 0.54 degree towards Sgr A*.
McDonald, John
2015-01-01
Warm dark matter (WDM) of order keV mass may be able to resolve the disagreement between structure formation in cold dark matter simulations and observations. The detailed properties of WDM will depend upon its energy distribution, in particular how it deviates from the thermal distribution usually assumed in WDM simulations. Here we focus on WDM production via the Ultra-Violet (UV) freeze-in mechanism, for the case of fermionic Higgs portal dark matter $\psi$ produced via the portal interaction $\overline{\psi}\psi H^{\dagger}H/\Lambda$. We show that the reheating temperature must satisfy $T_{R} \gtrsim 0.3 $ TeV in order to account for the observed dark matter density when $m_{\psi} \approx 2 $ keV, where the lower bound on $T_{R}$ corresponds to the limit where the fermion mass is entirely due to electroweak symmetry breaking via the portal interaction. The corresponding bound on the interaction scale is $\Lambda \gtrsim 1.5 \times 10^{10}$ GeV. We introduce a new method to simplify the computation of the non-...
Miao, Yan-Gang [Nankai University, School of Physics, Tianjin (China); Chinese Academy of Sciences, State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, P.O. Box 2735, Beijing (China); CERN, PH-TH Division, Geneva 23 (Switzerland); Xu, Zhen-Ming [Nankai University, School of Physics, Tianjin (China)
2016-04-15
Considering non-Gaussian smeared matter distributions, we investigate the thermodynamic behaviors of the noncommutative high-dimensional Schwarzschild-Tangherlini anti-de Sitter black hole, and we obtain the condition for the existence of extreme black holes. We indicate that the Gaussian smeared matter distribution, which is a special case of non-Gaussian smeared matter distributions, is not applicable for the six- and higher-dimensional black holes due to the hoop conjecture. In particular, the phase transition is analyzed in detail. Moreover, we point out that the Maxwell equal area law holds for the noncommutative black hole whose Hawking temperature is within a specific range, but fails for one whose Hawking temperature is beyond this range. (orig.)
Miao, Yan-Gang
2016-01-01
Considering non-Gaussian smeared matter distributions, we investigate thermodynamic behaviors of the noncommutative high-dimensional Schwarzschild-Tangherlini anti-de Sitter black hole, and obtain the condition for the existence of extreme black holes. We indicate that the Gaussian smeared matter distribution, which is a special case of non-Gaussian smeared matter distributions, is not applicable for the 6- and higher-dimensional black holes due to the hoop conjecture. In particular, the phase transition is analyzed in detail. Moreover, we point out that the Maxwell equal area law holds for the noncommutative black hole whose Hawking temperature is within a specific range, but fails when the Hawking temperature is beyond this range.
M.A. Ebadian, Ph.D.; S.K. Dua, Ph.D., C.H.P.; Hillol Guha, Ph.D.
2001-01-01
During deactivation and decommissioning activities, thermal cutting tools, such as plasma torch, laser, and gasoline torch, are used to cut metals. These activities generate fumes, smoke and particulates. These airborne species of matter, called aerosols, may be inhaled if suitable respiratory protection is not used. Inhalation of the airborne metallic aerosols has been reported to cause ill health effects, such as acute respiratory syndrome and chromosome damage in lymphocytes. In the nuclear industry, metals may be contaminated with radioactive materials. Cutting these metals, as in size reduction of gloveboxes and tanks, produces high concentrations of airborne transuranic particles. Particles of the respirable size range (size < 10 µm) deposit in various compartments of the respiratory tract, the fraction and the site in the respiratory tract depending on the size of the particles. The dose delivered to the respiratory tract depends on the size distribution of the airborne particulates (aerosols) and their concentration and radioactivity/toxicity. The concentration of airborne particulate matter in an environment is dependent upon the rate of their production and the ventilation rate. Thus, measuring aerosol size distribution and generation rate is important for (1) the assessment of inhalation exposures of workers, (2) the selection of respiratory protection equipment, and (3) the design of appropriate filtration systems. Size distribution of the aerosols generated during cutting of different metals by plasma torch was measured. Cutting rates of different metals, rate of generation of respirable mass, as well as the fraction of the released kerf that becomes respirable were determined. This report presents results of these studies. Measurements of the particles generated during cutting of metal plates with a plasma arc torch revealed the presence of particles with mass median aerodynamic diameters close to 0.2 µm, arising from
Leue, Anja; Cano Rodilla, Carmen; Beauducel, André
2015-01-01
Individuals typically evaluate whether their performance and the obtained feedback match. Previous research has shown that feedback negativity (FN) depends on outcome probability and feedback valence. It is, however, less clear to what extent previous effects of outcome probability on FN depend on self-evaluations of response correctness. Therefore, we investigated the effects of outcome probability on FN amplitude in a simple go/no-go task that allowed for the self-evaluation of response correctness. We also investigated effects of performance incompatibility and feedback valence. In a sample of N = 22 participants, outcome probability was manipulated by means of precues, feedback valence by means of monetary feedback, and performance incompatibility by means of feedback that induced a match versus mismatch with individuals' performance. We found that the 100% outcome probability condition induced a more negative FN following no-loss than the 50% outcome probability condition. The FN following loss was more negative in the 50% compared to the 100% outcome probability condition. Performance-incompatible loss resulted in a more negative FN than performance-compatible loss. Our results indicate that the self-evaluation of the correctness of responses should be taken into account when the effects of outcome probability and expectation mismatch on FN are investigated. PMID:26783525
Bhattacharyya, Pratip; Chakrabarti, Bikas K.
2008-01-01
We study different ways of determining the mean distance (r_n) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
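The quantity ⟨r_n⟩ described above is easy to estimate numerically. Below is a minimal Monte Carlo sketch (not taken from the paper; the function name, box-size heuristic, and conventions are ours) for points of uniform density in D dimensions:

```python
import random

def mean_nth_neighbour(n, D, density=1.0, trials=2000, seed=1):
    """Monte Carlo estimate of <r_n>: the mean distance from the origin to
    its n-th nearest neighbour among points of uniform density in D
    dimensions. Points fill a box chosen (heuristically) large enough that
    the n-th neighbour lies well inside it."""
    rng = random.Random(seed)
    half = 10.0 * (n / density) ** (1.0 / D)          # box half-width
    npts = max(n, round(density * (2 * half) ** D))   # points at given density
    total = 0.0
    for _ in range(trials):
        dists = sorted(
            sum(rng.uniform(-half, half) ** 2 for _ in range(D)) ** 0.5
            for _ in range(npts)
        )
        total += dists[n - 1]                         # n-th smallest distance
    return total / trials

# In 1D with unit density, the nearest-neighbour distance is close to
# exponential with mean 1/(2*density), so <r_1> should come out near 0.5.
est = mean_nth_neighbour(1, 1)
```

The same routine can be compared against the exact and asymptotic formulas derived in the paper for other n and D.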
J.W. Love
2017-04-01
Where FEC data were obtained with less sensitive counting techniques (i.e. McMaster 30 or 15 epg), zero-inflated distributions and their associated central tendency were the most appropriate and would be recommended for use, i.e. the arithmetic group mean divided by the proportion of non-zero counts present; otherwise apparent anthelmintic efficacy could be misrepresented.
Nolen, Matthew S.; Magoulick, Daniel D.; DiStefano, Robert J.; Imhoff, Emily M.; Wagner, Brian K.
2014-01-01
Crayfishes and other freshwater aquatic fauna are particularly at risk globally due to anthropogenic demand, manipulation and exploitation of freshwater resources, and yet are often understudied. The Ozark faunal region of Missouri and Arkansas harbours a high level of aquatic biological diversity, especially in regard to endemic crayfishes. Three such endemics, Orconectes eupunctus, Orconectes marchandi and Cambarus hubbsi, are threatened by limited natural distribution and the invasions of Orconectes neglectus.
Turban, L
2016-01-01
The probability distribution of the number $s$ of distinct sites visited up to time $t$ by a random walk on the fully-connected lattice with $N$ sites is first obtained by solving the eigenvalue problem associated with the discrete master equation. Then, using generating function techniques, we compute the joint probability distribution of $s$ and $r$, where $r$ is the number of sites visited only once up to time $t$. Mean values, variances and covariance are deduced from the generating functions and their finite-size-scaling behaviour is studied. Introducing properly centered and scaled variables $u$ and $v$ for $r$ and $s$ and working in the scaling limit ($t\\to\\infty$, $N\\to\\infty$ with $w=t/N$ fixed) the joint probability density of $u$ and $v$ is shown to be a bivariate Gaussian density. It follows that the fluctuations of $r$ and $s$ around their mean values in a finite-size system are Gaussian in the scaling limit. The same type of finite-size scaling is expected to hold on periodic lattices above the ...
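The two random variables studied in the abstract, s (distinct sites visited) and r (sites visited exactly once), are straightforward to sample by simulation. Here is a minimal sketch (our own naming and conventions; in particular we let the walker jump to any site, possibly its current one, and count the starting site as visited):

```python
import random
from collections import Counter

def walk_visits(N, t, seed=0):
    """Simulate t steps of a random walk on the fully connected lattice with
    N sites and return (s, r): the number of distinct sites visited and the
    number of sites visited exactly once up to time t."""
    rng = random.Random(seed)
    counts = Counter({0: 1})            # start at site 0, counted as visited
    for _ in range(t):
        counts[rng.randrange(N)] += 1   # jump to a uniformly random site
    s = len(counts)
    r = sum(1 for c in counts.values() if c == 1)
    return s, r
```

Averaging (s, r) over many seeds with w = t/N fixed would give an empirical check of the bivariate Gaussian scaling limit derived in the paper.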
The Characters of the Probability Maximum Value for Negative Binomial Distribution
丁勇
2016-01-01
The properties of the probability maximum value of the negative binomial distribution are explored. This maximum is a function of p and r, where p is the probability of success in each trial and r is the required number of successes. For fixed r, it is a monotonically increasing continuous function of p whose derivative fails to exist only when (r-1)/p is an integer; for fixed p, it is a monotonically decreasing function of r.
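The claimed monotonicity in p and r can be checked numerically. The sketch below (our own naming; it uses the convention that the negative binomial counts the number of trials needed for the r-th success) scans the pmf for its maximum:

```python
from math import comb

def nb_pmf(k, r, p):
    """P(exactly k trials are needed for the r-th success), k >= r."""
    return comb(k - 1, r - 1) * p ** r * (1 - p) ** (k - r)

def nb_max(r, p, kmax=2000):
    """The maximum value of the negative binomial pmf over k."""
    return max(nb_pmf(k, r, p) for k in range(r, kmax + 1))

# Monotonically increasing in p for fixed r:
vals_p = [nb_max(3, p) for p in (0.2, 0.4, 0.6, 0.8)]
assert all(a < b for a, b in zip(vals_p, vals_p[1:]))

# Monotonically decreasing in r for fixed p:
vals_r = [nb_max(r, 0.5) for r in (1, 2, 3, 4)]
assert all(a > b for a, b in zip(vals_r, vals_r[1:]))
```

For r = 1 this reduces to the geometric distribution, whose maximum is simply p, attained at k = 1.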
The distribution and characteristics of suspended particulate matter in the Chukchi Sea
WANG Weiguo; FANG Jianyong; CHEN Lili; WU Risheng; YU Xingguang
2014-01-01
Samples taken from the Chukchi Sea (CS) during the 4th Chinese National Arctic Research Expedition, 2010, were analyzed to determine the content and composition of suspended particulate matter (SPM) to improve our understanding of the distribution, sources and control factors of the SPM there. The results show that the SPM in the water column is highest in the middle and near the bottom in the south and central-north CS, followed by that off the Alaskan coast and in Barrow Canyon. The SPM content is lowest in the central CS. Scanning electron microscope (SEM) analysis shows that the SPM in the south and central-north CS is composed mainly of diatoms, but the dominant species in those two areas are different. The SPM off the Alaskan coast and in Barrow Canyon is composed mainly of terrigenous material with few bio-skeletal clasts. The distribution of temperature and salinity and the correlation between diatom species in SPM indicate that the diatom dominant SPM in the south CS is from the Pacific Ocean via the Bering Strait in summer. The diatom dominant SPM in the central-north CS is also from Pacific water, which reaches the CS in winter. The SPM in the middle and near the bottom of the water column off the Alaskan coast and in Barrow Canyon is from Alaskan coastal water and terrigenous material transported by rivers in Alaska.
Natálio, Luís F.; Pardo, Juan C. F.; Machado, Glauco B. O.; Fortuna, Monique D.; Gallo, Deborah G.; Costa, Tânia M.
2017-01-01
Bioturbators play a key role in estuarine environments by modifying the availability of soil elements, which in turn may affect other organisms. Despite the importance of bioturbators, few studies have combined both field and laboratory experiments to explore the effects of bioturbators on estuarine soils. Herein, we assessed the bioturbation potential of fiddler crabs Leptuca leptodactyla and Leptuca uruguayensis in laboratory and field experiments, respectively. We evaluated whether the presence of fiddler crabs resulted in vertical transport of sediment, thereby altering organic matter (OM) distribution. Under laboratory conditions, the burrowing activity of L. leptodactyla increased the OM content in the sediment surface. In the long-term field experiment with areas of inclusion and exclusion of L. uruguayensis, we did not observe any influence of this fiddler crab on the vertical distribution of OM. Based on our results, we suggest that small fiddler crabs, such as the species used in these experiments, are potentially capable of altering their environment by transporting sediment and OM, but such effects may be masked by environmental drivers and spatial heterogeneity under natural conditions. This phenomenon may be related to the small size of these species, which affects how much sediment is transported, along with the way OM interacts with biogeochemical and physical processes. Therefore, the net effect of these burrowing organisms is likely to be the result of a complex interaction with other environmental factors. In this sense, we highlight the importance of performing simultaneous field and laboratory experiments in order to better understand the role of burrowing animals as bioturbators.
Sources and distribution of sedimentary organic matter along the Andong salt marsh, Hangzhou Bay
Yuan, Hong-Wei; Chen, Jian-Fang; Ye, Ying; Lou, Zhang-Hua; Jin, Ai-Min; Chen, Xue-Gang; Jiang, Zong-Pei; Lin, Yu-Shih; Chen, Chen-Tung Arthur; Loh, Pei Sun
2017-10-01
Lignin oxidation products, δ13C values, C/N ratios and particle size were used to investigate the sources, distribution and chemical stability of sedimentary organic matter (OM) along the Andong salt marsh located in the southwestern end of Hangzhou Bay, China. Terrestrial OM was highest at the upper marshes and decreased closer to the sea, and the distribution of sedimentary total organic carbon (TOC) was influenced mostly by particle size. Terrestrial OM with a C3 signature was the predominant source of sedimentary OM in the Spartina alterniflora-dominated salt marsh system. This means that aside from contributions from the local marsh plants, the Andong salt marsh received input mostly from the Qiantang River and the Changjiang Estuary. Transect C, which was situated nearer to the Qiantang River mouth, was most likely influenced by input from the Qiantang River. Likewise, a nearby creek could be transporting materials from Hangzhou Bay into Transect A (farther east than Transect C), as Transect A showed a signal resembling that of the Changjiang Estuary. The predominance of terrestrial OM in the Andong salt marsh despite overall reductions in sedimentary and terrestrial OM input from the rivers is most likely due to increased contributions of sedimentary and terrestrial OM from erosion. This study shows that lower salt marsh accretion due to the presence of reservoirs upstream may be counterbalanced by increased erosion from the surrounding coastal areas.
Dark and visible matter distribution in Coma cluster: theory vs observations
Brilenkov, Ruslan; Zhuk, Alexander
2015-01-01
We investigate dark and visible matter distribution in the Coma cluster in the case of the Navarro-Frenk-White (NFW) profile. A toy model where all galaxies in the cluster are concentrated inside a sphere of an effective radius $R_{eff}$ is considered. It enables us to obtain the mean velocity dispersion as a function of $R_{eff}$. We show that, within the observation accuracy of the NFW parameters, the calculated value of $R_{eff}$ can be rather close to the observable cutoff of the galaxy distribution. Moreover, the comparison of our toy model with the observable data and simulations leads to the following preferable NFW parameters for the Coma cluster: $R_{200} \\approx 1.77\,h^{-1} \, \mathrm{Mpc} = 2.61\, \mathrm{Mpc}$, $c=3\div 4$ and $M_{200}= 1.29 h^{-1}\times10^{15}M_{\odot}$. In the Coma cluster, most of the galaxies are concentrated inside a sphere of effective radius $R_{eff}\sim 3.7$ Mpc and the line-of-sight velocity dispersion is $1004\, \mathrm{km}\, \mathrm{s}^{-1}$.
Seasonal and spatial distribution of particulate organic matter in the Bay of Bengal
Fernandes, L.; Bhosle, N.B.; Matondkar, S.G.P.; Bhushan, R.
The temporal, spatial and depth related variation of suspended particulate organic matter (POM) in the Bay of Bengal are assessed in this paper. For this purpose, suspended particulate matter (SPM) samples were collected from eight depths (2 to 1000...
Dimitrov, Nikolay Krasimirov
2016-01-01
We have tested the performance of statistical extrapolation methods in predicting the extreme response of a multi-megawatt wind turbine generator. We have applied the peaks-over-threshold, block maxima and average conditional exceedance rates (ACER) methods for peaks extraction, combined with four...... levels, based on the assumption that the response tail is asymptotically Gumbel distributed. Example analyses were carried out, aimed at comparing the different methods, analysing the statistical uncertainties and identifying the factors, which are critical to the accuracy and reliability...
Concepts of probability theory
Pfeiffer, Paul E
1979-01-01
Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or mathematics. Includes problems with answers and six appendixes. 1965 edition.
潘晓春
2012-01-01
It is necessary to describe the statistical properties of wind speed using a three-parameter Weibull distribution for offshore wind energy resource assessment and utilization. Based on the functional relation between the parameters and the probability-weighted moments (PWM), the relation between the shape parameter and the PWM is fitted with a logistic curve, and two parameter-estimation formulae are derived based on low-order insufficient and exceeding PWM. Accuracy tests show that these formulae have high precision over a large range. Through comparative analysis with a high-order PWM method on an example, the author believes the low-order PWM methods in this paper are worth popularizing.
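For readers unfamiliar with probability-weighted moments, the standard unbiased order-statistics estimator of β_s = E[X·F(X)^s] can be sketched as follows (a generic PWM estimator, not the paper's specific logistic fit; the function name is ours):

```python
from math import comb

def sample_pwms(data, smax=2):
    """Unbiased order-statistics estimates of the probability-weighted
    moments beta_s = E[X * F(X)**s] for s = 0..smax:
        b_s = (1/n) * sum_i C(i, s) / C(n-1, s) * x_(i),
    with x_(i) the sorted sample, i = 0..n-1.  b_0 is the sample mean."""
    x = sorted(data)
    n = len(x)
    return [
        sum(comb(i, s) * xi for i, xi in enumerate(x)) / (n * comb(n - 1, s))
        for s in range(smax + 1)
    ]
```

Weibull parameter estimation then inverts the relation between the parameters and these moments, which the paper approximates with a logistic curve for the shape parameter.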
The dark matter distribution of merging galaxy cluster PLCKG287.0+32.9 by weak lensing
Finner, Kyle; Jee, James; Dawson, William; Golovich, Nathan; Gruen, Daniel; Lemaux, Brian; Wittman, David M.
2017-01-01
The merging galaxy cluster, PLCKG287.0+32.9, is the second most significant detection of the Planck SZ survey. As part of a sample of galaxy clusters being investigated by the Merging Cluster Collaboration ($MC^2$), PLCKG287.0+32.9 is studied to further constrain dark matter properties and improve our understanding of galaxy cluster physics. The galaxy cluster hosts two megaparsec-sized radio relics and a radio halo, evidence of its merger nature. The radio relics are located approximately one and three megaparsecs from the X-ray peak, requiring a somewhat complex merging scenario. A detailed study of the dark matter distribution will provide key information to constrain the merger scenario. With Subaru g- and r-band data, we perform a weak-lensing analysis and determine the dark matter distribution of the merging galaxy cluster. We fit a 2-parameter NFW profile to the tangential shear to quantify the mass. We find that the dark matter peak is aligned with the luminosity peak. Using the dark matter mass distribution, we discuss the significance of the merging constituents and their relation to the luminous emissions.
Gudder, Stanley P
2014-01-01
Quantum probability is a subtle blend of quantum mechanics and classical probability theory. Its important ideas can be traced to the pioneering work of Richard Feynman in his path integral formalism. Only recently have the concept and ideas of quantum probability been presented in a rigorous axiomatic framework, and this book provides a coherent and comprehensive exposition of this approach. It gives a unified treatment of operational statistics, generalized measure theory and the path integral formalism that can only be found in scattered research articles. The first two chapters survey the ne
Amin, Mustafa A; Blandford, Roger D
2007-01-01
The relationship between the metric and nonrelativistic matter distribution depends on the theory of gravity and additional fields, hence providing a possible way of distinguishing competing theories. With the assumption that the geometry and kinematics of the homogeneous universe have been measured, we present a procedure for understanding and testing the relationship between the cosmological matter distribution and metric perturbations (along with their respective evolution), using the ratio of the physical size of the perturbation to the size of the horizon as our small expansion parameter. We expand around Newtonian gravity on linear, sub-horizon scales with coefficient functions in front of the expansion parameter. Our framework relies on an ansatz which ensures that (i) the Poisson equation is recovered on small scales and (ii) the metric variables (and any additional fields) are generated and supported by the nonrelativistic matter overdensity. The scales for which our framework is intended are small enough...
Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...
Shiryaev, Albert N
2016-01-01
This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.
Vogelsberger, Mark; Helmi, Amina; Springel, Volker; White, Simon D. M.; Wang, Jie; Frenk, Carlos S.; Jenkins, Adrian; Ludlow, Aaron; Navarro, Julio F.
2009-01-01
We study predictions for dark matter (DM) phase-space structure near the Sun based on high-resolution simulations of six galaxy haloes taken from the Aquarius project. The local DM density distribution is predicted to be remarkably smooth; the density at the Sun differs from the mean over a best-fit
Vogelsberger, Mark; Helmi, Amina; Springel, Volker; White, Simon D. M.; Wang, Jie; Frenk, Carlos S.; Jenkins, Adrian; Ludlow, Aaron; Navarro, Julio F.
2008-01-01
We study predictions for dark matter phase-space structure near the Sun based on high-resolution simulations of six galaxy halos taken from the Aquarius Project. The local DM density distribution is predicted to be remarkably smooth; the density at the Sun differs from the mean over a best-fit ellip
Robert, Michael A; VanBergen, Saskia; Kleeman, Michael J; Jakober, Christopher A
2007-12-01
Size-resolved particulate matter (PM) emitted from light-duty gasoline vehicles (LDGVs) was characterized using filter-based samplers, cascade impactors, and scanning mobility particle size measurements in summer 2002. Thirty LDGVs, with different engine and emissions control technologies (model years 1965-2003; odometer readings 1264-207,104 mi), were tested on a chassis dynamometer using the federal test procedure (FTP), the unified cycle (UC), and the correction cycle (CC). LDGV PM emissions were strongly correlated with vehicle age and emissions control technology. The oldest models had average ultrafine PM0.1 (0.056- to 0.1-µm aerodynamic diameter) and fine PM1.8 (emission rates of 9.6 mg/km and 213 mg/km, respectively. The newest vehicles had PM0.1 and PM1.8 emissions of 51 µg/km and 371 µg/km, respectively. Light duty trucks and sport utility vehicles had PM0.1 and PM1.8 emissions nearly double the corresponding emission rates from passenger cars. Higher PM emissions were associated with cold starts and hard accelerations. The FTP driving cycle produced the lowest emissions, followed by the UC and the CC. PM mass distributions peaked between 0.1- and 0.18-µm particle diameter for all vehicles except those emitting visible smoke, which peaked between 0.18 and 0.32 µm. The majority of the PM was composed of carbonaceous material, with only trace amounts of water-soluble ions. Elemental carbon (EC) and organic matter (OM) had similar size distributions, but the EC/OM ratio in LDGV exhaust particles was a strong function of the adopted emissions control technology and of vehicle maintenance. Exhaust from LDGV classes with lower PM emissions generally had higher EC/OM ratios. LDGVs adopting newer technologies were characterized by the highest EC/OM ratios, whereas OM dominated PM emissions from older vehicles. Driving cycles with cold starts and hard accelerations produced higher EC/OM ratios in ultrafine particles.
A Summary of Background Knowledge of the Most Probable Distribution Theory
周昱; 魏蔚; 张艳燕; 马晓栋
2011-01-01
This paper summarizes the basic concepts and conclusions needed to derive the most probable distribution theory, and offers comments and insights on the expression of the basic concepts, the understanding of the basic conclusions, and the connections among the related knowledge points.
Lexicographic Probability, Conditional Probability, and Nonstandard Probability
2009-11-11
the following conditions: CP1. μ(U | U) = 1 if U ∈ F′. CP2. μ(V1 ∪ V2 | U) = μ(V1 | U) + μ(V2 | U) if V1 ∩ V2 = ∅, U ∈ F′, and V1, V2 ∈ F. CP3. μ(V | U) = μ(V | X) × μ(X | U) if V ⊆ X ⊆ U, U, X ∈ F′, V ∈ F. Note that it follows from CP1 and CP2 that μ(· | U) is a probability measure on (W, F) (and, in ... CP2 hold. This is easily seen to determine μ. Moreover, μ vacuously satisfies CP3, since there do not exist distinct sets U and X in F′ such that U ...
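On a finite space, ordinary conditioning μ(V | U) = μ(V ∩ U)/μ(U) satisfies CP1-CP3, which a short script can verify exhaustively (an illustration of the axioms, not material from the paper; the weights are an arbitrary example):

```python
from itertools import combinations

W = (0, 1, 2)
weight = {0: 0.5, 1: 0.3, 2: 0.2}           # an ordinary probability on W

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def m(A):
    """Unconditional measure of an event A."""
    return sum(weight[w] for w in A)

def mu(V, U):
    """Conditional probability mu(V | U) = m(V & U) / m(U), for nonempty U."""
    return m(V & U) / m(U)

F = subsets(W)                               # all events
Fp = [U for U in F if U]                     # admissible conditioning events

# CP1: mu(U | U) = 1
assert all(abs(mu(U, U) - 1) < 1e-12 for U in Fp)
# CP2: additivity in the first argument for disjoint V1, V2
assert all(abs(mu(V1 | V2, U) - mu(V1, U) - mu(V2, U)) < 1e-12
           for U in Fp for V1 in F for V2 in F if not (V1 & V2))
# CP3: chain rule for V <= X <= U
assert all(abs(mu(V, U) - mu(V, X) * mu(X, U)) < 1e-12
           for U in Fp for X in Fp for V in F if V <= X <= U)
```

The subtleties the paper addresses arise when conditioning on events of measure zero, where this simple ratio definition breaks down.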
Veldkamp, T. I. E.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.
2016-01-01
Changing hydro-climatic and socioeconomic conditions increasingly put pressure on fresh water resources and are expected to aggravate water scarcity conditions towards the future. Despite numerous calls for risk-based water scarcity assessments, a global-scale framework that includes UNISDR's definition of risk does not yet exist. This study provides a first step towards such a risk-based assessment, applying a Gamma distribution to estimate water scarcity conditions at the global scale under historic and future conditions, using multiple climate change and population growth scenarios. Our study highlights that water scarcity risk, expressed in terms of expected annual exposed population, increases given all future scenarios, up to greater than 56.2% of the global population in 2080. Looking at the drivers of risk, we find that population growth outweighs the impacts of climate change at global and regional scales. Using a risk-based method to assess water scarcity, we show the results to be less sensitive than traditional water scarcity assessments to the use of a fixed threshold to represent different levels of water scarcity. This becomes especially important when moving from global to local scales, whereby deviations increase up to 50% of estimated risk levels.
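Fitting a Gamma distribution to water-availability data, as the study does, can be illustrated with a simple method-of-moments fit (a generic sketch, not the authors' procedure; maximum likelihood is the more usual choice in practice):

```python
def gamma_moments_fit(data):
    """Method-of-moments estimates (shape k, scale theta) for a Gamma fit:
    matching mean = k * theta and variance = k * theta**2 gives
    k = mean**2 / var and theta = var / mean."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    return mean * mean / var, var / mean

# Example: a toy sample with mean 2.5 and variance 1.25
k, theta = gamma_moments_fit([1.0, 2.0, 3.0, 4.0])
```

Percentiles of the fitted Gamma can then serve as water-scarcity thresholds, replacing the fixed thresholds of traditional assessments.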
Hartmann, Stephan
2011-01-01
Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...
The source and distribution of thermogenic dissolved organic matter in the ocean
Dittmar, T.; Suryaputra, I. G. N. A.; Paeng, J.
2009-04-01
Thermogenic organic matter (ThOM) is abundant in the environment. ThOM is produced at elevated temperature and pressure in deep sediments and earth's crust, and it is also a residue of fossil fuel and biomass burning ("black carbon"). Because of its refractory character, it accumulates in soils and sediments and, therefore, may sequester carbon from active cycles. It was hypothesized that a significant component of marine dissolved organic matter (DOM) might be thermogenic. Here we present a detailed data set on the distribution of thermogenic DOM in major water masses of the deep and surface ocean. In addition, several potential sources of thermogenic DOM to the ocean were investigated: active seeps of brine fluids in the deep Gulf of Mexico, rivers, estuaries and submarine groundwaters. Studies on deep-sea hydrothermal vents and aerosol deposition are ongoing. All DOM samples were isolated from seawater via solid phase extraction (SPE-DOM). ThOM was quantified in the extracts as benzene-polycarboxylic acids (BPCAs) after nitric acid oxidation via high-performance liquid chromatography and diode array detection (HPLC-DAD). BPCAs are produced exclusively from fused ring systems and are therefore unambiguous molecular tracers for ThOM. In addition to BPCA determination, the molecular composition and structure of ThOM was characterized in detail via ultrahigh resolution mass spectrometry (FT-ICR-MS). All marine and river DOM samples yielded significant amounts of BPCAs. The cold seep system in the deep Gulf of Mexico, but also black water rivers (like the Suwannee River) were particularly rich in ThOM. Up to 10% of total dissolved organic carbon was thermogenic in both systems. The most abundant BPCA was benzene-pentacarboxylic acid (B5CA). The molecular composition of BPCAs and the FT-ICR-MS data indicate a relatively small number (5-8) of fused aromatic rings per molecule. Overall, the molecular BPCA patterns were very similar independent of the source of Th
Gian Paolo Beretta
2008-08-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as a part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
Beretta, Gian P.
2008-09-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as a part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
Zhang, Guang-Le; Liu, Jian-Guo; Kan, Rui-Feng; Xu, Zhen-Yu
2014-12-01
Line-of-sight tunable-diode-laser absorption spectroscopy (LOS-TDLAS) with multiple absorption lines is introduced for non-uniform temperature measurement. A temperature binning method combined with the Gauss-Seidel iteration method is used to measure the temperature probability distribution function (PDF) along the line of sight (LOS). Through 100 simulated measurements, the variation of measurement accuracy is investigated with the number of absorption lines, the number of temperature bins and the magnitude of temperature non-uniformity. A field model with a 2-T temperature distribution and 15 well-selected absorption lines is used for the simulation study. The reliability of the Gauss-Seidel iteration method is discussed. The result for the variation of measurement accuracy with the number of temperature bins differs from previous research results.
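The Gauss-Seidel iteration used in the temperature-bin retrieval can be sketched on a toy linear system; the matrix and right-hand side below are hypothetical stand-ins for the binned-absorbance equations, not values from the paper:

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b by Gauss-Seidel iteration (assumes A is diagonally dominant)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components x[:i] and previous-sweep values x_old[i+1:].
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Hypothetical 2x2 system standing in for the temperature-bin equations.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([6.0, 9.0])
x = gauss_seidel(A, b)
```

Each sweep updates one unknown at a time using the freshest available values, which is what makes Gauss-Seidel converge faster than Jacobi on diagonally dominant systems.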
Kang, Xi; Wang, Peng, E-mail: kangxi@pmo.ac.cn [Purple Mountain Observatory, the Partner Group of MPI für Astronomie, 2 West Beijing Road, Nanjing 210008 (China)
2015-11-01
The distribution of galaxies displays anisotropy on different scales, a phenomenon often referred to as galaxy alignment. To understand the origin of galaxy alignments on small scales, one must investigate how galaxies were accreted in the early universe and quantify their primordial anisotropy at the time of accretion. In this paper we use N-body simulations to investigate the accretion of subhalos, focusing on their alignment with halo shape and the orientation of mass distribution on the large scale, defined using the Hessian matrix of the density field. The large/small (e1/e3) eigenvalues of the Hessian matrix define the fast/slow collapse direction of matter on the large scale. We find that: (1) the halo major axis is well aligned with the e3 (slow collapse) direction, and the alignment is stronger for massive halos; (2) subhalos are predominantly accreted along the major axis of the host halo, and the alignment increases with the host halo mass. Most importantly, this alignment is universal; (3) accretion of subhalos with respect to the e3 direction is not universal. In massive halos, subhalos are accreted along the e3 direction (even more strongly than the alignment with the halo major axis), but in low-mass halos subhalos are accreted perpendicular to e3. The transitional mass is lower at high redshift. The last result explains well the puzzling correlation (seen both in recent observations and in simulations) that massive galaxies/halos have their spin perpendicular to the filament, while the spin of low-mass galaxies/halos is slightly aligned with the filament, under the assumption that the orbital angular momentum of subhalos is converted to halo spin.
Kang, Xi; Wang, Peng
2015-11-01
The distribution of galaxies displays anisotropy on different scales, a phenomenon often referred to as galaxy alignment. To understand the origin of galaxy alignments on small scales, one must investigate how galaxies were accreted in the early universe and quantify their primordial anisotropy at the time of accretion. In this paper we use N-body simulations to investigate the accretion of subhalos, focusing on their alignment with halo shape and the orientation of mass distribution on the large scale, defined using the Hessian matrix of the density field. The large/small (e1/e3) eigenvalues of the Hessian matrix define the fast/slow collapse direction of matter on the large scale. We find that: (1) the halo major axis is well aligned with the e3 (slow collapse) direction, and the alignment is stronger for massive halos; (2) subhalos are predominantly accreted along the major axis of the host halo, and the alignment increases with the host halo mass. Most importantly, this alignment is universal; (3) accretion of subhalos with respect to the e3 direction is not universal. In massive halos, subhalos are accreted along the e3 direction (even more strongly than the alignment with the halo major axis), but in low-mass halos subhalos are accreted perpendicular to e3. The transitional mass is lower at high redshift. The last result explains well the puzzling correlation (seen both in recent observations and in simulations) that massive galaxies/halos have their spin perpendicular to the filament, while the spin of low-mass galaxies/halos is slightly aligned with the filament, under the assumption that the orbital angular momentum of subhalos is converted to halo spin.
S. Han
2015-05-01
Heavy regional particulate matter (PM) pollution in China has resulted in an important and urgent need for joint control actions among cities. It is advisable to improve the understanding of the regional background concentration of PM for the development of efficient and effective joint control policies. As vertical height increases, the influence of local source emissions on air quality weakens, while the characteristics of regional pollution gradually become more apparent. A method to estimate the regional background PM concentration is proposed in this paper, based on the periodic characteristics of vertical variation in the atmospheric boundary layer structure and particle mass concentration, as well as the vertical distribution of particle size, chemical composition and pollution source apportionment. According to the method, the averaged regional background PM2.5 concentration, extracted from the original time series in Tianjin, was 40.0 ± 20.2, 63.6 ± 16.9 and 53.2 ± 11.1 μg m−3 in July, August and September, respectively.
A toy model for the large-scale matter distribution in the Universe
Leigh, Nathan W C
2016-01-01
We consider a toy model for the large-scale matter distribution in a static Universe. The model assumes a mass spectrum dN$_{\rm i}$/dm$_{\rm i}$ $=$ $\beta$m$_{\rm i}^{-\alpha}$ (where $\alpha$ and $\beta$ are both positive constants) for low-mass particles with m$_{\rm i}$ $\ll$ M$_{\rm P}$, where M$_{\rm P}$ is the Planck mass, and a particle mass-wavelength relation of the form $\lambda_{\rm i} =$ $\hbar$/$\delta_{\rm i}$m$_{\rm i}$c, where $\delta_{\rm i} =$ $\eta$m$_{\rm i}^{\gamma}$ and $\eta$ and $\gamma$ are both constants. Our model mainly concerns particles with masses far below those in the Standard Model of Particle Physics. We assume that, for such low-mass particles, locality can only be defined on large spatial scales, comparable to or exceeding the particle wavelengths. We use our model to derive the cosmological redshift characteristic of the Standard Model of Cosmology, which becomes a gravitational redshift in our model. We compare the results of our model to empirical data and show that, ...
Vlah, Zvonimir; Okumura, Teppei; Desjacques, Vincent
2013-01-01
Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k<0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled, these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use perturbation theory (PT) and a halo biasing model, applying them within the distribution function approach to RSD, in which RSD is decomposed into several correlators of density weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with the introduction of a physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at the percent level on scales up to k~0.15h/Mpc at z=0, without the need to have fr...
Vlah, Zvonimir; McDonald, Patrick; Okumura, Teppei; Baldauf, Tobias
2012-01-01
We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over correlators of density weighted velocity moments, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with standard PT and which need additional terms that include higher order corrections that cannot be modeled in PT. Most of these additional terms are related to small scale velocity dispersion effects, the so-called finger-of-god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dis...
Portail, Matthieu; Wegg, Christopher; Ness, Melissa
2016-01-01
We construct a large set of dynamical models of the galactic bulge, bar and inner disk using the Made-to-Measure method. Our models are constrained to match the red clump giant density from a combination of the VVV, UKIDSS and 2MASS infrared surveys together with stellar kinematics in the bulge from the BRAVA and OGLE surveys, and in the entire bar region from the ARGOS survey. We are able to recover the bar pattern speed and the stellar and dark matter mass distributions in the bar region, thus recovering the entire galactic effective potential. We find a bar pattern speed of $39.0 \\pm 3.5 \\,\\rm{km\\,s^{-1}\\,kpc^{-1}}$, placing the bar corotation radius at $6.1 \\pm 0.5 \\, \\rm{kpc}$ and making the Milky Way bar a typical fast rotator. We evaluate the stellar mass of the long bar and bulge structure to be $M_{\\rm{bar/bulge}} = 1.88 \\pm 0.12 \\times 10^{10} \\, \\rm{M}_{\\odot}$, larger than the mass of disk in the bar region, $M_{\\rm{inner\\ disk}} = 1.29\\pm0.12 \\times 10^{10} \\, \\rm{M}_{\\odot}$. The total dynamical...
Rojas-Nandayapa, Leonardo
Tail probabilities of sums of heavy-tailed random variables are of major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory and Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think of insurance companies facing losses due to natural disasters, banks seeking protection against huge losses, failures in expensive and sophisticated systems or loss of valuable information in electronic systems. The main difficulty when dealing with this kind of problem is the unavailability of a closed...
Dark matter in the Milky Way, II. the HI gas distribution as a tracer of the gravitational potential
Kalberla, P M W; Kerp, J; Haud, U
2007-01-01
Context. Gas within a galaxy is forced to establish pressure balance against gravitational forces. The shape of an unperturbed gaseous disk can be used to constrain dark matter models. Aims. We derive the 3-D HI volume density distribution for the Milky Way out to a galactocentric radius of 40 kpc and a height of 20 kpc to constrain the Galactic mass distribution. Methods. We used the Leiden/Argentine/Bonn all-sky 21-cm line survey. The transformation from brightness temperatures to densities depends on the rotation curve. We explored several models, reflecting different dark matter distributions. Each of these models was set up to solve the combined Poisson-Boltzmann equation in a self-consistent way and optimized to reproduce the observed flaring. Results. Besides a massive extended halo of M ~ 1.8 × 10^{12} Msun, we find a self-gravitating dark matter disk with M = 2 to 3 × 10^{11} Msun, including a dark matter ring at 13 < R < 18.5 kpc with M = 2.2 to 2.8 × 10^{10} Msun. The existence of the ring was previo...
Butsky, Iryna; Dutton, Aaron A; Wang, Liang; Stinson, Greg S; Penzo, Camilla; Kang, Xi; Keller, Ben W; Wadsley, James
2015-01-01
We show the effect of galaxy formation on the dark matter (DM) distribution across a wide range of halo masses. We focus on how baryon physics changes the dark matter halo shape, the so-called "pseudo phase-space density distribution" and the velocity distribution within the virial radius, Rvir, and in the solar neighborhood. This study is based on the NIHAO galaxy formation simulations, a large suite of cosmological zoom-in simulations. The galaxies reproduce key properties of observed galaxies, and hence offer unique insight into how baryons change the dark matter morphology and kinematics. When compared to dark matter only simulations, the NIHAO haloes have similar shapes at Rvir, but are substantially rounder inside ~0.1 Rvir. In DM-only simulations the inner halo has a minor-to-major axis ratio of c/a~0.5. In hydro simulations c/a increases with halo mass and integrated star formation efficiency, reaching ~0.8 at the Milky Way mass, reconciling a long-standing conflict between observations and DM only sim...
SHA Yun-dong; GUO Xiao-peng; LIAO Lian-fang; XIE Li-juan
2011-01-01
For the sonic fatigue problem of an aero-engine combustor liner structure under random acoustic loadings, an effective method for predicting the fatigue life of the structure was studied. Firstly, the probability distribution of the Von Mises stress of a thin-walled structure under random loadings was studied; the analysis suggested that the probability density function of the Von Mises stress process accords approximately with a two-parameter Weibull distribution. Formulas for calculating the Weibull parameters are given. Based on the Miner linear theory, a method for predicting the random sonic fatigue life based on the stress probability density was developed, and a model for fatigue life prediction was constructed. As an example, an aero-engine combustor liner structure was considered. The power spectral density (PSD) of the vibrational stress response was calculated by using a coupled FEM/BEM (finite element method/boundary element method) model, and the fatigue life was estimated by using the constructed model. Considering the influence of the wide frequency band, the calculated results were modified. Comparative analysis shows that the estimates of the sonic fatigue life of the combustor liner structure obtained using the Weibull distribution of the Von Mises stress are somewhat more conservative than those obtained using the Dirlik distribution. The results show that the methods presented in this paper are practical for the random fatigue life analysis of aeronautical thin-walled structures.
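The Miner-rule life estimate from a Weibull stress PDF reduces to a numerical integral of damage per stress peak; the sketch below uses hypothetical Weibull, S-N curve and peak-rate parameters chosen for illustration, not the liner-structure values:

```python
import numpy as np

# Hypothetical parameters: Weibull shape k and scale lam for the Von Mises
# stress peaks (MPa), an assumed S-N curve N(S) = C * S**(-m), and a peak rate.
k, lam = 2.0, 50.0
m, C = 3.0, 1e12
peak_rate = 100.0                     # stress peaks per second (assumed)

S = np.linspace(1e-3, 500.0, 20000)   # stress grid, MPa
dS = S[1] - S[0]
weibull_pdf = (k / lam) * (S / lam)**(k - 1) * np.exp(-(S / lam)**k)

# Miner's rule: expected damage per second = peak_rate * integral of p(S)/N(S) dS.
damage_rate = peak_rate * np.sum(weibull_pdf * S**m / C) * dS
life_seconds = 1.0 / damage_rate      # life at which cumulative damage reaches 1
```

The integral is the m-th Weibull moment scaled by the S-N constant, so heavier stress tails (larger scale or smaller shape) shorten the predicted life sharply.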
Pavlov, Alexey K.; Stedmon, Colin A.; Semushin, Andrey V.; Martma, Tõnu; Ivanov, Boris V.; Kowalczuk, Piotr; Granskog, Mats A.
2016-05-01
The White Sea is a semi-enclosed Arctic marginal sea receiving a significant loading of freshwater (225-231 km3 yr-1, equaling an annual runoff yield of 2.5 m) and dissolved organic matter (DOM) from river run-off. We report discharge-weighted values of stable oxygen isotope ratios (δ18O) of -14.0‰ in the Northern Dvina river for the period 10 May-12 October 2012. We found a significant linear relationship between salinity (S) and δ18O (δ18O=-17.66±0.58+0.52±0.02×S; R2=0.96, N=162), which indicates a dominant contribution of river water to the freshwater budget and little influence of sea ice formation or melt. No apparent brine additions from sea-ice formation are evident in the White Sea deep waters as seen from a joint analysis of temperature (T), S, δ18O and aCDOM(350) data, confirming previous suggestions about strong tidally induced vertical mixing in winter being the likely source of the deep waters. We investigated the properties and distribution of colored dissolved organic matter (CDOM) and dissolved organic carbon (DOC) in the White Sea basin and coastal areas in summer. We found contrasting DOM properties in the inflowing Barents Sea waters and White Sea waters influenced by terrestrial runoff. Values of absorption by CDOM at 350 nm (aCDOM(350)) and DOC (exceeding 10 m-1 and 550 μmol l-1, respectively) in surface waters of the White Sea basin are higher compared to other river-influenced coastal Arctic domains. The linear relationships between S and CDOM absorption, and between S and DOC (DOC=959.21±52.99-25.80±1.79×S; R2=0.85; N=154) concentrations, suggest conservative mixing of DOM in the White Sea. The strongest linear correlation between CDOM absorption and DOC was found in the ultraviolet (DOC=56.31±2.76+9.13±0.15×aCDOM(254); R2=0.99; N=155), which provides an easy and robust tool to trace DOC using CDOM absorption measurements as well as remote sensing algorithms. Deviations from this linear relationship in surface waters likely indicate contribution from
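A conservative-mixing relationship like the DOC-aCDOM regression quoted above is an ordinary least-squares line. The sketch below fits synthetic points generated around the abstract's coefficients (DOC = 56.31 + 9.13 × aCDOM(254)); the sample values and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic aCDOM(254) values (m^-1) and DOC (umol/l) scattered around the
# published regression line; the uniform range and noise sigma are assumptions.
a254 = rng.uniform(1.0, 60.0, size=155)
doc = 56.31 + 9.13 * a254 + rng.normal(0.0, 5.0, size=155)

# Least-squares line; np.polyfit returns [slope, intercept] for degree 1.
slope, intercept = np.polyfit(a254, doc, 1)
```

With 155 points and modest scatter, the recovered slope and intercept land close to the generating coefficients, mirroring the tight R2=0.99 relationship reported.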
Yao, Ling; Lu, Ning
2014-01-01
the high-level regions are distributed in Guangdong, Shanghai, and Tianjin, while the latter lie in Hebei, Chongqing, and Shandong provinces. Further studies may consider optimizing the concentration estimation model and using it to discuss the effects of particulate matter on human health.
Probability Distribution of Airport Capacity Affected by Weather
张静; 徐肖豪; 王飞
2011-01-01
Weather is a major factor causing airport capacity reduction. To reflect the impact of weather on capacity, an n-phase arrival capacity distribution model is established. The historical weather data are translated into a capacity probability distribution for each weather type through a weather-type decision tree. According to the capacity probability distribution of each weather type, probabilistic weather forecasts are translated into probabilistic capacity forecasts by using the total probability formula. Weather forecasts for a day are simulated according to 5 years of hourly airport weather data, and a set of n-phase arrival capacity distributions based on the weather forecasts is obtained. Simulation results indicate that inclement weather forecasts at different times can be translated into a set of stochastic capacity forecasts, thus meeting the needs of real-time traffic flow management.
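The total-probability translation of a probabilistic weather forecast into a capacity forecast can be sketched with a toy example; the weather types, probabilities and capacity levels below are hypothetical, not values from the study:

```python
# Law of total probability: P(capacity = c) = sum over weather types w of P(c | w) * P(w).
weather_forecast = {"clear": 0.6, "rain": 0.3, "storm": 0.1}   # P(w), from forecast
capacity_given_weather = {                                      # P(c | w), from history
    "clear": {60: 0.9, 40: 0.1},
    "rain":  {60: 0.3, 40: 0.6, 20: 0.1},
    "storm": {40: 0.2, 20: 0.8},
}

capacity_forecast = {}
for w, p_w in weather_forecast.items():
    for cap, p_c in capacity_given_weather[w].items():
        capacity_forecast[cap] = capacity_forecast.get(cap, 0.0) + p_w * p_c
```

The conditional tables play the role of the per-weather-type capacity distributions derived from the decision tree; repeating the sum for each forecast hour yields the n-phase capacity distribution.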
Rawlins, Barry G.; Wragg, Joanna; Reinhard, Christina; Atwood, Robert C.; Houston, Alasdair; Lark, R. Murray; Rudolph, Sebastian
2016-12-01
The spatial distribution and accessibility of organic matter (OM) to soil microbes in aggregates - determined by the fine-scale, 3-D distribution of OM, pores and mineral phases - may be an important control on the magnitude of soil heterotrophic respiration (SHR). Attempts to model SHR on fine scales require data on the transition probabilities between adjacent pore space and soil OM, a measure of microbial accessibility to the latter. We used a combination of osmium staining and synchrotron X-ray computed tomography (CT) to determine the 3-D (voxel) distribution of these three phases (scale 6.6 µm) throughout nine aggregates taken from a single soil core (range of organic carbon (OC) concentrations: 4.2-7.7 %). Prior to the synchrotron analyses we had measured the magnitude of SHR for each aggregate over 24 h under controlled conditions (moisture content and temperature). We test the hypothesis that larger magnitudes of SHR will be observed in aggregates with (i) shorter length scales of OM variation (more aerobic microsites) and (ii) larger transition probabilities between OM and pore voxels. After scaling to their OC concentrations, there was a 6-fold variation in the magnitude of SHR for the nine aggregates. The distribution of pore diameters and tortuosity index values for pore branches was similar for each of the nine aggregates. The Pearson correlation between aggregate surface area (normalized by aggregate volume) and normalized headspace C gas concentration was both positive and reasonably large (r = 0.44), suggesting that the former may be a factor that influences SHR. The overall transition probabilities between OM and pore voxels were between 0.07 and 0.17, smaller than those used in previous simulation studies. We computed the length scales over which OM, pore and mineral phases vary within each aggregate using 3-D indicator variograms. The median range of models fitted to variograms of OM varied between 38 and 175 µm and was generally larger than
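An OM-pore transition probability of the kind quoted above is the conditional probability that a face-adjacent neighbor of an OM voxel is pore. A minimal sketch on a synthetic labeled volume (the labels and random volume are illustrative, not the CT data):

```python
import numpy as np

# Synthetic 3-phase voxel volume; in the study the phases are OM, pore and mineral.
OM, PORE, MINERAL = 0, 1, 2
rng = np.random.default_rng(0)
vol = rng.integers(0, 3, size=(20, 20, 20))

# Face-adjacent voxel pairs along one axis.
a, b = vol[:-1, :, :], vol[1:, :, :]
om_mask = (a == OM)
p_om_to_pore = np.mean(b[om_mask] == PORE)   # P(neighbor = pore | voxel = OM)
```

For uniformly random labels this estimate sits near 1/3; the measured aggregates gave only 0.07-0.17 because OM occupies a small, spatially clustered fraction of the volume.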
Landau-Zener Probability Reviewed
Valencia, C
2008-01-01
We examine the survival probability for neutrino propagation through matter with variable density. We present a new method to calculate the level-crossing probability that differs from Landau's method by a constant factor, which is relevant in the interpretation of the neutrino flux from a supernova explosion.
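The standard Landau-Zener level-crossing probability, the baseline that the paper's constant-factor correction modifies, can be sketched as follows; the units and parameter values are purely illustrative:

```python
import math

def landau_zener_probability(coupling, sweep_rate, hbar=1.0):
    """Diabatic transition probability in the two-level Landau-Zener model:
    P = exp(-2*pi*|H12|**2 / (hbar * |d(E1 - E2)/dt|))."""
    gamma = coupling**2 / (hbar * abs(sweep_rate))
    return math.exp(-2.0 * math.pi * gamma)

# Illustrative values: weak coupling and a fast sweep give P close to 1
# (the crossing is traversed diabatically).
p = landau_zener_probability(coupling=0.1, sweep_rate=1.0)
```

Slower sweeps or stronger coupling drive the exponent up and the crossing probability toward zero, which is the adiabatic (MSW-like) limit relevant for neutrinos in slowly varying density profiles.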
Velliscig, Marco; Schaye, Joop; Bower, Richard G; Crain, Robert A; van Daalen, Marcel P; Vecchia, Claudio Dalla; Frenk, Carlos S; Furlong, Michelle; McCarthy, Ian G; Schaller, Matthieu; Theuns, Tom
2015-01-01
We report the alignment and shape of dark matter, stellar, and hot gas distributions in the EAGLE and cosmo-OWLS simulations. The combination of these state-of-the-art hydro-cosmological simulations enables us to span four orders of magnitude in halo mass ($11 < log_{10}(M_{200}/ [h^{-1}M_\odot]) < 15$), a wide radial range ($-2.3 < log_{10}(r/[h^{-1}Mpc ]) < 1.3$) and redshifts $0 < z < 1$. The shape parameters of the dark matter, stellar and hot gas distributions follow qualitatively similar trends: they become more aspherical (and triaxial) with increasing halo mass, radius and redshift. We measure the misalignment of the baryonic components (hot gas and stars) of galaxies with their host halo as a function of halo mass, radius, redshift, and galaxy type (centrals vs satellites and early- vs late-type). Overall, galaxies align well with the local distribution of the total (mostly dark) matter. However, the stellar distributions on galactic scales exhibit a median misalignment of about 45-50 d...
Probability Theory without Bayes' Rule
Rodriques, Samuel G.
2014-01-01
Within the Kolmogorov theory of probability, Bayes' rule allows one to perform statistical inference by relating conditional probabilities to unconditional probabilities. As we show here, however, there is a continuous set of alternative inference rules that yield the same results, and that may have computational or practical advantages for certain problems. We formulate generalized axioms for probability theory, according to which the reverse conditional probability distribution P(B|A) is no...
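The standard Bayes-rule inference that these generalized axioms must reproduce can be sketched for a two-hypothesis case; the priors and likelihoods below are hypothetical:

```python
# Bayes' rule: P(H | data) = P(data | H) * P(H) / P(data),
# with P(data) obtained by summing over the hypothesis space.
prior = {"H1": 0.5, "H2": 0.5}
likelihood = {"H1": 0.8, "H2": 0.2}   # P(data | H)

evidence = sum(prior[h] * likelihood[h] for h in prior)            # P(data)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
```

Any alternative inference rule of the kind the paper describes must map the same prior and likelihood to this same posterior, differing only in the intermediate quantities it computes.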
Shan, Chung-Lin
2014-01-01
In this paper, we extend our earlier work on the reconstruction of the (time-averaged) one-dimensional velocity distribution of Galactic Weakly Interacting Massive Particles (WIMPs) and introduce a Bayesian fitting procedure for the theoretically predicted velocity distribution functions. In this reconstruction process, the (rough) velocity distribution reconstructed by using raw data from direct Dark Matter detection experiments, i.e. measured recoil energies, with one or more different target materials, has been used as "reconstructed-input" information. By assuming a fitting velocity distribution function and scanning the parameter space based on the Bayesian analysis, the astronomical characteristic parameters, e.g. the Solar and Earth's orbital velocities, will be pinned down as the output results. Our Monte-Carlo simulations show that this Bayesian scanning procedure could reconstruct the true (input) WIMP velocity distribution function pretty precisely with negligible systematic deviations ...
The distribution of dark matter in the Milky Way's disk
Pillepich, Annalisa; Kuhlen, Michael; Madau, Piero [Department of Astronomy and Astrophysics, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Guedes, Javiera, E-mail: apillepich@cfa.harvard.edu [ETH Zurich, Institute for Astronomy, Wolfgang-Pauli-Strasse 27, CH-8049 Zurich (Switzerland)
2014-04-01
We present an analysis of the effects of dissipational baryonic physics on the local dark matter (DM) distribution at the location of the Sun, with an emphasis on the consequences for direct detection experiments. Our work is based on a comparative analysis of two cosmological simulations with identical initial conditions of a Milky Way halo, one of which (Eris) is a full hydrodynamic simulation and the other (ErisDark) is a DM-only one. We find that in Eris two distinct processes lead to a 30% enhancement of DM in the disk plane at the location of the Sun: the accretion and disruption of satellites resulting in a DM component with net angular momentum, and the contraction of baryons pulling the DM into the disk plane without forcing it to co-rotate. Owing to its particularly quiescent merger history for dark halos of Milky Way mass, the co-rotating dark disk in Eris is less massive than what has been suggested by previous work, contributing only 9% of the local DM density. Yet, since the simulation results in a realistic Milky Way analog galaxy, its DM halo provides a plausible alternative to the Maxwellian standard halo model (SHM) commonly used in direct detection analyses. The speed distribution in Eris is broadened and shifted to higher speeds, compared to its DM-only twin simulation ErisDark. At high speeds f(v) falls more steeply in Eris than in ErisDark or the SHM, easing the tension between recent results from the CDMS-II and XENON100 experiments. The non-Maxwellian aspects of f(v) are still present, but much less pronounced in Eris than in the DM-only runs. The weak dark disk increases the time-averaged scattering rate by only a few percent at low recoil energies. On the high velocity tail, however, the increase in typical speeds due to baryonic contraction results in strongly enhanced mean scattering rates compared to ErisDark, although they are still suppressed compared to the SHM. Similar trends are seen regarding the amplitude of the annual modulation
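The Maxwellian standard halo model f(v) against which the Eris speed distribution is compared can be sketched directly; v0 = 220 km/s is the conventional choice, and this minimal version omits the escape-speed cutoff:

```python
import numpy as np

v0 = 220.0                                   # km/s, conventional SHM dispersion scale
v = np.linspace(0.0, 1000.0, 20001)          # speed grid, km/s
f = 4.0 / np.sqrt(np.pi) * v**2 / v0**3 * np.exp(-(v / v0)**2)

def trapezoid(y, x):
    """Trapezoid-rule integral (written out to avoid NumPy-version-dependent helpers)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

norm = trapezoid(f, v)                       # normalization; should be ~1
v_mean = trapezoid(v * f, v)                 # mean speed, analytically 2*v0/sqrt(pi)
```

The baryon-induced shift to higher speeds reported in Eris corresponds to extra weight in the tail of this distribution relative to the Maxwellian form.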
Acharya, Shiba Shankar; Panigrahi, Mruganka K.
2016-09-01
The Eastern Arabian Shelf (EAS) is a region of high primary production and a part of an intense oxygen minimum zone as well. The EAS is a zone of significant accumulation of organic matter that is ascribable either to the prevalent anoxic condition or to high primary productivity. There has been a considerable amount of debate on the dominant factor responsible for the enrichment of organic matter in the sediments of the EAS. The present study is an attempt to resolve the issue through robust geostatistical analysis of published and unpublished data. Results of Empirical Bayesian kriging (EBK) and geographically weighted regression (GWR) of available data help to obtain a refined distribution of organic carbon and phosphorus in the Eastern Arabian Shelf as compared to the earlier known distribution patterns. The primary productivity, evaluated through the latest satellite dataset using the Vertically Generalized Production Model, does not show any similarity with the distribution pattern of either organic carbon (Corg) or phosphorus, which was determined based on the in situ data. The negative correlations of primary production with Corg (r=-0.14) and P (r=-0.4) indicate that primary productivity is the most unlikely modulator of organic matter accumulation in the EAS. The negative correlation of bottom water oxygen concentration with Corg (r=-0.39) and the Ti-normalized fraction of organic carbon (r=-0.56) indicates that anoxia plays a major role in the preservation of organic matter in the EAS. The mass accumulation rates of Corg and phosphorus show a strong dependency on sedimentation rate (r>0.88), which indicates that the accumulation rate of sediments outweighs the other depositional parameters in controlling the accumulation of organic matter in the EAS.
Hughes, Annie; Schinnerer, Eva; Colombo, Dario; Pety, Jerome; Leroy, Adam K; Dobbs, Clare L; Garcia-Burillo, Santiago; Thompson, Todd A; Dumas, Gaelle; Schuster, Karl F; Kramer, Carsten
2013-01-01
We analyse the distribution of CO brightness temperature and integrated intensity in M51 at ~40 pc resolution using new CO data from the Plateau de Bure Arcsecond Whirlpool Survey (PAWS). We present probability distribution functions (PDFs) of the CO emission within the PAWS field, which covers the inner 11 x 7 kpc of M51. We find variations in the shape of CO PDFs within different M51 environments, and between M51 and M33 and the Large Magellanic Cloud (LMC). Globally, the PDFs for the inner disk of M51 can be represented by narrow lognormal functions that cover 1 to 2 orders of magnitude in CO brightness and integrated intensity. The PDFs for M33 and the LMC are narrower and peak at lower CO intensities. However, the CO PDFs for different dynamical environments within the PAWS field depart from the shape of the global distribution. The PDFs for the interarm region are approximately lognormal, but in the spiral arms and central region of M51, they exhibit diverse shapes with a significant excess of bright CO...
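A narrow lognormal intensity PDF of the kind found for the inner disk means that log10(intensity) is normally distributed; the sketch below draws mock intensities and recovers the lognormal parameters from the log values (mock data, not PAWS measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lognormal parameters in log10 intensity units.
log_mean, log_sigma = 0.5, 0.4
intensity = 10.0 ** rng.normal(log_mean, log_sigma, size=50000)

# Fitting a lognormal PDF amounts to fitting a Gaussian to the log values.
mu_hat = np.mean(np.log10(intensity))
sigma_hat = np.std(np.log10(intensity))
```

Departures from this fit, such as the bright-end excess seen in the spiral arms, show up as residual skew in the log-intensity histogram rather than a shifted mean.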
Probability Distributions over Cryptographic Protocols
2009-06-01
Cryptyc integrates use of pattern-matching in the spi calculus framework, which in turn allows the specification of nested cryptographic ...
Dark matter distribution in the universe and ultra-high energy cosmic rays
Blasi, P
2000-01-01
Two of the greatest mysteries of modern physics are the origin of the dark matter in the universe and the nature of the highest energy particles in the cosmic ray spectrum. We discuss here possible direct and indirect connections between these two problems, with particular attention to two cases: in the first we study the local clustering of possible sources of ultra-high energy cosmic rays (UHECRs) driven by the local dark matter overdensity. In the second case we study the possibility that UHECRs are directly generated by the decay of weakly unstable super heavy dark matter.
Singh, S.; Dash, P.; Moorhead, R.
2015-12-01
Lakes and estuaries can serve as indicators of the overall health of terrestrial and aquatic ecosystems. The characteristics of dissolved organic matter (DOM) in these water bodies provide insights into the biogeochemical processes occurring at the source, during transport, and in the water bodies themselves. Land use and land cover not only play a significant role in controlling the quantity of the exported DOM, but also influence the quality of DOM via various biogeochemical and biodegradation processes. We investigated the characteristics and spatial distribution of DOM in five major lakes - Sardis, Enid, Grenada, Okatibbee, and Ross Barnett Reservoir (RBR) - an estuary, the Lower Pearl River (LPR) Estuary, and a coastal region, Grand Bay, in the state of Mississippi, USA. Water samples from the lakes and Grand Bay were collected during the summers of 2012-2014, while samples from the LPR were collected during winter 2014 and spring 2015. We employed absorption and fluorescence spectroscopy, including excitation emission matrix (EEM) spectroscopy combined with parallel factor analysis (PARAFAC) modeling, to determine the optical properties of DOM and its characteristics in these study sites. A site-specific PARAFAC model was developed to evaluate DOM characteristics, which resolved five diverse DOM components: two terrestrial humic-like, two microbial humic-like, and one protein-like. The lakes and the Grand Bay region showed high concentrations of microbial humic-like or protein-like DOM fluorescence signatures, while the samples from the LPR Estuary and the RBR showed relatively high concentrations of terrestrial humic-like DOM. Moreover, we also observed strong correlations between microbial humic-like DOM (PARAFAC derived) and DOM indices such as the biological/freshness and fluorescence indices (EEM based). DOM in the lakes showed predominantly autochthonous characteristics, probably because of photochemical degradation, while the LPR Estuary and the RBR samples showed mainly
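The EEM-based indices mentioned above are simple intensity ratios read off the excitation-emission matrix. The sketch below computes a McKnight-style fluorescence index from a synthetic EEM; the wavelength grid and intensities are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical excitation-emission matrix (EEM): rows = excitation wavelengths,
# cols = emission wavelengths, values = fluorescence intensity (arbitrary units).
ex = np.arange(240, 451, 5)   # excitation 240-450 nm
em = np.arange(300, 601, 2)   # emission 300-600 nm
eem = rng.random((ex.size, em.size)) + 0.1  # strictly positive intensities

def fluorescence_index(eem, ex, em):
    """Fluorescence index: ratio of emission intensity at 470 nm to that at
    520 nm for excitation at 370 nm. Higher values are commonly read as
    microbially derived DOM, lower values as terrestrially derived DOM."""
    i_ex = np.argmin(np.abs(ex - 370))
    i470 = np.argmin(np.abs(em - 470))
    i520 = np.argmin(np.abs(em - 520))
    return eem[i_ex, i470] / eem[i_ex, i520]

fi = fluorescence_index(eem, ex, em)
print(f"FI = {fi:.2f}")
```

PARAFAC itself decomposes a stack of such EEMs (a samples x excitation x emission tensor) into a small number of trilinear components; the five components named in the abstract are the output of that decomposition.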
Pérez-Sánchez, Julio; Senent-Aparicio, Javier
2017-08-01
Dry spells are an essential concept of drought climatology that clearly defines the semiarid Mediterranean environment, and their consequences are a defining feature for an ecosystem so vulnerable with regard to water. The present study was conducted to characterize rainfall drought in the Segura River basin located in eastern Spain, marked by the strongly seasonal nature of these latitudes. A daily precipitation data set was used for 29 weather stations over a period of 20 years (1993-2013). Furthermore, four sets of dry spell lengths (complete series, monthly maximum, seasonal maximum, and annual maximum) are used and simulated for all the weather stations with the following probability distribution functions: Burr, Dagum, error, generalized extreme value, generalized logistic, generalized Pareto, Gumbel Max, inverse Gaussian, Johnson SB, Log-Logistic, Log-Pearson 3, Triangular, Weibull, and Wakeby. Only the series of annual maximum spells offers a good fit for all the weather stations, with the Wakeby distribution emerging as the best result, with a mean p value of 0.9424 for the Kolmogorov-Smirnov test (0.2 significance level). Maps of dry spell duration for return periods of 2, 5, 10, and 25 years reveal a northeast-southeast gradient, with increasing periods of daily rainfall of less than 0.1 mm in the eastern third of the basin, in the proximity of the Mediterranean slope.
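The fit-and-test workflow above can be sketched with SciPy. SciPy has no Wakeby distribution, so the generalized extreme value (GEV) family stands in for it here; the 20-year annual-maximum series is synthetic. The procedure (fit a candidate distribution, check it with a Kolmogorov-Smirnov test, then read off return-period quantiles) mirrors the study's method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical 20-year series of annual-maximum dry spell lengths (days)
# for one weather station; values are illustrative only.
annual_max = rng.gumbel(loc=45, scale=10, size=20)

# Fit a GEV distribution (stand-in for Wakeby, which SciPy lacks) and
# assess goodness of fit with the Kolmogorov-Smirnov test.
params = stats.genextreme.fit(annual_max)
ks = stats.kstest(annual_max, 'genextreme', args=params)
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")

# Dry spell length for a 25-year return period (exceedance probability 1/25).
d25 = stats.genextreme.ppf(1 - 1 / 25, *params)
print(f"25-year return dry spell ~ {d25:.1f} days")
```

Note that testing a distribution on the same sample used to fit it makes the standard KS p-value optimistic; with the lenient 0.2 significance level quoted in the abstract, that caveat matters when comparing candidate families.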
Ramos, Alessandro Candido Lopes [CELG - Companhia Energetica de Goias, Goiania, GO (Brazil). Generation and Transmission. System's Operation Center], E-mail: alessandro.clr@celg.com.br; Batista, Adalberto Jose [Universidade Federal de Goias (UFG), Goiania, GO (Brazil)], E-mail: batista@eee.ufg.br; Leborgne, Roberto Chouhy [Universidade Federal do Rio Grande do Sul (UFRS), Porto Alegre, RS (Brazil)], E-mail: rcl@ece.ufrgs.br; Emiliano, Pedro Henrique Mota, E-mail: ph@phph.com.br
2009-07-01
This article presents the impact of distributed generation (DG) on studies of voltage sags caused by faults in the electrical system. Short circuits to ground were simulated on 62 lines at 230, 138, 69, and 13.8 kV that are part of the electrical system of the city of Goiania, Goias state. For each fault position, the voltage of a 380 V bus serving an industrial consumer sensitive to such sags was monitored. Different levels of DG were inserted near the consumer, and the short-circuit simulations, with monitoring of the 380 V bus, were performed again. A stochastic Monte Carlo simulation (MCS) study was carried out to obtain, at each DG level, the probability curves and probability densities of the sags by voltage class. From these curves, the average number of sags in each class to which the consumer bus may be subjected annually was obtained. The simulations were performed using the Simultaneous Fault Analysis Program (ANAFAS). In order to overcome the intrinsic limitations of this program's simulation methods and to allow data entry via windows, a computational tool was developed in the Java language. Data processing was done using the MATLAB software.
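The Monte Carlo counting step can be sketched as follows. Everything here is a hypothetical stand-in (fault rate, residual-voltage distribution, class boundaries) rather than the ANAFAS-based model: random faults produce residual voltages at the monitored bus, and sags are binned into magnitude classes to yield average annual counts per class.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed parameters for the sketch (not from the study).
n_years = 1000                  # Monte Carlo years simulated
faults_per_year = 40            # assumed system-wide fault rate
classes = [(0.0, 0.4), (0.4, 0.7), (0.7, 0.9)]  # residual voltage bins (p.u.)

counts = np.zeros(len(classes))
for _ in range(n_years):
    # Residual voltage at the 380 V consumer bus for each fault (p.u.);
    # a beta distribution skewed toward mild sags stands in for the
    # fault-position-dependent voltages a network model would produce.
    v = rng.beta(5, 2, size=faults_per_year)
    for k, (v_lo, v_hi) in enumerate(classes):
        counts[k] += np.sum((v >= v_lo) & (v < v_hi))

# Average annual number of sags per voltage class.
for (v_lo, v_hi), c in zip(classes, counts / n_years):
    print(f"{v_lo:.1f}-{v_hi:.1f} p.u.: {c:.1f} sags/year")
```

Inserting DG near the consumer would change the residual-voltage distribution (typically raising it during remote faults), which in this sketch corresponds to shifting mass toward the higher-voltage classes.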
Rozgacheva, I K
2016-01-01
An exact solution of the general relativity equations for a centrally symmetric distribution of a pseudoscalar field with U(1) symmetry is presented. It is found that the energy density of the field is finite at the symmetry center and falls off with radial distance at a much slower rate than in numerical simulations of dark matter haloes. This result can solve the cusp problem for galaxies and the problem of rotation curves in galaxy exteriors.