Probability, random variables, and random processes theory and signal processing applications
Shynk, John J
2012-01-01
Probability, Random Variables, and Random Processes is a comprehensive textbook on probability theory for engineers that provides a more rigorous mathematical framework than is usually encountered in undergraduate courses. It is intended for first-year graduate students who have some familiarity with probability and random variables, though not necessarily with random processes and systems that operate on random signals. It is also appropriate for advanced undergraduate students who have a strong mathematical background. The book has the following features: Several app
A lower bound on the probability that a binomial random variable is exceeding its mean
Pelekis, Christos; Ramon, Jan
2016-01-01
We provide a lower bound on the probability that a binomial random variable exceeds its mean. Our proof employs estimates on the mean absolute deviation and the tail conditional expectation of binomial random variables.
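The quantity being bounded can be computed exactly for small n from the binomial pmf. A minimal Python sketch (the function name is ours; here "exceeding its mean" is read as the tail P(X ≥ np), a detail the abstract leaves open):

```python
import math
from math import comb

def binom_tail_at_mean(n, p):
    """P(X >= n*p) for X ~ Binomial(n, p), computed exactly
    by summing the binomial pmf from ceil(n*p) up to n."""
    k0 = math.ceil(n * p)
    return sum(comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(k0, n + 1))

# Ten fair coin tosses: the mean is 5, and the tail at the mean
# includes k = 5 itself, so the probability is above 1/2.
value = binom_tail_at_mean(10, 0.5)
```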
From, Steven G.
2010-01-01
We present several new bounds for certain sums of deviation probabilities involving sums of nonnegative random variables. These are based upon upper bounds for the moment generating functions of the sums. We compare these new bounds to those of Maurer [2], Bernstein [4], Pinelis [16], and Bentkus [3]. We also briefly discuss the infinitely divisible distributions case.
Computer simulation of random variables and vectors with arbitrary probability distribution laws
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
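The one-dimensional special case of such a construction is the familiar inverse-transform method: if U is uniform on (0,1), then F^{-1}(U) has CDF F. A minimal Python sketch (function names and the exponential example are illustrative, not from the paper):

```python
import math
import random

def inverse_cdf_sample(inv_cdf, n, seed=None):
    """Sample n values by inverse transform: if U ~ Uniform(0,1),
    then inv_cdf(U) is distributed with CDF F, where inv_cdf = F^{-1}."""
    rng = random.Random(seed)
    return [inv_cdf(rng.random()) for _ in range(n)]

# Example: exponential with rate lam has F(x) = 1 - exp(-lam*x),
# so F^{-1}(u) = -ln(1 - u)/lam; the sample mean should be near 1/lam.
lam = 2.0
samples = inverse_cdf_sample(lambda u: -math.log(1.0 - u) / lam, 10_000, seed=1)
mean = sum(samples) / len(samples)
```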
Indian Academy of Sciences (India)
On Randomness and Probability: How to Mathematically Model Uncertain Events. Resonance – Journal of Science Education, Volume 1, Issue 2. Rajeeva L Karandikar, Statistics and Mathematics Unit, Indian Statistical Institute, 7 S J S Sansanwal Marg, New Delhi 110 016, India.
Indian Academy of Sciences (India)
casinos and gambling houses? How does one interpret a statement like "there is a 30 per cent chance of rain tonight" - a statement we often hear on the news? Such questions arise in the mind of every student when she/he is taught probability as part of mathematics. Many students who go on to study probability and ...
Free probability and random matrices
Mingo, James A
2017-01-01
This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.
Random phenomena fundamentals of probability and statistics for engineers
Ogunnaike, Babatunde A
2009-01-01
Prelude; Approach Philosophy; Four Basic Principles; I Foundations; Two Motivating Examples; Yield Improvement in a Chemical Process; Quality Assurance in a Glass Sheet Manufacturing Process; Outline of a Systematic Approach; Random Phenomena, Variability, and Uncertainty; Two Extreme Idealizations of Natural Phenomena; Random Mass Phenomena; Introducing Probability; The Probabilistic Framework; II Probability; Fundamentals of Probability Theory; Building Blocks; Operations; Probability; Conditional Probability; Independence; Random Variables and Distributions; Distributions; Mathematical Expectation; Characterizing Distributions; Special Derived Probability Functions; Multidimensional Random Variables; Distributions of Several Random Variables; Distributional Characteristics of Jointly Distributed Random Variables; Random Variable Transformations; Single Variable Transformations; Bivariate Transformations; General Multivariate Transformations; Application Case Studies I: Probability; Mendel and Heredity; World War II Warship Tactical Response Under Attack; III Distributions; Ide...
Students' Misconceptions about Random Variables
Kachapova, Farida; Kachapov, Ilias
2012-01-01
This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)
Probability of Failure in Random Vibration
DEFF Research Database (Denmark)
Nielsen, Søren R.K.; Sørensen, John Dalsgaard
1988-01-01
Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out...
Fundamentals of applied probability and random processes
Ibe, Oliver
2005-01-01
This book is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability to real-world problems, and introduce the basics of statistics. The book's clear writing style and homework problems make it ideal for the classroom or for self-study. * Good and solid introduction to probability theory and stochastic processes * Logically organized; writing is presented in a clear manner * Choice of topics is comprehensive within the area of probability * Ample homework problems are organized into chapter sections
Nonequilibrium random matrix theory: Transition probabilities
Pedro, Francisco Gil; Westphal, Alexander
2017-03-01
In this paper we present an analytic method for calculating the transition probability between two random Gaussian matrices with given eigenvalue spectra in the context of Dyson Brownian motion. We show that, in the Coulomb gas language, in the large-N limit, memory of the initial state is preserved in the form of a universal linear potential acting on the eigenvalues. We compute the likelihood of any given transition as a function of time, showing that as memory of the initial state is lost, transition probabilities converge to those of the static ensemble.
Voiculescu, Dan; Nica, Alexandru
1992-01-01
This book presents the first comprehensive introduction to free probability theory, a highly noncommutative probability theory with independence based on free products instead of tensor products. Basic examples of this kind of theory are provided by convolution operators on free groups and by the asymptotic behavior of large Gaussian random matrices. The probabilistic approach to free products has led to a recent surge of new results on the von Neumann algebras of free groups. The book is ideally suited as a textbook for an advanced graduate course and could also provide material for a seminar. In addition to researchers and graduate students in mathematics, this book will be of interest to physicists and others who use random matrices.
Negative probability of random multiplier in turbulence
Bai, Xuan; Su, Weidong
2017-11-01
The random multiplicative process (RMP), which was proposed over 50 years ago, is a convenient phenomenological ansatz for the turbulence cascade. In the RMP, the fluctuation at a large scale is statistically mapped to the one at a small scale by the linear action of an independent random multiplier (RM). Simple as it is, the RMP is powerful enough that all of the known scaling laws can be included in this model. So far as we know, however, a direct extraction of the probability density function (PDF) of the RM has been absent. The reason is that the deconvolution involved is ill-posed. Nevertheless, with progress in the study of inverse problems, the situation can be changed. Using some new regularization techniques, we recover for the first time the PDFs of the RMs in some turbulent flows. All the consistent results from various methods point to an amazing observation: the PDFs can attain negative values in some intervals, and this can also be justified by some properties of infinitely divisible distributions. Despite the conceptual unconventionality, the present study illustrates the implications of negative probability in turbulence in several aspects, with emphasis on its role in describing the interaction between fluctuations at different scales. This work is supported by the NSFC (No. 11221062 and No. 11521091).
Probability Distributions for Random Quantum Operations
Schultz, Kevin
Motivated by uncertainty quantification and inference of quantum information systems, in this work we draw connections between the notions of random quantum states and operations in quantum information and probability distributions commonly encountered in the field of orientation statistics. This approach identifies natural sample spaces and probability distributions upon these spaces that can be used in the analysis, simulation, and inference of quantum information systems. The theory of exponential families on Stiefel manifolds provides the appropriate generalization to the classical case. Furthermore, this viewpoint motivates a number of additional questions into the convex geometry of quantum operations relative to both the differential geometry of Stiefel manifolds as well as the information geometry of exponential families defined upon them. In particular, we draw on results from convex geometry to characterize which quantum operations can be represented as the average of a random quantum operation. This project was supported by the Intelligence Advanced Research Projects Activity via Department of Interior National Business Center Contract Number 2012-12050800010.
Probability, random processes, and ergodic properties
Gray, Robert M
1988-01-01
This book has been written for several reasons, not all of which are academic. This material was for many years the first half of a book in progress on information and ergodic theory. The intent was and is to provide a reasonably self-contained advanced treatment of measure theory, probability theory, and the theory of discrete time random processes with an emphasis on general alphabets and on ergodic and stationary properties of random processes that might be neither ergodic nor stationary. The intended audience was mathematically inclined engineering graduate students and visiting scholars who had not had formal courses in measure theoretic probability. Much of the material is familiar stuff for mathematicians, but many of the topics and results have not previously appeared in books. The original project grew too large and the first part contained much that would likely bore mathematicians and discourage them from the second part. Hence I finally followed the suggestion to separate the material and split...
Strong Decomposition of Random Variables
DEFF Research Database (Denmark)
Hoffmann-Jørgensen, Jørgen; Kagan, Abram M.; Pitt, Loren D.
2007-01-01
A random variable X is strongly decomposable if X=Y+Z where Y=Φ(X) and Z=X-Φ(X) are independent non-degenerated random variables (called the components). It is shown that at least one of the components is singular, and we derive a necessary and sufficient condition for strong decomposability of a discrete random variable.
Symmetrization of binary random variables
Kagan, Abram; Mallows, Colin L.; Shepp, Larry A.; Vanderbei, Robert J.; Vardi, Yehuda
1999-01-01
A random variable Y is called an independent symmetrizer of a given random variable X if (a) it is independent of X and (b) the distribution of X + Y is symmetric about 0. In cases where the distribution of X is symmetric about its mean, it is easy to see that the constant random variable Y = -E[X] is a minimum-variance independent symmetrizer. Taking Y to have the same distribution as -X clearly produces a symmetric sum, but it may not be of minimum variance...
Contextuality in canonical systems of random variables.
Dzhafarov, Ehtibar N; Cervantes, Víctor H; Kujala, Janne V
2017-11-13
Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue 'Second quantum revolution: foundational questions'. © 2017 The Author(s).
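For a pair of binary random variables, the maximal coupling mentioned above has a simple closed form: the best achievable agreement probability is 1 minus the total variation distance between the two marginals. A small illustrative sketch (function names are ours, not from the paper):

```python
def max_coupling_agreement(p, q):
    """Best achievable P(X = Y) over all couplings of Bernoulli(p)
    and Bernoulli(q): 1 minus the total variation distance |p - q|."""
    return 1.0 - abs(p - q)

def maximal_coupling_joint(p, q):
    """A joint pmf on {0,1}^2 achieving that maximum: put the shared
    mass on the diagonal and the remainder off-diagonal."""
    p11 = min(p, q)
    p00 = min(1.0 - p, 1.0 - q)
    return {(0, 0): p00, (0, 1): q - p11, (1, 0): p - p11, (1, 1): p11}

# Example with marginal success probabilities 0.7 and 0.4.
joint = maximal_coupling_joint(0.7, 0.4)
```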
Hybrid computer technique yields random signal probability distributions
Cameron, W. D.
1965-01-01
Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
Sharp Bounds by Probability-Generating Functions and Variable Drift
DEFF Research Database (Denmark)
Doerr, Benjamin; Fouz, Mahmoud; Witt, Carsten
2011-01-01
We introduce to the runtime analysis of evolutionary algorithms two powerful techniques: probability-generating functions and variable drift analysis. They are shown to provide a clean framework for proving sharp upper and lower bounds. As an application, we improve the results by Doerr et al...
Probability of stress-corrosion fracture under random loading
Yang, J. N.
1974-01-01
The mathematical formulation is based on a cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under stationary random loadings, the mean value and variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.
Non-equilibrium random matrix theory. Transition probabilities
Energy Technology Data Exchange (ETDEWEB)
Pedro, Francisco Gil [Univ. Autonoma de Madrid (Spain). Dept. de Fisica Teorica; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie
2016-06-15
In this letter we present an analytic method for calculating the transition probability between two random Gaussian matrices with given eigenvalue spectra in the context of Dyson Brownian motion. We show that, in the Coulomb gas language, in the large-N limit, memory of the initial state is preserved in the form of a universal linear potential acting on the eigenvalues. We compute the likelihood of any given transition as a function of time, showing that as memory of the initial state is lost, transition probabilities converge to those of the static ensemble.
Crossing probability for directed polymers in random media. II. Exact tail of the distribution.
De Luca, Andrea; Le Doussal, Pierre
2016-03-01
We study the probability p ≡ p_η(t) that two directed polymers in a given random potential η and with fixed and nearby endpoints do not cross until time t. This probability is itself a random variable (over samples η) which, as we show, acquires a very broad probability distribution at large time. In particular, the moments of p are found to be dominated by atypical samples where p is of order unity. Building on a formula established by us in a previous work using nested Bethe ansatz and Macdonald process methods, we obtain analytically the leading large-time behavior of all moments p^m ≃ γ_m/t. From this, we extract the exact tail ∼ ρ(p)/t of the probability distribution of the noncrossing probability at large time. The exact formula is compared to numerical simulations, with excellent agreement.
Problems in probability theory, mathematical statistics and theory of random functions
Sveshnikov, A A
1979-01-01
Problem solving is the main thrust of this excellent, well-organized workbook. Suitable for students at all levels in probability theory and statistics, the book presents over 1,000 problems and their solutions, illustrating fundamental theory and representative applications in the following fields: Random Events; Distribution Laws; Correlation Theory; Random Variables; Entropy & Information; Markov Processes; Systems of Random Variables; Limit Theorems; Data Processing; and more.The coverage of topics is both broad and deep, ranging from the most elementary combinatorial problems through lim
Reduction of the Random Variables of the Turbulent Wind Field
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.
2012-01-01
Applicability of the Probability Density Evolution Method (PDEM) for realizing the evolution of the probability density for wind turbines has rather strict bounds on the basic number of random variables involved in the model. The efficiency of most Advanced Monte Carlo (AMC) methods, i.e. Importance Sampling (IS) or Subset Simulation (SS), deteriorates on problems with many random variables. The problem with PDEM is that a multidimensional integral has to be carried out over the space defined by the random variables of the system. The numerical procedure requires discretization of the integral domain; this becomes increasingly difficult as the dimensions of the integral domain increase. On the other hand, the efficiency of the AMC methods is closely dependent on the design points of the problem. The presence of many random variables may increase the number of design points, hence affects...
Separation metrics for real-valued random variables
Directory of Open Access Journals (Sweden)
Michael D. Taylor
1984-01-01
If W is a fixed, real-valued random variable, then there are simple and easily satisfied conditions under which the function d_W, where d_W(X, Y) = the probability that W separates the real-valued random variables X and Y, turns out to be a metric. The observation was suggested by work done in [1].
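Taking "W separates X and Y" to mean that W falls strictly between them (an assumption; the paper's precise definition may differ), d_W can be estimated by Monte Carlo:

```python
import random

def separation_prob(sample_w, sample_x, sample_y, n=100_000, seed=0):
    """Monte Carlo estimate of d_W(X, Y) = P(W lies strictly between
    X and Y), drawing W, X, Y independently on each trial."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        w, x, y = sample_w(rng), sample_x(rng), sample_y(rng)
        if min(x, y) < w < max(x, y):
            hits += 1
    return hits / n

# W ~ Uniform(0,1); X = 0.2 and Y = 0.8 are constant random variables,
# so the separation probability is P(0.2 < W < 0.8) = 0.6.
d = separation_prob(lambda r: r.random(), lambda r: 0.2, lambda r: 0.8)
```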
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
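Two of the simpler distributions in such a library have closed forms in terms of standard functions. A Python analogue of the Gaussian and Weibull CDF routines (the names are illustrative, not the report's Fortran interfaces):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Gaussian CDF via the error function:
    Phi(x) = (1 + erf((x - mu) / (sigma * sqrt(2)))) / 2."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def weibull_cdf(x, shape, scale=1.0):
    """Weibull CDF: 1 - exp(-(x / scale)**shape) for x > 0, else 0."""
    return 1.0 - math.exp(-((x / scale) ** shape)) if x > 0 else 0.0
```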
Application of random match probability calculations to mixed STR profiles.
Bille, Todd; Bright, Jo-Anne; Buckleton, John
2013-03-01
Mixed DNA profiles are being encountered more frequently as laboratories analyze increasing amounts of touch evidence. If it is determined that an individual could be a possible contributor to the mixture, it is necessary to perform a statistical analysis to allow an assignment of weight to the evidence. Currently, the combined probability of inclusion (CPI) and the likelihood ratio (LR) are the most commonly used methods to perform the statistical analysis. A third method, random match probability (RMP), is available. This article compares the advantages and disadvantages of the CPI and LR methods to the RMP method. We demonstrate that although the LR method is still considered the most powerful of the binary methods, the RMP and LR methods make similar use of the observed data such as peak height, assumed number of contributors, and known contributors where the CPI calculation tends to waste information and be less informative. © 2013 American Academy of Forensic Sciences.
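Under Hardy-Weinberg assumptions, the CPI discussed above is computed per locus as the squared sum of the frequencies of the alleles observed in the mixture, multiplied across independent loci. A sketch with hypothetical allele frequencies (illustrative numbers, not real population data):

```python
from functools import reduce

def combined_probability_of_inclusion(loci_allele_freqs):
    """CPI: per locus, the chance that a random person carries only
    alleles seen in the mixture is (sum of those allele frequencies)**2
    under Hardy-Weinberg equilibrium; multiply across independent loci."""
    return reduce(lambda acc, freqs: acc * sum(freqs) ** 2,
                  loci_allele_freqs, 1.0)

# Two hypothetical loci: per-locus factors (0.1+0.2+0.05)**2 and (0.15+0.1)**2.
value = combined_probability_of_inclusion([[0.1, 0.2, 0.05], [0.15, 0.1]])
```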
Probability and Random Processes With Applications to Signal Processing and Communications
Miller, Scott
2012-01-01
Miller and Childers have focused on creating a clear presentation of foundational concepts with specific applications to signal processing and communications, clearly the two areas of most interest to students and instructors in this course. It is aimed at graduate students as well as practicing engineers, and includes unique chapters on narrowband random processes and simulation techniques. The appendices provide a refresher in such areas as linear algebra, set theory, random variables, and more. Probability and Random Processes also includes applications in digital communications, informati
Comets, F
2003-01-01
We consider a one-dimensional random walk in random environment in the Sinai's regime. Our main result is that logarithms of the transition probabilities, after a suitable rescaling, converge in distribution as time tends to infinity, to some functional of the Brownian motion. We compute the law of this functional when the initial and final points agree. Also, among other things, we estimate the probability of being at time~$t$ at distance at least $z$ from the initial position, when $z$ is larger than $\\ln^2 t$, but still of logarithmic order in time.
Haranas, Ioannis; Gkigkitzis, Ioannis; Zouganelis, George D.; Haranas, Maria K.; Kirk, Samantha
2014-11-01
In this chapter, we study sedimentation: the effect of gravitational acceleration on the sedimentation deposition probability, as well as the aerosol deposition rate, on the surface of the Earth and Mars, and also aboard a spacecraft in orbit around Earth and Mars, for particles with density ρ_p = 1,300 kg/m3, diameters d_p = 1, 3, 5 μm, and residence times t = 0.0272, 0.2 s, respectively. For particles of diameter 1 μm we find that, on the surface of Earth and Mars, the deposition probabilities are higher at the poles than at the equator. Similarly, on the surface of the Earth we find that the deposition probabilities exhibit 0.5 and 0.4 % higher percentage differences at the poles compared to the equator, for the corresponding residence times. Moreover, in orbit, equatorial orbits result in higher deposition probabilities than polar ones. For both residence times, for particles with the diameters considered above in circular and elliptical orbits around Mars, the deposition probabilities appear to be the same for all orbital inclinations. Sedimentation probability increases drastically with particle diameter and with the orbital eccentricity of the orbiting spacecraft. Finally, as an alternative framework for the study of the interaction and effect of gravity in biology, and in particular on the respiratory system, we introduce the term information in the sense Shannon introduced it, considering the sedimentation probability as a random variable. This can be thought of as a way in which gravity enters the cognitive processes of the system (processing of information) in the cybernetic sense.
Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286
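The same schedule values can be generated outside Excel. A Python sketch of random-interval and random-ratio generation (exponential intervals give a constant probability of reinforcement per unit time, and a geometric response count does the same per response; function names are ours):

```python
import random

def variable_interval_values(mean_interval, n, seed=None):
    """Random-interval schedule values: exponential waiting times with
    the given mean, so reinforcement becomes available with a constant
    probability per unit time."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_interval) for _ in range(n)]

def random_ratio_values(p, n, seed=None):
    """Random-ratio schedule values: responses until reinforcement when
    each response is reinforced with probability p (geometric counts)."""
    rng = random.Random(seed)
    values = []
    for _ in range(n):
        count = 1
        while rng.random() >= p:
            count += 1
        values.append(count)
    return values

vi = variable_interval_values(30.0, 20_000, seed=7)  # mean near 30 s
rr = random_ratio_values(0.2, 20_000, seed=7)        # mean near 5 responses
```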
DEFF Research Database (Denmark)
Rojas-Nandayapa, Leonardo
Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think....... By doing so, we will obtain a deeper insight into how events involving large values of sums of heavy-tailed random variables are likely to occur....
Probability on graphs random processes on graphs and lattices
Grimmett, Geoffrey
2018-01-01
This introduction to some of the principal models in the theory of disordered systems leads the reader through the basics, to the very edge of contemporary research, with the minimum of technical fuss. Topics covered include random walk, percolation, self-avoiding walk, interacting particle systems, uniform spanning tree, random graphs, as well as the Ising, Potts, and random-cluster models for ferromagnetism, and the Lorentz model for motion in a random medium. This new edition features accounts of major recent progress, including the exact value of the connective constant of the hexagonal lattice, and the critical point of the random-cluster model on the square lattice. The choice of topics is strongly motivated by modern applications, and focuses on areas that merit further research. Accessible to a wide audience of mathematicians and physicists, this book can be used as a graduate course text. Each chapter ends with a range of exercises.
Limit theorems for multi-indexed sums of random variables
Klesov, Oleg
2014-01-01
Presenting the first unified treatment of limit theorems for multiple sums of independent random variables, this volume fills an important gap in the field. Several new results are introduced, even in the classical setting, as well as some new approaches that are simpler than those already established in the literature. In particular, new proofs of the strong law of large numbers and the Hajek-Renyi inequality are detailed. Applications of the described theory include Gibbs fields, spin glasses, polymer models, image analysis and random shapes. Limit theorems form the backbone of probability theory and statistical theory alike. The theory of multiple sums of random variables is a direct generalization of the classical study of limit theorems, whose importance and wide application in science is unquestionable. However, to date, the subject of multiple sums has only been treated in journals. The results described in this book will be of interest to advanced undergraduates, graduate students and researchers who ...
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
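The logistic form referred to here can be sketched as follows; the intercept, coefficients, and predictor values are hypothetical placeholders, not the fitted values from the study:

```python
import math

def regeneration_probability(beta0, betas, xs):
    """Logistic model for the probability of obtaining regeneration at a
    specified density: p = 1 / (1 + exp(-(beta0 + sum(beta_i * x_i))))."""
    eta = beta0 + sum(b * x for b, x in zip(betas, xs))
    return 1.0 / (1.0 + math.exp(-eta))

# With no predictors and zero intercept the probability is 0.5;
# a positive linear predictor pushes it above 0.5.
p_base = regeneration_probability(0.0, [], [])
p_high = regeneration_probability(0.5, [1.2], [2.0])
```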
Probability theory and mathematical statistics for engineers
Pugachev, V S
1984-01-01
Probability Theory and Mathematical Statistics for Engineers focuses on the concepts of probability theory and mathematical statistics for finite-dimensional random variables. The publication first underscores the probabilities of events, random variables, and numerical characteristics of random variables. Discussions focus on canonical expansions of random vectors, second-order moments of random vectors, generalization of the density concept, entropy of a distribution, direct evaluation of probabilities, and conditional probabilities. The text then examines projections of random vector
Probabilistic graphs using coupled random variables
Nelson, Kenric P.; Barbu, Madalina; Scannell, Brian J.
2014-05-01
Neural network design has utilized flexible nonlinear processes which can mimic biological systems, but has suffered from a lack of traceability in the resulting network. Graphical probabilistic models ground network design in probabilistic reasoning, but the restrictions reduce the expressive capability of each node making network designs complex. The ability to model coupled random variables using the calculus of nonextensive statistical mechanics provides a neural node design incorporating nonlinear coupling between input states while maintaining the rigor of probabilistic reasoning. A generalization of Bayes rule using the coupled product enables a single node to model correlation between hundreds of random variables. A coupled Markov random field is designed for the inferencing and classification of UCI's MLR `Multiple Features Data Set' such that thousands of linear correlation parameters can be replaced with a single coupling parameter with just a (3%, 4%) reduction in (classification, inference) performance.
Fuzzy random variables — II. Algorithms and examples for the discrete case
Kwakernaak, H.
1979-01-01
The results obtained in part I of the paper are specialized to the case of discrete fuzzy random variables. A more intuitive interpretation is given of the notion of fuzzy random variables. Algorithms are derived for determining expectations, fuzzy probabilities, fuzzy conditional expectations and
PItcHPERFeCT: Primary Intracranial Hemorrhage Probability Estimation using Random Forests on CT.
Muschelli, John; Sweeney, Elizabeth M; Ullman, Natalie L; Vespa, Paul; Hanley, Daniel F; Crainiceanu, Ciprian M
2017-01-01
Intracerebral hemorrhage (ICH), where a blood vessel ruptures into areas of the brain, accounts for approximately 10-15% of all strokes. X-ray computed tomography (CT) scanning is widely used to assess the location and volume of these hemorrhages. Manual segmentation of the CT scan using planimetry by an expert reader is the gold standard for volume estimation, but is time-consuming and has within- and across-reader variability. We propose a fully automated segmentation approach using a random forest algorithm with features extracted from X-ray computed tomography (CT) scans. The Minimally Invasive Surgery plus rt-PA in ICH Evacuation (MISTIE) trial was a multi-site Phase II clinical trial that tested the safety of hemorrhage removal using recombinant-tissue plasminogen activator (rt-PA). For this analysis, we use 112 baseline CT scans from patients enrolled in the MISTIE trial, one CT scan per patient. ICH was manually segmented on these CT scans by expert readers. We derived a set of imaging predictors from each scan. Using 10 randomly selected scans, we used a first-pass voxel selection procedure based on quantiles of a set of predictors and then built 4 models estimating the voxel-level probability of ICH. The models used were: 1) logistic regression, 2) logistic regression with a penalty on the model parameters using LASSO, 3) a generalized additive model (GAM), and 4) a random forest classifier. The remaining 102 scans were used for model validation. For each validation scan, the model predicted the probability of ICH at each voxel. These voxel-level probabilities were then thresholded to produce binary segmentations of the hemorrhage. These masks were compared to the manual segmentations using the Dice Similarity Index (DSI) and the correlation of hemorrhage volumes between the two segmentations. We tested equality of median DSI using the Kruskal-Wallis test across the 4 models. We tested equality of the median DSI from sets of 2 models using a Wilcoxon
Maximal Inequalities for Dependent Random Variables
DEFF Research Database (Denmark)
Hoffmann-Jorgensen, Jorgen
2016-01-01
Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max_{1 ≤ k ≤ n} S_k (...
Fast Generation of Discrete Random Variables
Directory of Open Access Journals (Sweden)
George Marsaglia
2004-07-01
We describe two methods and provide C programs for generating discrete random variables with functions that are simple and fast, averaging ten times as fast as published methods and more than five times as fast as the fastest of those. We provide general procedures for implementing the two methods, as well as specific procedures for three of the most important discrete distributions: Poisson, binomial and hypergeometric.
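As a rough illustration of table-based generation (a simplified cousin of the table methods described above, not Marsaglia's actual C code), the Python sketch below trades memory for O(1) sampling; the `resolution` parameter and the Poisson truncation point are arbitrary choices for this example:

```python
import random
from math import exp, factorial

def build_table(pmf, resolution=1000):
    """Build a lookup table in which value v appears round(p_v * resolution)
    times; sampling is then a single uniform index into the table (O(1))."""
    table = []
    for value, p in pmf.items():
        table.extend([value] * round(p * resolution))
    return table

# Poisson(2) pmf, truncated at k = 10 (the neglected tail mass is negligible)
lam = 2.0
pmf = {k: exp(-lam) * lam**k / factorial(k) for k in range(11)}

table = build_table(pmf)
random.seed(1)
samples = [random.choice(table) for _ in range(10000)]
print(round(sum(samples) / len(samples), 1))  # close to the Poisson mean of 2
```

A finer `resolution` reduces the rounding error of the table at the cost of memory; Marsaglia's condensed-table methods achieve the same effect far more compactly.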
From gap probabilities in random matrix theory to eigenvalue expansions
Bothner, Thomas
2016-02-01
We present a method to derive asymptotics of eigenvalues for trace-class integral operators K: L²(J; dλ) → L²(J; dλ), acting on a single interval J ⊂ ℝ, which belong to the ring of integrable operators (Its et al 1990 Int. J. Mod. Phys. B 4 1003-37). Our emphasis lies on the behavior of the spectrum {λ_i(J)}_{i=0}^∞ of K as |J| → ∞ and i is fixed. We show that this behavior is intimately linked to the analysis of the Fredholm determinant det(I − γK)|_{L²(J)} as |J| → ∞ and γ ↑ 1 in a Stokes-type scaling regime. Concrete asymptotic formulae are obtained for the eigenvalues of the Airy and Bessel kernels in random matrix theory. Dedicated to Percy Deift and Craig Tracy on the occasion of their 70th birthdays.
Brémaud, Pierre
2017-01-01
The emphasis in this book is placed on general models (Markov chains, random fields, random graphs), universal methods (the probabilistic method, the coupling method, the Stein-Chen method, martingale methods, the method of types) and versatile tools (Chernoff's bound, Hoeffding's inequality, Holley's inequality) whose domain of application extends far beyond the present text. Although the examples treated in the book relate to the possible applications, in the communication and computing sciences, in operations research and in physics, this book is in the first instance concerned with theory. The level of the book is that of a beginning graduate course. It is self-contained, the prerequisites consisting merely of basic calculus (series) and basic linear algebra (matrices). The reader is not assumed to be trained in probability since the first chapters give in considerable detail the background necessary to understand the rest of the book.
Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to writ...
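A minimal Python sketch of the same idea outside Excel (the function names and parameter choices are illustrative, not the authors' macros): variable-ratio requirements drawn uniformly around a target mean, and random-ratio requirements generated by reinforcing each response with constant probability 1/mean:

```python
import random

def variable_ratio(mean_ratio, n_values, rng):
    """Sample n_values response requirements uniformly from 1..2*mean_ratio - 1,
    so the average requirement approximates mean_ratio."""
    return [rng.randint(1, 2 * mean_ratio - 1) for _ in range(n_values)]

def random_ratio(mean_ratio, n_values, rng):
    """Under a random-ratio schedule each response is reinforced with constant
    probability 1/mean_ratio, so requirements are geometrically distributed."""
    out = []
    for _ in range(n_values):
        count = 1
        while rng.random() >= 1.0 / mean_ratio:
            count += 1
        out.append(count)
    return out

rng = random.Random(42)
vr = variable_ratio(10, 1000, rng)
rr = random_ratio(10, 1000, rng)
print(sum(vr) / len(vr), sum(rr) / len(rr))  # both near the target mean of 10
```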
Fiedler, Daniela; Tröbst, Steffen; Harms, Ute
2017-01-01
Students of all ages face severe conceptual difficulties regarding key aspects of evolution—the central, unifying, and overarching theme in biology. Aspects strongly related to abstract “threshold” concepts like randomness and probability appear to pose particular difficulties. A further problem is the lack of an appropriate instrument for assessing students’ conceptual knowledge of randomness and probability in the context of evolution. To address this problem, we have developed two instruments, Randomness and Probability Test in the Context of Evolution (RaProEvo) and Randomness and Probability Test in the Context of Mathematics (RaProMath), that include both multiple-choice and free-response items. The instruments were administered to 140 university students in Germany, then the Rasch partial-credit model was applied to assess them. The results indicate that the instruments generate reliable and valid inferences about students’ conceptual knowledge of randomness and probability in the two contexts (which are separable competencies). Furthermore, RaProEvo detected significant differences in knowledge of randomness and probability, as well as evolutionary theory, between biology majors and preservice biology teachers. PMID:28572180
Concepts of probability theory
Pfeiffer, Paul E
1979-01-01
Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or mathematics. Includes problems with answers and six appendixes. 1965 edition.
Entropy power inequality for a family of discrete random variables
Sharma, Naresh; Muthukrishnan, Siddharth
2010-01-01
It is known that the Entropy Power Inequality (EPI) always holds if the random variables have a density. Not much work has been done to identify discrete distributions for which the inequality holds with the differential entropy replaced by the discrete entropy. Harremoës and Vignat showed that it holds for the pair (B(m,p), B(n,p)), m, n ∈ ℕ (where B(n,p) is a binomial distribution with n trials, each with success probability p), for p = 0.5. In this paper, we considerably expand the set of binomial distributions for which the inequality holds and, in particular, identify n_0(p) such that for all m, n ≥ n_0(p), the EPI holds for (B(m,p), B(n,p)). We further show that the EPI holds for discrete random variables that can be expressed as the sum of n independent identically distributed (IID) discrete random variables for large n.
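The inequality can be checked numerically for a small case. The sketch below computes exact discrete entropies from the binomial pmf and compares exp(2H) on both sides (the 1/(2πe) normalization of the entropy power cancels in the inequality); m = n = 10 with p = 0.5 is an arbitrary test case covered by Harremoës and Vignat's result:

```python
from math import comb, log, exp

def binomial_entropy(n, p):
    """Exact discrete (Shannon) entropy of B(n, p), in nats."""
    h = 0.0
    for k in range(n + 1):
        pk = comb(n, k) * p**k * (1 - p)**(n - k)
        if pk > 0:
            h -= pk * log(pk)
    return h

def entropy_power(h):
    # exp(2H) suffices: the 1/(2*pi*e) factor appears on both sides of the EPI
    return exp(2 * h)

m, n, p = 10, 10, 0.5
# X + Y ~ B(m + n, p) when X ~ B(m, p) and Y ~ B(n, p) are independent
lhs = entropy_power(binomial_entropy(m + n, p))
rhs = entropy_power(binomial_entropy(m, p)) + entropy_power(binomial_entropy(n, p))
print(lhs >= rhs)  # consistent with the p = 0.5 result cited above
```

The margin is quite thin for p = 0.5 (both sides are close to πen/2 for large n), which is part of what makes the discrete EPI delicate.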
Chandrasekar, A; Rakkiyappan, R; Cao, Jinde
2015-10-01
This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria that are suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural network is synchronized with mixed time-delay. The considered impulsive effects can be synchronized at partly unknown transition probabilities. Besides, a multiple integral approach is also proposed to strengthen the Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional is designed for handling the coupled neural network with mixed delay, and the impulsive synchronization criteria are then made solvable as a set of linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Generation of correlated finite alphabet waveforms using gaussian random variables
Jardak, Seifallah
2014-09-01
Correlated waveforms have a number of applications in different fields, such as radar and communication. It is very easy to generate correlated waveforms using infinite alphabets, but for some applications it is very challenging to use them in practice. Moreover, to generate infinite alphabet constant envelope correlated waveforms, the available research uses iterative algorithms, which are computationally very expensive. In this work, we propose simple novel methods to generate correlated waveforms using finite alphabet constant and non-constant-envelope symbols. To generate finite alphabet waveforms, the proposed method maps Gaussian random variables onto phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such mapping, the probability density function of the Gaussian random variable is divided into M regions, where M is the number of alphabets in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite alphabet symbols is derived. To generate equiprobable symbols, the area of each region is kept the same. If the requirement is to have each symbol with its own unique probability, the proposed scheme allows that as well. Although the proposed scheme is general, the main focus of this paper is to generate finite alphabet waveforms for multiple-input multiple-output radar, where correlated waveforms are used to achieve desired beampatterns. © 2014 IEEE.
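The equal-probability-region mapping is easy to sketch for PAM. Below is a hedged Python illustration (the function name and M = 4 are choices for this example, not the paper's code): the real line is cut at Gaussian quantiles i/M so that the M symbol indices come out equiprobable:

```python
import numpy as np
from statistics import NormalDist

def gaussian_to_pam(samples, M):
    """Map standard Gaussian samples onto M equiprobable PAM symbol indices.
    The real line is split into M regions of equal Gaussian probability mass;
    region boundaries are the Gaussian quantiles at i/M, i = 1..M-1."""
    nd = NormalDist()
    thresholds = [nd.inv_cdf(i / M) for i in range(1, M)]
    return np.searchsorted(thresholds, samples)

rng = np.random.default_rng(0)
x = rng.standard_normal(100000)
symbols = gaussian_to_pam(x, 4)
freqs = np.bincount(symbols, minlength=4) / len(symbols)
print(np.round(freqs, 2))  # each symbol occurs with probability close to 1/4
```

Unequal symbol probabilities, as mentioned in the abstract, would simply use unequal cumulative masses in place of i/M when computing the thresholds.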
Liu, Xian; Engel, Charles C
2012-12-20
Researchers often encounter longitudinal health data characterized with three or more ordinal or nominal categories. Random-effects multinomial logit models are generally applied to account for potential lack of independence inherent in such clustered data. When parameter estimates are used to describe longitudinal processes, however, random effects, both between and within individuals, need to be retransformed for correctly predicting outcome probabilities. This study attempts to go beyond existing work by developing a retransformation method that derives longitudinal growth trajectories of unbiased health probabilities. We estimated variances of the predicted probabilities by using the delta method. Additionally, we transformed the covariates' regression coefficients on the multinomial logit function, not substantively meaningful, to the conditional effects on the predicted probabilities. The empirical illustration uses the longitudinal data from the Asset and Health Dynamics among the Oldest Old. Our analysis compared three sets of the predicted probabilities of three health states at six time points, obtained from, respectively, the retransformation method, the best linear unbiased prediction, and the fixed-effects approach. The results demonstrate that neglect of retransforming random errors in the random-effects multinomial logit model results in severely biased longitudinal trajectories of health probabilities as well as overestimated effects of covariates on the probabilities. Copyright © 2012 John Wiley & Sons, Ltd.
Selection for altruism through random drift in variable size populations.
Houchmandzadeh, Bahram; Vallade, Marcel
2012-05-10
Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which is in contradiction with their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have 'hidden' advantages if the 'common good' produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency dependent fitness. Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of "selfish" alleles. In other terms, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a smaller fitness. Moreover, when populations are geographically structured, the altruists' advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained both by numerical and analytical calculations. Analytical results are obtained in the limit of large populations. The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable size populations. The model is a generalization of the original Fisher-Wright and Moran models where the carrying capacity depends on the number of altruists.
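As a toy illustration of fixation probabilities under random drift, here is a simulation of the classical fixed-size Moran model that the paper generalizes (the variable-carrying-capacity mechanism itself is not implemented here); for a neutral allele (r = 1) the fixation probability from i0 copies should be i0/N:

```python
import random

def moran_fixation(N, i0, r, trials, rng):
    """Estimate the fixation probability of a mutant allele with relative
    fitness r, starting from i0 copies in a Moran population of fixed size N."""
    fixed = 0
    for _ in range(trials):
        i = i0
        while 0 < i < N:
            # one fitness-weighted reproduction and one uniform death per step
            birth_is_mutant = rng.random() < (r * i) / (r * i + (N - i))
            death_is_mutant = rng.random() < i / N
            if birth_is_mutant and not death_is_mutant:
                i += 1
            elif death_is_mutant and not birth_is_mutant:
                i -= 1
        fixed += (i == N)
    return fixed / trials

rng = random.Random(42)
est = moran_fixation(10, 1, 1.0, 2000, rng)
print(round(est, 2))  # neutral case: theory predicts i0/N = 0.1
```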
Das, Jayajit; Mukherjee, Sayak; Hodge, Susan E
2015-07-01
A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution to Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
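The introductory example, X = Y1 + Y2 with independent U(0, 1) terms, can be checked by simulation against the known triangular density; this numpy sketch is illustrative only and does not implement the paper's MaxEnt method:

```python
import numpy as np

rng = np.random.default_rng(7)
y1 = rng.uniform(0.0, 1.0, 200000)
y2 = rng.uniform(0.0, 1.0, 200000)
x = y1 + y2

# The density of the sum of two independent U(0,1) variables is triangular:
# f(x) = x on [0, 1] and f(x) = 2 - x on [1, 2].
hist, edges = np.histogram(x, bins=20, range=(0, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
triangular = np.where(centers <= 1, centers, 2 - centers)
print(np.max(np.abs(hist - triangular)))  # small: histogram matches the density
```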
Probability and stochastic modeling
Rotar, Vladimir I
2012-01-01
Basic Notions: Sample Space and Events; Probabilities; Counting Techniques. Independence and Conditional Probability: Independence; Conditioning; The Borel-Cantelli Theorem. Discrete Random Variables: Random Variables and Vectors; Expected Value; Variance and Other Moments; Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables; The Law of Large Numbers; Conditional Expectation. Generating Functions, Branching Processes, Random Walk Revisited: Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk. Markov Chains: Definitions and Examples; Probability Distributions of Markov Chains; The First Step Analysis; Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity. Continuous Random Variables: Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case; Simulation; Distribution F...
Cool, Geneviève; Lebel, Alexandre; Sadiq, Rehan; Rodriguez, Manuel J
2015-12-01
The regional variability of the probability of occurrence of high total trihalomethane (TTHM) levels was assessed using multilevel logistic regression models that incorporate environmental and infrastructure characteristics. The models were structured in a three-level hierarchical configuration: samples (first level), drinking water utilities (DWUs, second level) and natural regions, an ecological hierarchical division from the Quebec ecological framework of reference (third level). They considered six independent variables: precipitation, temperature, source type, seasons, treatment type and pH. The average probability of TTHM concentrations exceeding the targeted threshold was 18.1%. The probability was influenced by seasons, treatment type, precipitation and temperature. The variance at all levels was significant, showing that the probability of TTHM concentrations exceeding the threshold is most likely to be similar if located within the same DWU and within the same natural region. However, most of the variance initially attributed to natural regions was explained by treatment types and clarified by spatial aggregation on treatment types. Nevertheless, even after controlling for treatment type, there was still significant regional variability of the probability of TTHM concentrations exceeding the threshold. Regional variability was particularly important for DWUs using chlorination alone since they lack the appropriate treatment required to reduce the amount of natural organic matter (NOM) in source water prior to disinfection. Results presented herein could be of interest to authorities in identifying regions with specific needs regarding drinking water quality and for epidemiological studies identifying geographical variations in population exposure to disinfection by-products (DBPs).
Maslennikova, Yu. S.; Nugmanov, I. S.
2016-08-01
The problem of probability density function estimation for a random process is one of the most common in practice. There are several methods to solve this problem. The laboratory work presented here uses methods of mathematical statistics to detect patterns in a realization of a random process. On the basis of ergodic theory, we construct an algorithm for estimating the univariate probability density function of a random process. Correlational analysis of realizations is applied to estimate the necessary sample size and observation time. Hypothesis testing for two probability distributions (normal and Cauchy) is performed on the experimental data using the χ² criterion. To facilitate understanding and clarity of the problem solved, we use the ELVIS II platform and the LabVIEW software package, which allow us to make the necessary calculations, display the results of the experiment and, most importantly, control the experiment. At the same time, students are introduced to the LabVIEW software package and its capabilities.
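A minimal numpy sketch of the workflow described (a histogram density estimate plus a χ² comparison against a hypothesized normal; the bin grid and sample size are arbitrary choices, and the ELVIS/LabVIEW instrumentation is not involved):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
data = rng.standard_normal(5000)  # stands in for one realization of the process

# Histogram-based estimate of the univariate distribution
bins = np.linspace(-4, 4, 17)
observed, _ = np.histogram(data, bins=bins)

# Expected counts under the hypothesized N(0, 1), via the Gaussian CDF
cdf = np.array([NormalDist().cdf(b) for b in bins])
expected = len(data) * np.diff(cdf)

mask = expected > 5  # usual rule of thumb: drop bins with tiny expected counts
chi2 = np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask])
print(chi2)  # compare against the chi-squared critical value for the retained bins
```

With roughly 14 retained bins, the statistic should sit near its degrees of freedom when the normal hypothesis is true, and blow up for heavy-tailed data such as Cauchy samples.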
Escape probability and mean residence time in random flows with unsteady drift
Directory of Open Access Journals (Sweden)
Brannan James R.
2001-01-01
We investigate fluid transport in random velocity fields with unsteady drift. First, we propose to quantify fluid transport between flow regimes of different characteristic motion, by escape probability and mean residence time. We then develop numerical algorithms to solve for escape probability and mean residence time, which are described by backward Fokker-Planck type partial differential equations. A few computational issues are also discussed. Finally, we apply these ideas and numerical algorithms to a tidal flow model.
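Escape probabilities of this kind can also be estimated directly by Monte Carlo rather than by solving the backward Fokker-Planck equation; the Euler-Maruyama sketch below is a hedged illustration with made-up parameters, using the fact that driftless Brownian motion started midway exits either side of an interval with probability 1/2:

```python
import random

def escape_right_probability(drift, sigma, x0, a, b, dt, trials, rng):
    """Monte Carlo estimate of the probability that dX = drift*dt + sigma*dW,
    started at x0, leaves the interval (a, b) through the right endpoint b
    (Euler-Maruyama discretization with step dt)."""
    right = 0
    for _ in range(trials):
        x = x0
        while a < x < b:
            x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        right += (x >= b)
    return right / trials

rng = random.Random(5)
# Zero drift, symmetric start: escape through b should occur with probability 1/2
p = escape_right_probability(0.0, 1.0, 0.5, 0.0, 1.0, 1e-3, 2000, rng)
print(round(p, 2))  # near 0.5
```

The same loop, with the sojourn time accumulated instead of the exit side, gives a mean-residence-time estimate.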
Approximations to the Probability of Failure in Random Vibration by Integral Equation Methods
DEFF Research Database (Denmark)
Nielsen, Søren R.K.; Sørensen, John Dalsgaard
Close approximations to the first passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first passage probability density function and the distribution function for the time interval spent below a barrier before outcrossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval, and hence for the first passage probability density. The results of the theory agree well with simulation results for narrow banded processes dominated by a single frequency, as well as for bimodal processes with 2 dominating frequencies in the structural response.
Introduction to probability with Mathematica
Hastings, Kevin J
2009-01-01
Discrete Probability: The Cast of Characters; Properties of Probability; Simulation; Random Sampling; Conditional Probability; Independence. Discrete Distributions: Discrete Random Variables, Distributions, and Expectations; Bernoulli and Binomial Random Variables; Geometric and Negative Binomial Random Variables; Poisson Distribution; Joint, Marginal, and Conditional Distributions; More on Expectation. Continuous Probability: From the Finite to the (Very) Infinite; Continuous Random Variables and Distributions; Continuous Expectation. Continuous Distributions: The Normal Distribution; Bivariate Normal Distribution; New Random Variables from Old; Order Statistics; Gamma Distributions; Chi-Square, Student's t, and F-Distributions; Transformations of Normal Random Variables. Asymptotic Theory: Strong and Weak Laws of Large Numbers; Central Limit Theorem. Stochastic Processes and Applications: Markov Chains; Poisson Processes; Queues; Brownian Motion; Financial Mathematics. Appendix: Introduction to Mathematica; Glossary of Mathematica Commands for Probability; Short Answers...
The probability of a random straight line in two and three dimensions
Beckers, A.L.D.; Smeulders, A.W.M.
1990-01-01
Using properties of shift- and rotation-invariance, probability density distributions are derived for random straight lines in normal representation. It is found that in two-dimensional space the distribution of the normal coordinates (r, phi) is uniform: p(r, phi) = c, where c is a normalisation
The random effects prep continues to mispredict the probability of replication
Iverson, G.J.; Lee, M.D.; Wagenmakers, E.-J.
2010-01-01
In their reply, Lecoutre and Killeen (2010) argue for a random effects version of prep, in which the observed effect from one experiment is used to predict the probability that an effect from a different but related experiment will have the same sign. They present a figure giving the impression that
HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA
Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...
Smith, Toni M.; Hjalmarson, Margret A.
2013-01-01
The purpose of this study is to examine prospective mathematics specialists' engagement in an instructional sequence designed to elicit and develop their understandings of random processes. The study was conducted with two different sections of a probability and statistics course for K-8 teachers. Thirty-two teachers participated. Video analyses…
The Common Information of N Dependent Random Variables
Liu, Wei; Chen, Biao
2010-01-01
This paper generalizes Wyner's definition of the common information of a pair of random variables to that of N random variables. We prove coding theorems showing that the operational meanings of the common information of two random variables generalize to that of N random variables. As a byproduct of our proof, we show that the Gray-Wyner source coding network can be generalized to N source sequences with N decoders. We also establish a monotone property of Wyner's common information, which is in contrast to other notions of common information, specifically Shannon's mutual information and Gács and Körner's common randomness. Examples of the computation of Wyner's common information of N random variables are also given.
New Results on the Sum of Two Generalized Gaussian Random Variables
Soury, Hamza
2016-01-06
We propose in this paper a new method to compute the characteristic function (CF) of a generalized Gaussian (GG) random variable in terms of the Fox H function. The CF of the sum of two independent GG random variables is then deduced. Based on these results, the probability density function (PDF) and the cumulative distribution function (CDF) of the sum distribution are obtained. These functions are expressed in terms of the bivariate Fox H function. Next, the statistics of the distribution of the sum, such as the moments, the cumulant, and the kurtosis, are analyzed and computed. Due to the complexity of the bivariate Fox H function, a solution to reduce such complexity is to approximate the sum of two independent GG random variables by one GG random variable with a suitable shape factor. The approximation method depends on the utility of the system, so three methods of estimating the shape factor are studied and presented [1].
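Simulating a GG variable does not require the Fox H machinery: the standard gamma representation |X/α|^β ~ Gamma(1/β, 1) (a well-known identity, not the paper's method) lets one sample the sum and sanity-check its moments. The β = 1 case below is the Laplace distribution; all parameter values are illustrative:

```python
import numpy as np
from math import gamma

def sample_gg(beta, alpha, size, rng):
    """Sample a generalized Gaussian variable (density proportional to
    exp(-|x/alpha|**beta)) via |X/alpha|**beta ~ Gamma(1/beta, 1)."""
    g = rng.gamma(1.0 / beta, 1.0, size)
    signs = rng.choice([-1.0, 1.0], size)
    return alpha * signs * g ** (1.0 / beta)

rng = np.random.default_rng(11)
beta, alpha = 1.0, 1.0  # beta = 1: Laplace; beta = 2: Gaussian
x = sample_gg(beta, alpha, 200000, rng)
y = sample_gg(beta, alpha, 200000, rng)
s = x + y

# Theoretical variance of one GG variable: alpha^2 * Gamma(3/beta) / Gamma(1/beta)
var_one = alpha**2 * gamma(3.0 / beta) / gamma(1.0 / beta)
print(round(s.var() / (2 * var_one), 2))  # near 1: variances add for independent terms
```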
Directory of Open Access Journals (Sweden)
Erden S.
2016-12-01
We establish a generalized pre-Grüss inequality for local fractional integrals. Then we obtain some inequalities involving the generalized expectation, p-moment, variance, and cumulative distribution function of a random variable whose probability density function is bounded. Finally, some applications of the generalized Ostrowski-Grüss inequality in numerical integration are given.
Fuzzy random variables — I. definitions and theorems
Kwakernaak, H.
1978-01-01
Fuzziness is discussed in the context of multivalued logic, and a corresponding view of fuzzy sets is given. Fuzzy random variables are introduced as random variables whose values are not real but fuzzy numbers, and subsequently redefined as a particular kind of fuzzy set. Expectations of fuzzy
On complete moment convergence for nonstationary negatively associated random variables
Directory of Open Access Journals (Sweden)
Mi-Hwa Ko
2016-05-01
The purpose of this paper is to establish the complete moment convergence for nonstationary negatively associated random variables satisfying the weak mean domination condition. The result is an improvement of complete convergence in the Marcinkiewicz-Zygmund-type SLLN for negatively associated random variables in Kuczmaszewska (Acta Math. Hung. 128:116-130, 2010).
Characterizations of Distributions of Ratios of Certain Independent Random Variables
Directory of Open Access Journals (Sweden)
Hamedani G.G.
2013-05-01
Full Text Available Various characterizations of the distributions of the ratio of two independent gamma and exponential random variables, as well as that of two independent Weibull random variables, are presented. These characterizations are based on a simple relationship between two truncated moments; on the hazard function; and on functions of order statistics.
On the product and ratio of Bessel random variables
Directory of Open Access Journals (Sweden)
Saralees Nadarajah
2005-01-01
Full Text Available The distributions of products and ratios of random variables are of interest in many areas of the sciences. In this paper, the exact distributions of the product |XY| and the ratio |X/Y| are derived when X and Y are independent Bessel function random variables. An application of the results is provided by tabulating the associated percentage points.
Fortran code for generating random probability vectors, unitaries, and quantum states
Directory of Open Access Journals (Sweden)
Jonas eMaziero
2016-03-01
Full Text Available The usefulness of generating random configurations is recognized in many areas of knowledge. Fortran was born for scientific computing and has been one of the main programming languages in this area ever since, and several ongoing projects aimed at its improvement indicate that it will keep this status in the decades to come. In this article, we describe Fortran codes produced, or organized, for the generation of the following random objects: numbers, probability vectors, unitary matrices, and quantum state vectors and density matrices. Some matrix functions are also included and may be of independent interest.
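Two of the objects listed above have well-known sampling recipes that can be sketched compactly (in Python here rather than the article's Fortran): a uniformly random probability vector via normalized exponential draws, and a Haar-random unitary via the QR decomposition of a complex Ginibre matrix with a phase correction. This is a generic sketch, not the article's code.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_probability_vector(n):
    # Normalized exponential samples give a flat-Dirichlet (uniform) probability vector
    w = rng.exponential(size=n)
    return w / w.sum()

def random_unitary(n):
    # QR of a complex Ginibre matrix; rescaling columns by the phases of
    # diag(R) makes the distribution Haar (uniform over the unitary group)
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

p = random_probability_vector(4)
u = random_unitary(4)
print(p.sum())                                 # 1.0 up to rounding
print(np.allclose(u.conj().T @ u, np.eye(4)))  # True
```

A random density matrix can then be built, for instance, as `v @ np.diag(p) @ v.conj().T` with `v = random_unitary(4)`.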
PaCAL: A Python Package for Arithmetic Computations with Random Variables
Directory of Open Access Journals (Sweden)
Marcin Korzeń
2014-05-01
Full Text Available In this paper we present PaCAL, a Python package for arithmetical computations on random variables. The package is capable of performing the four arithmetic operations: addition, subtraction, multiplication and division, as well as computing many standard functions of random variables. Summary statistics, random number generation, plots, and histograms of the resulting distributions can easily be obtained, and distribution parameter fitting is also available. The operations are performed numerically and their results interpolated, allowing for arbitrary arithmetic operations on random variables following practically any probability distribution encountered in practice. The package is easy to use, as operations on random variables are performed just as they are on standard Python variables. Independence of random variables is, by default, assumed at each step, but some computations on dependent random variables are also possible. We demonstrate on several examples that the results are very accurate, often close to machine precision. Practical applications include statistics, physical measurements, and estimation of error distributions in scientific computations.
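The kind of distribution arithmetic PaCAL performs numerically can be illustrated by brute-force sampling (a sketch with plain numpy, not the PaCAL API, which instead manipulates densities and interpolates them to near machine precision): the sum of two independent Uniform(0,1) variables has a triangular density on [0, 2].

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Arithmetic on random variables, approximated by sampling:
# S = U1 + U2 for independent Uniform(0,1) variables is triangular on [0, 2]
u1 = rng.uniform(0.0, 1.0, n)
u2 = rng.uniform(0.0, 1.0, n)
s = u1 + u2

# The triangular CDF at the midpoint gives P(S <= 1) = 0.5
print(np.mean(s <= 1.0))  # close to 0.5
```

The Monte Carlo error here decays only as n^(-1/2), which is exactly why a density-based package like PaCAL, with deterministic numerical convolution, attains far higher accuracy.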
Directory of Open Access Journals (Sweden)
Julio Michael Stern
2011-04-01
Full Text Available This article analyzes the role of entropy in Bayesian statistics, focusing on its use as a tool for detection, recognition and validation of eigen-solutions. “Objects as eigen-solutions” is a key metaphor of the cognitive constructivism epistemological framework developed by the philosopher Heinz von Foerster. Special attention is given to some objections to the concepts of probability, statistics and randomization posed by George Spencer-Brown, a figure of great influence in the field of radical constructivism.
Directory of Open Access Journals (Sweden)
Mohamed Younes
Full Text Available Nearly 50% of the horses participating in endurance events are eliminated at a veterinary examination (a vet gate). Detecting unfit horses before a health problem occurs and treatment is required is a challenge for veterinarians but is essential for improving equine welfare. We hypothesized that it would be possible to detect unfit horses earlier in the event by measuring heart rate recovery variables. Hence, the objective of the present study was to compute logistic regressions of heart rate, cardiac recovery time and average speed data recorded at the previous vet gate (n-1) and thus predict the probability of elimination during successive phases (n and following) in endurance events. Speed and heart rate data were extracted from an electronic database of endurance events (80-160 km in length) organized in four countries. Overall, 39% of the horses that started an event were eliminated, mostly due to lameness (64%) or metabolic disorders (15%). For each vet gate, logistic regressions of explanatory variables (average speed, cardiac recovery time and heart rate measured at the previous vet gate) and categorical variables (age and/or event distance) were computed to estimate the probability of elimination. The predictive logistic regressions for vet gates 2 to 5 correctly classified between 62% and 86% of the eliminated horses. The robustness of these results was confirmed by high areas under the receiver operating characteristic curves (0.68-0.84). Overall, a horse has a 70% chance of being eliminated at the next gate if its cardiac recovery time is longer than 11 min at vet gate 1 or 2, or longer than 13 min at vet gate 3 or 4. Heart rate recovery and average speed variables measured at the previous vet gate(s) enabled us to predict elimination at the following vet gate. These variables should be checked at each veterinary examination in order to detect unfit horses as early as possible. Our predictive method may help to improve equine welfare and
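The study's core tool, a logistic regression of elimination on previous-gate variables, can be sketched end to end on synthetic data. Everything below (coefficient values, variable ranges) is hypothetical and only illustrates the model form, fitted here by plain Newton-Raphson so no modeling library is needed.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

# Hypothetical predictors measured at the previous vet gate
recovery = rng.uniform(3, 20, n)      # cardiac recovery time (min)
speed = rng.uniform(10, 25, n)        # average speed (km/h)
X = np.column_stack([np.ones(n), recovery, speed])

# Simulated outcome: longer recovery and higher speed raise elimination risk
true_beta = np.array([-6.0, 0.4, 0.1])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

# Fit by Newton-Raphson (iteratively reweighted least squares)
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    w = p * (1 - p)
    beta += np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))

# Predicted elimination probability for an 11-min recovery at 18 km/h
p_new = 1 / (1 + np.exp(-np.array([1.0, 11.0, 18.0]) @ beta))
print(round(p_new, 2))
```

With these made-up coefficients the predicted risk near an 11-minute recovery is roughly even odds, loosely echoing the paper's 11-minute threshold at early gates.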
Exponential Inequalities for Positively Associated Random Variables and Applications
Directory of Open Access Journals (Sweden)
Yang Shanchao
2008-01-01
Full Text Available Abstract We establish some exponential inequalities for positively associated random variables without the boundedness assumption. These inequalities improve the corresponding results obtained by Oliveira (2005). By one of the inequalities, we obtain the convergence rate for the case of geometrically decreasing covariances, which is close to the optimal achievable convergence rate for independent random variables under the Hartman-Wintner law of the iterated logarithm and improves the convergence rate derived by Oliveira (2005) for that case.
Zhan, Y; Giorgetti, L; Tiana, G
2016-09-01
Random heteropolymers are a minimal description of biopolymers and can provide a theoretical framework for investigating the formation of loops in biophysical experiments. The looping probability as a function of polymer length was observed to display anomalous scaling exponents in some biopolymers, such as chromosomes in cell nuclei or long RNA chains. Combining a two-state model with self-adjusting simulated-tempering calculations, we calculate numerically the looping properties of several realizations of the random interactions within the chain. We find a continuous set of exponents upon varying the temperature, which arises from finite-size effects and is amplified by the disorder of the interactions. We suggest that this could provide a simple explanation for the anomalous scaling exponents found in experiments. In addition, our results have important implications, notably for the study of chromosome folding, as they show that scaling exponents cannot be the sole criterion for testing hypothesis-driven models of chromosome architecture.
Cerniglia, M. C.; Douglass, A. R.; Rood, R. B.; Sparling, L. C.; Nielsen, J. E.
1999-01-01
We present a study of the distribution of ozone in the lowermost stratosphere with the goal of understanding the relative contribution to the observations of air of either distinctly tropospheric or stratospheric origin. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High [low] potential vorticity at 300 hPa suggests that the tropopause is low [high], and the identification of the two groups helps to account for dynamic variability. Conditional probability distribution functions are used to define the statistics of the mix from both observations and model simulations. Two data sources are chosen. First, several years of ozonesonde observations are used to exploit the high vertical resolution. Second, observations made by the Halogen Occultation Experiment (HALOE) on the Upper Atmosphere Research Satellite (UARS) are used to understand the impact on the results of the spatial limitations of the ozonesonde network. The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause (approximately 380 K). Despite the differences in spatial and temporal sampling, the probability distribution functions are similar for the two data sources. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. By using the model, possible mechanisms for the maintenance of the mix of air in the lowermost stratosphere are revealed. The relevance of the results to the assessment of the environmental impact of aircraft effluence is discussed.
Randomly weighted sums of subexponential random variables with application to ruin theory
Tang, Q.; Tsitsiashvili, G.
2003-01-01
Let {X_k, 1 ≤ k ≤ n} be n independent and real-valued random variables with common subexponential distribution function, and let {θ_k, 1 ≤ k ≤ n} be n other random variables, independent of {X_k, 1 ≤ k ≤ n} and satisfying a ≤ θ_k ≤ b for some 0 < a ≤ b < ∞ and all 1 ≤ k ≤ n. This paper proves that the asymptotic relations P
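The hallmark result for subexponential summands, that the tail of the (weighted) sum is asymptotically governed by its largest term, can be probed by Monte Carlo. The sketch below is illustrative, not the paper's proof: it uses Pareto summands (a subexponential family) with bounded uniform weights, and compares the tail of the weighted sum against the probability that at least one weighted term exceeds the threshold.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 5, 1_000_000
a, b = 0.5, 1.0

# Heavy-tailed (Pareto, alpha = 2, minimum 1) summands: a subexponential family
X = rng.pareto(2.0, size=(trials, n)) + 1.0
theta = rng.uniform(a, b, size=(trials, n))   # bounded random weights

x = 80.0
lhs = np.mean((theta * X).sum(axis=1) > x)       # P(sum_k theta_k X_k > x)
rhs = np.mean((theta * X > x).any(axis=1))       # P(max_k theta_k X_k > x)

# For subexponential summands these tails are asymptotically equal;
# at finite x the sum's tail sits modestly above the max's tail
print(lhs / rhs)
```

Since all terms are positive, the sum always dominates the maximum, so the ratio is at least 1 and shrinks toward 1 as the threshold x grows, which is the "single big jump" picture behind the ruin-theory application.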
Murdock, John E; Petraco, Nicholas D K; Thornton, John I; Neel, Michael T; Weller, Todd J; Thompson, Robert M; Hamby, James E; Collins, Eric R
2017-05-01
The field of firearms and toolmark analysis has encountered deep scrutiny of late, stemming from a handful of voices, primarily in the law and statistical communities. While strong scrutiny is a healthy and necessary part of any scientific endeavor, much of the current criticism leveled at firearm and toolmark analysis is, at best, misinformed and, at worst, punditry. One of the most persistent criticisms stems from the view that as the field lacks quantified random match probability data (or at least a firm statistical model) with which to calculate the probability of a false match, all expert testimony concerning firearm and toolmark identification or source attribution is unreliable and should be ruled inadmissible. However, this critique does not stem from the hard work of actually obtaining data and performing the scientific research required to support or reject current findings in the literature. Although there are sound reasons (described herein) why there is currently no unifying probabilistic model for the comparison of striated and impressed toolmarks as there is in the field of forensic DNA profiling, much statistical research has been, and continues to be, done to aid the criminal justice system. This research has thus far shown that error rate estimates for the field are very low, especially when compared to other forms of judicial error. The first purpose of this paper is to point out the logical fallacies in the arguments of a small group of pundits, who advocate a particular viewpoint but cloak it as fact and research. The second purpose is to give a balanced review of the literature regarding random match probability models and statistical applications that have been carried out in forensic firearm and toolmark analysis. © 2017 American Academy of Forensic Sciences.
Couso, Inés; Sánchez, Luciano
2014-01-01
This short book provides a unified view of the history and theory of random sets and fuzzy random variables, with special emphasis on their use for representing higher-order non-statistical uncertainty about statistical experiments. The authors lay bare the existence of two streams of work using the same mathematical ground but differing in their use of sets, according to whether the sets represent objects of interest naturally taking the form of sets, or imprecise knowledge about such objects. Random (fuzzy) sets can be used in many fields, ranging from mathematical morphology, economics, artificial intelligence, and information processing to statistics per se, especially in areas where the outcomes of random experiments cannot be observed with full precision. This book also emphasizes the link between random sets and fuzzy sets with some techniques related to the theory of imprecise probabilities. This small book is intended for graduate and doctoral students in mathematics or engineering, but also provides an i...
Directory of Open Access Journals (Sweden)
Hideki Katagiri
2017-10-01
Full Text Available This paper considers linear programming problems (LPPs where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables. New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.
Frič, Roman; Papčo, Martin
2017-12-01
Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one —a quantum phenomenon and, dually, an observable can map a crisp random event to a genuine fuzzy random event —a fuzzy phenomenon. The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.
Directory of Open Access Journals (Sweden)
Alexander Kramida
2014-04-01
Full Text Available This paper suggests a method of evaluating uncertainties in calculated transition probabilities by randomly varying parameters of an atomic code and comparing the results. A control code has been written to randomly vary the input parameters with a normal statistical distribution around initial values with a certain standard deviation. For this particular implementation, Cowan's suite of atomic codes (R.D. Cowan, The Theory of Atomic Structure and Spectra, Berkeley, CA: University of California Press, 1981) was used to calculate radiative rates of magnetic-dipole and electric-quadrupole transitions within the ground configuration of titanium-like iron, Fe V. The Slater parameters used in the calculations were adjusted to fit experimental energy levels with Cowan's least-squares fitting program, RCE. The standard deviations of the fitted parameters were used as input of the control code, providing the distribution widths of random trials for these parameters. Propagation of errors through the matrix diagonalization and summation of basis state expansions leads to significant variations in the resulting transition rates. These variations vastly differ in magnitude for different transitions, depending on their sensitivity to errors in the parameters. With this method, the rate uncertainty can be individually assessed for each calculated transition.
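The control-code idea, drawing the input parameters from normal distributions whose widths are the least-squares standard deviations, then reading the spread of the outputs as the uncertainty, is generic Monte Carlo error propagation. The sketch below uses a stand-in function in place of an atomic-structure code; the parameter values and sigmas are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_rate(p1, p2):
    # Stand-in for an atomic-structure calculation: any deterministic,
    # nonlinear function of the fitted parameters
    return p1**2 / (p1 + p2)

# Fitted parameter values and their least-squares standard deviations (hypothetical)
mean = np.array([10.0, 4.0])
sigma = np.array([0.5, 0.3])

# Randomly vary the inputs with a normal distribution and collect the outputs
samples = rng.normal(mean, sigma, size=(20_000, 2))
rates = transition_rate(samples[:, 0], samples[:, 1])

print(rates.mean())  # central value, near 10^2 / 14
print(rates.std())   # propagated one-sigma uncertainty of the rate
```

Unlike linearized error propagation, this captures the full nonlinear response, which is why, as the abstract notes, the resulting uncertainties differ strongly from transition to transition.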
Grimmett, Geoffrey
2014-01-01
Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...
An analysis of noise reduction in variable reluctance motors using pulse position randomization
Smoot, Melissa C.
1994-05-01
The design and implementation of a control system to introduce randomization into the control of a variable reluctance motor (VRM) is presented. The goal is to reduce noise generated by radial vibrations of the stator. Motor phase commutation angles are dithered by 1 or 2 mechanical degrees to investigate the effect of randomization on acoustic noise. VRM commutation points are varied using a uniform probability density function and a 4 state Markov chain among other methods. The theory of VRM and inverter operation and a derivation of the major source of acoustic noise are developed. The experimental results show the effects of randomization. Uniform dithering and Markov chain dithering both tend to spread the noise spectrum, reducing peak noise components. No clear evidence is found to determine which is the optimum randomization scheme. The benefit of commutation angle randomization in reducing VRM loudness as perceived by humans is found to be questionable.
Energy Technology Data Exchange (ETDEWEB)
Kwok Sau Fa [Departamento de Fisica, Universidade Estadual de Maringa, Av. Colombo 5790, 87020-900 Maringa-PR (Brazil); Joni Fat, E-mail: kwok@dfi.uem.br [Jurusan Teknik Elektro-Fakultas Teknik, Universitas Tarumanagara, Jl. Let. Jend. S. Parman 1, Blok L, Lantai 3 Grogol, Jakarta 11440 (Indonesia)
2011-10-15
We consider the decoupled continuous-time random walk model with a finite characteristic waiting time and approximate jump length variance. We take the waiting time probability density function (PDF) given by a combination of the exponential and the Mittag-Leffler function. Using this waiting time PDF, we investigate the diffusion behavior for all times. We obtain exact solutions for the first two moments and the PDF for the force-free and linear force cases. Due to the finite characteristic waiting time and jump length variance, the model presents, for the force-free case, normal diffusive behavior in the long-time limit. Further, the model can describe anomalous behavior at intermediate times.
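The long-time limit claimed above, normal diffusion whenever the waiting time has a finite mean and the jumps a finite variance, can be checked by direct simulation. This sketch uses purely exponential waiting times (omitting the paper's Mittag-Leffler component, which shapes the intermediate-time behavior) and Gaussian jumps in the force-free case.

```python
import numpy as np

rng = np.random.default_rng(11)
walkers, t_max = 10_000, 50.0

# Decoupled CTRW: exponential waiting times (mean 1) and Gaussian jumps
# (variance 1); waiting times and jump lengths are drawn independently
positions = np.zeros(walkers)
for w in range(walkers):
    t, x = 0.0, 0.0
    while True:
        t += rng.exponential(1.0)      # waiting time
        if t > t_max:
            break
        x += rng.normal(0.0, 1.0)      # jump length
    positions[w] = x

# Finite mean waiting time tau and jump variance sigma^2 give normal
# diffusion at long times: <x^2> ~ (sigma^2 / tau) * t
print(np.mean(positions**2) / t_max)   # close to 1
```

Replacing part of the waiting-time density with a Mittag-Leffler component would slow the decay of the waiting-time tail and produce the anomalous intermediate-time regime described in the abstract, before the same linear law takes over.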
rft1d: Smooth One-Dimensional Random Field Upcrossing Probabilities in Python
Directory of Open Access Journals (Sweden)
Todd C. Pataky
2016-07-01
Full Text Available Through topological expectations regarding smooth, thresholded n-dimensional Gaussian continua, random field theory (RFT describes probabilities associated with both the field-wide maximum and threshold-surviving upcrossing geometry. A key application of RFT is a correction for multiple comparisons which affords field-level hypothesis testing for both univariate and multivariate fields. For unbroken isotropic fields just one parameter in addition to the mean and variance is required: the ratio of a field's size to its smoothness. Ironically the simplest manifestation of RFT (1D unbroken fields has rarely surfaced in the literature, even during its foundational development in the late 1970s. This Python package implements 1D RFT primarily for exploring and validating RFT expectations, but also describes how it can be applied to yield statistical inferences regarding sets of experimental 1D fields.
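The field-wide maximum probability that rft1d computes analytically can be estimated by brute force for intuition: generate smooth 1D Gaussian fields as Gaussian-filtered white noise (the FWHM sets the size-to-smoothness ratio) and count how often the maximum survives a threshold. This is a validation-style sketch with assumed field size and smoothness, not the package's API.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(2)
n_nodes, n_fields, fwhm = 101, 10_000, 15.0

# Smooth 1D Gaussian fields: Gaussian-filtered white noise, rescaled to unit variance
sd = fwhm / np.sqrt(8 * np.log(2))   # filter sd corresponding to the FWHM
z = gaussian_filter1d(rng.standard_normal((n_fields, n_nodes)), sd,
                      axis=1, mode='wrap')
z /= z.std()

u = 3.0
p_max = np.mean(z.max(axis=1) > u)   # P(field-wide maximum exceeds u)
print(p_max)
```

Note that this probability is roughly an order of magnitude larger than the single-node tail P(z > 3) ≈ 0.00135, which is precisely the multiple-comparisons inflation that the RFT correction accounts for.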
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2017-12-01
Despite the development of numerous predictive microbial inactivation models, a model focusing on the variability in time to inactivation for a bacterial population has not been developed. Additionally, an appropriate estimation of the risk of there being any remaining bacterial survivors in foods after the application of an inactivation treatment has not yet been established. Here, the Gamma distribution, as a representative probability distribution, was used to estimate the variability in time to inactivation for a bacterial population. Salmonella enterica serotype Typhimurium was evaluated for survival in a low relative humidity environment. We prepared bacterial cells with an initial concentration that was adjusted to 2 × 10^n colony-forming units/2 μl (n = 1, 2, 3, 4, 5) by performing a serial 10-fold dilution, and then we placed 2 μl of the inocula into each well of 96-well microplates. The microplates were stored in a desiccated environment at 10-20% relative humidity at 5, 15, or 25 °C. The survival or death of bacterial cells for each well in the 96-well microplate was confirmed by adding tryptic soy broth as an enrichment culture. The changes in the death probability of the 96 replicated bacterial populations were described as a cumulative Gamma distribution. The variability in time to inactivation was described by transforming the cumulative Gamma distribution into a Gamma distribution. We further examined the bacterial inactivation on almond kernels and radish sprout seeds. Additionally, we described certainty levels of bacterial inactivation that ensure the death probability of a bacterial population at six decimal reduction levels, ranging from 90 to 99.9999%. Consequently, the probability model developed in the present study enables us to estimate the death probability of bacterial populations in a desiccated environment over time. This probability model may be useful for risk assessment to estimate the amount of remaining bacteria in a given
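Once the Gamma model is fitted, the certainty levels described above reduce to evaluating the Gamma CDF and its inverse. The parameters below are hypothetical placeholders, not the paper's fitted values, and only show the mechanics of reading off a 99.9999% (six-decimal-reduction) certainty level.

```python
from scipy.stats import gamma

# Hypothetical Gamma model for variability in time to inactivation:
# the death probability of a cell by time t is the Gamma CDF at t
shape, scale = 4.0, 10.0   # assumed fitted parameters (time in days)

# Storage time at which the death probability reaches 99.9999%,
# i.e. the six-decimal-reduction certainty level
t_6log = gamma.ppf(0.999999, shape, scale=scale)
print(round(t_6log, 1), "days")

# Death probability after 60 days of desiccated storage
print(round(gamma.cdf(60.0, shape, scale=scale), 4))   # ~0.8488 for these parameters
```

The gap between the mean inactivation time (shape × scale = 40 days here) and the six-log certainty time illustrates why a variability model, rather than a mean-only model, is needed to bound the risk of survivors.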
Energy Technology Data Exchange (ETDEWEB)
Hannachi, A. [University of Reading, Department of Meteorology, Earley Gate, PO Box 243, Reading (United Kingdom)
2006-08-15
Robust tools are presented in this manuscript to assess changes in the probability density function (pdf) of climate variables. The approach is based on order statistics and aims at computing, along with their standard errors, changes in various quantiles and related statistics. The technique, which is nonparametric and simple to compute, is developed for both independent and dependent data. For autocorrelated data, serial correlation is addressed via Monte Carlo simulations using various autoregressive models. The ratio between the average standard errors, over several quantiles, of quantile estimates for correlated and independent data is then computed. A simple scaling-law type relationship is found to hold between this ratio and the lag-1 autocorrelation. The approach has been applied to winter monthly Central England Temperature (CET) and North Atlantic Oscillation (NAO) time series from 1659 to 1999 to assess/quantify changes in various parameters of their pdf. For the CET, significant changes in median (or scale) and also in low and high quantiles are observed between various time slices, in particular between the pre- and post-industrial revolution. Observed changes in spread and also quartile skewness of the pdf, however, are not statistically significant (at the 95% confidence level). For the NAO index we find mainly large significant changes in variance (or scale), yielding significant changes in low/high quantiles. Finally, the performance of the method compared to a few conventional approaches is discussed. (orig.)
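The first ingredient of such an approach, quantile estimates with standard errors for independent data, can be sketched by checking a Monte Carlo standard error against the classical asymptotic formula for a sample quantile, SE = sqrt(p(1-p)/n)/f(q). This generic sketch (Gaussian data, median) is not the manuscript's method, which additionally handles serial correlation.

```python
import numpy as np

rng = np.random.default_rng(8)
n, reps = 341, 2_000   # e.g. 341 winters of a monthly series

# Monte Carlo standard error of the sample median of an independent Gaussian series
medians = np.array([np.median(rng.standard_normal(n)) for _ in range(reps)])
se_mc = medians.std()

# Asymptotic order-statistics formula: SE(median) = 1 / (2 f(q) sqrt(n)),
# with f the density at the quantile (here the standard normal pdf at 0)
f_at_median = 1 / np.sqrt(2 * np.pi)
se_asym = 1 / (2 * f_at_median * np.sqrt(n))

print(se_mc, se_asym)  # the two agree closely
```

For autocorrelated data both values inflate; the manuscript's scaling-law relation predicts the inflation factor from the lag-1 autocorrelation alone.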
Ironside, Kirsten E.; Mattson, David J.; Choate, David; Stoner, David; Arundel, Terry; Hansen, Jered R.; Theimer, Tad; Holton, Brandon; Jansen, Brian; Sexton, Joseph O.; Longshore, Kathleen M.; Edwards, Thomas C.; Peters, Michael
2017-01-01
Studies using global positioning system (GPS) telemetry rarely result in 100% fix success rates (FSR), which may bias datasets because data loss is systematic rather than a random process. Previous spatially explicit models developed to correct for sampling bias have been limited to small study areas, a small range of data loss, or were study-area specific. We modeled environmental effects on FSR from desert to alpine biomes, investigated the full range of potential data loss (0-100% FSR), and evaluated whether animal body position can contribute to lower FSR because of changes in antenna orientation, based on GPS detection rates for 4 focal species: cougars (Puma concolor), desert bighorn sheep (Ovis canadensis nelsoni), Rocky Mountain elk (Cervus elaphus nelsoni), and mule deer (Odocoileus hemionus). Terrain exposure and height of overstory vegetation were the most influential factors affecting FSR. Model evaluation showed a strong correlation (0.88) between observed and predicted FSR and no significant differences between predicted and observed FSRs using 2 independent validation datasets. We found that cougars and canyon-dwelling bighorn sheep may select for environmental features that influence their detectability by GPS technology, mule deer may select against these features, and elk appear to be nonselective. We observed temporal patterns in missed fixes only for cougars. We provide a model for cougars, predicting fix success by time of day, that is likely due to circadian changes in collar orientation and selection of daybed sites. We also provide a model predicting the probability of GPS fix acquisitions given environmental conditions, which had a strong relationship (r² = 0.82) with deployed collar FSRs across species.
Study on genetic variability of Cassidula aurisfelis (snail) by random ...
African Journals Online (AJOL)
The genetic variability among individuals of Cassidula aurisfelis from Setiu Wetland, Terengganu Darul Iman was examined by using the random amplified polymorphic DNA (RAPD) technique. Ten oligonucleotide primers were screened and three primers were selected (OPA 02, OPA 04 and OPA 10) to amplify DNA from ...
Variability in response to albuminuria lowering drugs : true or random?
Petrykiv, Sergei I.; de Zeeuw, Dick; Persson, Frederik; Rossing, Peter; Gansevoort, Ron T.; Laverman, Gozewijn D.; Heerspink, Hiddo J. L.
AIMS Albuminuria-lowering drugs have shown different effect size in different individuals. Since urine albumin levels are known to vary considerably from day to day, we questioned whether the between-individual variability in albuminuria response after therapy initiation reflects a random
How a dependent's variable non-randomness affects taper equation ...
African Journals Online (AJOL)
Regression results, for the two methods, were compared using the confidence interval estimates for the regression coefficients, the multicollinearity tests and Fit Index (FI) values as criteria. The comparison of results showed that randomness of the dependent variable (second method) did not improve the estimates, in any of ...
Study on genetic variability of Cassidula aurisfelis (snail) by random ...
African Journals Online (AJOL)
PRECIOUS
2009-11-16
Nov 16, 2009 ... genetic variability is Random Amplified Polymorphic. DNAs (RAPD) (Williams et al., 1990). The technique requires no prior knowledge of the genome and it needs ... quantity of DNA was measured by obtaining the absorbance read- ... 1994) and Numerical taxonomy and Multivariate Analysis System.
An infinite-dimensional weak KAM theory via random variables
Gomes, Diogo A.
2016-08-31
We develop several aspects of the infinite-dimensional weak KAM theory using a random variables' approach. We prove that the infinite-dimensional cell problem admits a viscosity solution that is a fixed point of the Lax-Oleinik semigroup. Furthermore, we show the existence of invariant minimizing measures and calibrated curves defined on R.
A review of instrumental variable estimators for Mendelian randomization.
Burgess, Stephen; Small, Dylan S; Thompson, Simon G
2017-10-01
Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.
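The simplest of the estimators reviewed above, the ratio (Wald) estimator cov(Y,G)/cov(X,G), can be demonstrated on simulated Mendelian-randomization data with a deliberately confounded exposure-outcome pair. The data-generating coefficients below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Simulated Mendelian-randomization setting: G is a genetic variant (the
# instrument), U an unobserved confounder, X the exposure, Y the outcome
G = rng.binomial(2, 0.3, n)                    # genotype: 0, 1 or 2 effect alleles
U = rng.standard_normal(n)                     # confounder
X = 0.5 * G + U + rng.standard_normal(n)       # exposure, associated with G and U
Y = 0.8 * X + U + rng.standard_normal(n)       # true causal effect of X on Y: 0.8

# Ratio (Wald) estimator: cov(Y, G) / cov(X, G)
beta_ratio = np.cov(Y, G)[0, 1] / np.cov(X, G)[0, 1]

# Naive regression of Y on X is biased by the shared confounder U
beta_naive = np.cov(Y, X)[0, 1] / np.var(X)

print(round(beta_ratio, 2))   # close to the true 0.8
print(round(beta_naive, 2))   # biased well above 0.8
```

The instrument recovers the causal slope because G satisfies the three conditions in the abstract: it shifts X, is independent of U, and affects Y only through X. Two-stage least squares generalizes this ratio to multiple instruments.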
Estimating The Probability Of Achieving Shortleaf Pine Regeneration At Variable Specified Levels
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2002-01-01
A model was developed that can be used to estimate the probability of achieving regeneration at a variety of specified stem density levels. The model was fitted to shortleaf pine (Pinus echinata Mill.) regeneration data, and can be used to estimate the probability of achieving desired levels of regeneration between 300 and 700 stems per acre 9-10...
DEFF Research Database (Denmark)
Yura, Harold; Hanson, Steen Grüner
2012-01-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative…
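The two-step recipe described above can be sketched numerically: first shape white Gaussian noise to a target spectrum, then push the samples through the Gaussian CDF and the inverse CDF of the target amplitude distribution. The low-pass filter and the exponential target marginal below are illustrative assumptions, not the spectra or distributions from this disclosure.

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)
n = 1 << 14

# Step 1: shape white Gaussian noise to a target power spectrum
# (a hypothetical low-pass 1/(1 + (f/f0)^2) profile).
white = rng.standard_normal(n)
f = np.fft.rfftfreq(n, d=1.0)
f0 = 0.05
H = 1.0 / np.sqrt(1.0 + (f / f0) ** 2)          # amplitude filter
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()                         # unit-variance colored Gaussian

# Step 2: memoryless transform to the desired marginal PDF
# (exponential here): u = Phi(x), then invert the target CDF.
u = norm.cdf(colored)
signal = expon.ppf(u)                            # approximately exponential marginal
```

Note that the memoryless transform in step 2 slightly distorts the spectrum shaped in step 1, which is why the abstract frames the method as an engineering approximation.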
Instrumental variable analyses. Exploiting natural randomness to understand causal mechanisms.
Iwashyna, Theodore J; Kennedy, Edward H
2013-06-01
Instrumental variable analysis is a technique commonly used in the social sciences to provide evidence that a treatment causes an outcome, as contrasted with evidence that a treatment is merely associated with differences in an outcome. To extract such strong evidence from observational data, instrumental variable analysis exploits situations where some degree of randomness affects how patients are selected for a treatment. An instrumental variable is a characteristic of the world that leads some people to be more likely to get the specific treatment we want to study but does not otherwise change those patients' outcomes. This seminar explains, in nonmathematical language, the logic behind instrumental variable analyses, including several examples. It also provides three key questions that readers of instrumental variable analyses should ask to evaluate the quality of the evidence. (1) Does the instrumental variable lead to meaningful differences in the treatment being tested? (2) Other than through the specific treatment being tested, is there any other way the instrumental variable could influence the outcome? (3) Does anything cause patients to both receive the instrumental variable and receive the outcome?
Kuzmak, Sylvia
2016-01-01
Teaching probability and statistics is more than teaching the mathematics itself. Historically, the mathematics of probability and statistics was first developed through analyzing games of chance such as the rolling of dice. This article makes the case that the understanding of probability and statistics is dependent upon building a…
Nuclear data uncertainties: I, Basic concepts of probability
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.
1988-12-01
Some basic concepts of probability theory are presented from a nuclear-data perspective, in order to provide a foundation for thorough understanding of the role of uncertainties in nuclear data research. Topics included in this report are: events, event spaces, calculus of events, randomness, random variables, random-variable distributions, intuitive and axiomatic probability, calculus of probability, conditional probability and independence, probability distributions, binomial and multinomial probability, Poisson and interval probability, normal probability, the relationships existing between these probability laws, and Bayes' theorem. This treatment emphasizes the practical application of basic mathematical concepts to nuclear data research, and it includes numerous simple examples. 34 refs.
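One of the relationships between probability laws surveyed in such a treatment, the Poisson limit of the binomial, is easy to verify numerically. The parameters below are arbitrary illustrative choices.

```python
from math import comb, exp, factorial

# Binomial(n, p) with large n and small p is close to Poisson(lam = n*p).
n, p = 1000, 0.003
lam = n * p

binom_pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(10)]
poisson_pmf = [exp(-lam) * lam**k / factorial(k) for k in range(10)]

# Largest pointwise disagreement over the first ten outcomes.
max_gap = max(abs(b, ) if False else abs(b - q) for b, q in zip(binom_pmf, poisson_pmf))
max_gap = max(abs(b - q) for b, q in zip(binom_pmf, poisson_pmf))
```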
Introduction to probability theory with contemporary applications
Helms, Lester L
2010-01-01
This introduction to probability theory transforms a highly abstract subject into a series of coherent concepts. Its extensive discussions and clear examples, written in plain language, expose students to the rules and methods of probability. Suitable for an introductory probability course, this volume requires abstract and conceptual thinking skills and a background in calculus.Topics include classical probability, set theory, axioms, probability functions, random and independent random variables, expected values, and covariance and correlations. Additional subjects include stochastic process
Generation of correlated finite alphabet waveforms using gaussian random variables
Ahmed, Sajid
2016-01-13
Various examples of methods and systems are provided for generation of correlated finite alphabet waveforms using Gaussian random variables in, e.g., radar and communication applications. In one example, a method includes mapping an input signal comprising Gaussian random variables (RVs) onto finite-alphabet non-constant-envelope (FANCE) symbols using a predetermined mapping function, and transmitting FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The FANCE waveforms can be based upon the mapping of the Gaussian RVs onto the FANCE symbols. In another example, a system includes a memory unit that can store a plurality of digital bit streams corresponding to FANCE symbols and a front end unit that can transmit FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The system can include a processing unit that can encode the input signal and/or determine the mapping function.
Higher moments of Banach space valued random variables
Janson, Svante
2015-01-01
The authors define the k:th moment of a Banach space valued random variable as the expectation of its k:th tensor power; thus the moment (if it exists) is an element of a tensor power of the original Banach space. The authors study both the projective and injective tensor products, and their relation. Moreover, in order to be general and flexible, they study three different types of expectations: Bochner integrals, Pettis integrals and Dunford integrals.
The Sum and Difference of Two Lognormal Random Variables
Directory of Open Access Journals (Sweden)
C. F. Lo
2012-01-01
We have presented a new unified approach to model the dynamics of both the sum and difference of two correlated lognormal stochastic variables. By the Lie-Trotter operator splitting method, both the sum and difference are shown to follow a shifted lognormal stochastic process, and approximate probability distributions are determined in closed form. Illustrative numerical examples are presented to demonstrate the validity and accuracy of these approximate distributions. In terms of the approximate probability distributions, we have also obtained an analytical series expansion of the exact solutions, which can allow us to improve the approximation in a systematic manner. Moreover, we believe that this new approach can be extended to study both (1) the algebraic sum of N lognormals, and (2) the sum and difference of other correlated stochastic processes, for example, two correlated CEV processes, two correlated CIR processes, and two correlated lognormal processes with mean-reversion.
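The claim that the sum of two correlated lognormals is close to another lognormal can be checked by Monte Carlo. The sketch below uses simple moment matching (a Fenton-Wilkinson-style fit) rather than the paper's Lie-Trotter construction, and all parameters are arbitrary.

```python
import numpy as np
from math import erf, sqrt, log

rng = np.random.default_rng(1)
n = 200_000

# Two correlated lognormals S1 = exp(X1), S2 = exp(X2).
mu, sigma, rho = 0.0, 0.25, 0.5
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
x = rng.multivariate_normal([mu, mu], cov, size=n)
s = np.exp(x).sum(axis=1)                  # the sum S1 + S2

# Moment-matched lognormal for the sum (Fenton-Wilkinson style).
m, v = s.mean(), s.var()
sig2 = log(1 + v / m**2)
mu_fw = log(m) - sig2 / 2

# Compare a tail probability: Monte Carlo vs the lognormal fit.
q = 3.0
p_mc = float((s > q).mean())
z = (log(q) - mu_fw) / sqrt(sig2)
p_fit = 0.5 * (1 - erf(z / sqrt(2)))
```

For these moderate volatilities the fitted tail probability tracks the simulated one closely, consistent with the paper's observation that a (shifted) lognormal describes the sum well.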
Hu, Zhen; Du, Xiaoping; Kolekar, Nitin S.; Banerjee, Arindam
2014-03-01
In robust design, uncertainty is commonly modelled with precise probability distributions. In reality, the distribution types and distribution parameters may not always be available owing to limited data. This research develops a robust design methodology to accommodate the mixture of both precise and imprecise random variables. By incorporating the Taguchi quality loss function and the minimax regret criterion, the methodology mitigates the effects of not only uncertain parameters but also uncertainties in the models of the uncertain parameters. Hydrokinetic turbine systems are a relatively new alternative energy technology, and both precise and imprecise random variables exist in the design of such systems. The developed methodology is applied to the robust design optimization of a hydrokinetic turbine system. The results demonstrate the effectiveness of the proposed methodology.
Non-Shannon Information Inequalities in Four Random Variables
Dougherty, Randall; Zeger, Kenneth
2011-01-01
Any unconstrained information inequality in three or fewer random variables can be written as a linear combination of instances of Shannon's inequality I(A;B|C) ≥ 0. Such inequalities are sometimes referred to as "Shannon" inequalities. In 1998, Zhang and Yeung gave the first example of a "non-Shannon" information inequality in four variables. Their technique was to add two auxiliary variables with special properties and then apply Shannon inequalities to the enlarged list. Here we will show that the Zhang-Yeung inequality can actually be derived from just one auxiliary variable. Then we use their same basic technique of adding auxiliary variables to give many other non-Shannon inequalities in four variables. Our list includes the inequalities found by Xu, Wang, and Sun, but it is by no means exhaustive. Furthermore, some of the inequalities obtained may be superseded by stronger inequalities that have yet to be found. Indeed, we show that the Zhang-Yeung inequality is one of those that is superseded. We al...
Effects of population variability on the accuracy of detection probability estimates
DEFF Research Database (Denmark)
Ordonez Gloria, Alejandro
2011-01-01
Observing a constant fraction of the population over time, locations, or species is virtually impossible. Hence, quantifying this proportion (i.e. detection probability) is an important task in quantitative population ecology. In this study we determined, via computer simulations, the effect of...
Directory of Open Access Journals (Sweden)
Singh Jagdev
2014-07-01
In this paper, we obtain the distribution of a mixed sum of two independent random variables with different probability density functions: one with a probability density function defined on a finite range, and the other with a probability density function defined on an infinite range and associated with a product of Srivastava's polynomials and the H-function. We use the Laplace transform and its inverse to obtain our main result. The result obtained here is quite general in nature and is capable of yielding a large number of corresponding new and known results merely by specializing the parameters involved therein. To illustrate, some special cases of our main result are also given.
Probability density functions for the variable solar wind near the solar cycle minimum
Vörös, Z.; Leitner, M.; Narita, Y.; Consolini, G.; Kovács, P.; Tóth, A.; Lichtenberger, J.
2015-01-01
Unconditional and conditional statistics are used for studying the histograms of magnetic field multi-scale fluctuations in the solar wind near the solar cycle minimum in 2008. The unconditional statistics involves the magnetic data during the whole year 2008. The conditional statistics involves the magnetic field time series split into concatenated subsets of data according to a threshold in dynamic pressure. The threshold separates fast-stream leading edge compressional and trailing edge uncompressional fluctuations. The histograms obtained from these data sets are associated with both large-scale (B) and small-scale (δB) magnetic fluctuations, the latter corresponding to time-delayed differences. It is shown here that, by keeping flexibility but avoiding the unnecessary redundancy in modeling, the histograms can be effectively described by a limited set of theoretical probability distribution functions (PDFs), such as the normal, log-normal, kappa and log-kappa functions. In a statistical sense the...
OPTIMAL ESTIMATION OF RANDOM PROCESSES ON THE CRITERION OF MAXIMUM A POSTERIORI PROBABILITY
Directory of Open Access Journals (Sweden)
A. A. Lobaty
2016-01-01
The problem of obtaining the equations for the a posteriori probability density of a stochastic Markov process with a linear measurement model is considered. Unlike common approaches that take the minimum mean-square estimation error as the optimization criterion, here the criterion is the maximum of the a posteriori probability density of the process being estimated. The a priori probability density of the estimated Gaussian process is initially considered a differentiable function, which allows it to be expanded in a Taylor series without intermediate transformations via characteristic functions and harmonic decomposition. For small time intervals the probability density of the measurement error vector is, by definition, taken to be Gaussian with zero expectation. This makes it possible to obtain a mathematical expression for the residual function, which characterizes the deviation of the actual measurement process from its mathematical model. To determine the optimal a posteriori estimate of the state vector, it is assumed that this estimate coincides with its expectation, the maximum of the a posteriori probability density. On the basis of Bayes' formula for the a priori and a posteriori probability densities, this yields a Stratonovich-Kushner equation. Using the Stratonovich-Kushner equation with different types and values of the drift vector and diffusion matrix of a Markov stochastic process, a variety of filtering, identification, smoothing, and state-forecasting tasks can be solved for both continuous and discrete systems. Discrete implementations of the developed a posteriori estimation algorithms provide specific, discrete algorithms for on-board computers and mobile robot systems.
Analysis of Secret Key Randomness Exploiting the Radio Channel Variability
Directory of Open Access Journals (Sweden)
Taghrid Mazloum
2015-01-01
A few years ago, physical layer based techniques started to be considered as a way to improve security in wireless communications. A well known problem is the management of ciphering keys, regarding both the generation and distribution of these keys. A way to alleviate such difficulties is to use a common source of randomness for the legitimate terminals, not accessible to an eavesdropper. This is the case of the fading propagation channel, when exact or approximate reciprocity applies. Although this principle has long been known, not so many works have evaluated the effect of radio channel properties in practical environments on the degree of randomness of the generated keys. To this end, we here investigate indoor radio channel measurements in different environments and settings at either the 2.4625 GHz or 5.4 GHz band, of particular interest for Wi-Fi related standards. Key bits are extracted by quantizing the complex channel coefficients, and their randomness is evaluated using the NIST test suite. We then look at the impact of the carrier frequency and of the channel variability in the space, time, and frequency degrees of freedom used to construct a long secret key, in relation to the nature of the radio environment such as the LOS/NLOS character.
Energy Technology Data Exchange (ETDEWEB)
Jumarie, Guy [Department of Mathematics, University of Quebec at Montreal, P.O. Box 8888, Downtown Station, Montreal, Qc, H3C 3P8 (Canada)], E-mail: jumarie.guy@uqam.ca
2009-05-15
A probability distribution of fractional (or fractal) order is defined by the measure μ{dx} = p(x)(dx)^α, 0 < α < 1. Combining this definition with the fractional Taylor series f(x+h) = E_α(D_x^α h^α) f(x) provided by the modified Riemann-Liouville definition, one can develop a probability calculus parallel to the standard one. A Fourier transform of fractional order using the Mittag-Leffler function is introduced, together with its inversion formula, and it provides a suitable generalization of the characteristic function of fractal random variables. It appears that the state moments of fractional order are especially relevant. The main properties of this fractional probability calculus are outlined; it is shown that it provides a sound approach to Fokker-Planck equations that are fractional in both space and time, and it yields new results in the information theory of non-random functions.
On the probability of cost-effectiveness using data from randomized clinical trials
Directory of Open Access Journals (Sweden)
Willan Andrew R
2001-09-01
Abstract Background Acceptability curves have been proposed for quantifying the probability that a treatment under investigation in a clinical trial is cost-effective. Various definitions and estimation methods have been proposed. Loosely speaking, all the definitions, Bayesian or otherwise, relate to the probability that the treatment under consideration is cost-effective as a function of the value placed on a unit of effectiveness. These definitions are, in fact, expressions of the certainty with which the current evidence would lead us to believe that the treatment under consideration is cost-effective, and are dependent on the amount of evidence (i.e. sample size). Methods An alternative for quantifying the probability that the treatment under consideration is cost-effective, which is independent of sample size, is proposed. Results Non-parametric methods are given for point and interval estimation. In addition, these methods provide a non-parametric estimator and confidence interval for the incremental cost-effectiveness ratio. An example is provided. Conclusions The proposed parameter for quantifying the probability that a new therapy is cost-effective is superior to the acceptability curve because it is not sample size dependent and because it can be interpreted as the proportion of patients who would benefit if given the new therapy. Non-parametric methods are used to estimate the parameter and its variance, providing the appropriate confidence intervals and test of hypothesis.
DEFF Research Database (Denmark)
Falk, Anne Katrine Vinther; Gryning, Sven-Erik
1997-01-01
In this model for atmospheric dispersion, particles are simulated by the Langevin equation, which is a stochastic differential equation. It uses the probability density function (PDF) of the vertical velocity fluctuations as input. The PDF is constructed as an expansion in Hermite polynomials. ...
PItcHPERFeCT: Primary Intracranial Hemorrhage Probability Estimation using Random Forests on CT
Directory of Open Access Journals (Sweden)
John Muschelli
2017-01-01
Results: All results presented are for the 102 scans in the validation set. The median DSI for each model was: 0.89 (logistic), 0.885 (LASSO), 0.88 (GAM), and 0.899 (random forest). Using the random forest results in a slightly higher median DSI compared to the other models. After Bonferroni correction, the hypothesis of equality of median DSI was rejected only when comparing the random forest DSI to the DSI from the logistic (p < 0.001), LASSO (p < 0.001), or GAM (p < 0.001) models. In practical terms the difference between the random forest and the logistic regression is quite small. The correlation (95% CI) between the volume from manual segmentation and the predicted volume was 0.93 (0.90, 0.95) for the random forest model. These results indicate that the random forest approach can achieve accurate segmentation of ICH in a population of patients from a variety of imaging centers. We provide an R package (https://github.com/muschellij2/ichseg) and a Shiny R application online (http://johnmuschelli.com/ich_segment_all.html) for implementing and testing the proposed approach.
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
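The comparison of RF's internal out-of-bag estimate against external cross-validation can be sketched on synthetic data. The sample sizes and hyperparameters below are illustrative assumptions, not those of the stream-condition study, and scikit-learn stands in for the authors' R workflow.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the stream-condition data: many predictors,
# only a few informative (hypothetical sizes, not the StreamCat data).
X, y = make_classification(n_samples=600, n_features=50, n_informative=5,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)

oob = rf.oob_score_                          # internal out-of-bag estimate
cv = cross_val_score(rf, X, y, cv=5).mean()  # external cross-validation
```

When variable selection happens inside the training loop, the OOB estimate can become optimistically biased, which is why the paper keeps validation folds external to the selection procedure; here, with no selection step, the two estimates should roughly agree.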
Probability density functions for the variable solar wind near the solar cycle minimum
Vörös, Z.; Leitner, M.; Narita, Y.; Consolini, G.; Kovács, P.; Tóth, A.; Lichtenberger, J.
2015-08-01
Unconditional and conditional statistics are used for studying the histograms of magnetic field multiscale fluctuations in the solar wind near the solar cycle minimum in 2008. The unconditional statistics involves the magnetic data during the whole year in 2008. The conditional statistics involves the magnetic field time series split into concatenated subsets of data according to a threshold in dynamic pressure. The threshold separates fast-stream leading edge compressional and trailing edge uncompressional fluctuations. The histograms obtained from these data sets are associated with both multiscale (B) and small-scale (δB) magnetic fluctuations, the latter corresponding to time-delayed differences. It is shown here that, by keeping flexibility but avoiding the unnecessary redundancy in modeling, the histograms can be effectively described by a limited set of theoretical probability distribution functions (PDFs), such as the normal, lognormal, kappa, and log-kappa functions. In a statistical sense the model PDFs correspond to additive and multiplicative processes exhibiting correlations. It is demonstrated here that the skewed small-scale histograms inherent in turbulent cascades are better described by the skewed log-kappa than by the symmetric kappa model. Nevertheless, the observed skewness is rather small, resulting in potential difficulties of estimation of the third-order moments. This paper also investigates the dependence of the statistical convergence of PDF model parameters, goodness of fit, and skewness on the data sample size. It is shown that the minimum lengths of data intervals required for the robust estimation of parameters is scale, process, and model dependent.
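The point that skewed fluctuation histograms are better captured by skewed models can be illustrated with a likelihood comparison on synthetic data; the lognormal sample below merely stands in for the measured fluctuation magnitudes, and the normal/lognormal pair stands in for the paper's kappa/log-kappa comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical skewed "fluctuation" data, standing in for |B| samples.
data = rng.lognormal(mean=0.0, sigma=0.5, size=5000)

# Compare a symmetric (normal) fit and a skewed (lognormal) fit
# via their maximized log-likelihoods.
mu, sd = data.mean(), data.std()
ll_norm = stats.norm.logpdf(data, mu, sd).sum()

shape, loc, scale = stats.lognorm.fit(data, floc=0)
ll_lognorm = stats.lognorm.logpdf(data, shape, loc, scale).sum()
```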
Galván De la Cruz, O O; Ballesteros-Zebadúa, P; Moreno-Jiménez, S; Celis, M A; García-Garduño, O A
2015-02-01
It is debatable whether pediatric patients diagnosed with arteriovenous malformations (AVMs) should be treated as adults. Several indexes to classify AVMs have been proposed in the literature, and most try to predict the outcome for each specific treatment. The indexes differ in the variables considered, but they are all based on adult populations. In this study, we analyzed the variables that influence the obliteration time and probability of occurrence in a Mexican pediatric population diagnosed with an AVM and treated with stereotactic radiosurgery (SRS). We analyzed 45 pediatric patients (…). For AVMs in an eloquent region, there was a tendency toward a shorter obliteration time (p = 0.071). None of the previously proposed indexes for adults predicts obliteration in this pediatric population. Treatment technique, eloquence, and follow-up time were the only variables that showed an influence on obliteration. Since the highest probability of obliteration occurs during the first three years, if the nidus has not been obliterated after this time then another treatment option could be considered. Copyright © 2014 Elsevier B.V. All rights reserved.
Spectral shaping of a randomized PWM DC-DC converter using maximum entropy probability distributions
CSIR Research Space (South Africa)
Dove, Albert
2017-01-01
The idea behind spectral shaping is to select a randomization technique with its associated PDF to analytically obtain a specified spectral profile [21]. The benefit of this idea comes in being able to achieve some level of controllability over the spectral content...
Generating Correlated QPSK Waveforms By Exploiting Real Gaussian Random Variables
Jardak, Seifallah
2012-11-01
The design of waveforms with specified auto- and cross-correlation properties has a number of applications in multiple-input multiple-output (MIMO) radar, one of them being the desired transmit beampattern design. In this work, an algorithm is proposed to generate quadrature phase-shift-keying (QPSK) waveforms with required cross-correlation properties using real Gaussian random variables (RVs). This work can be considered as the extension of what was presented in [1] to generate BPSK waveforms. This work will be extended to the generation of correlated higher-order phase-shift-keying (PSK) and quadrature amplitude modulation (QAM) schemes that can better approximate the desired beampattern.
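A minimal sketch of the underlying idea: correlate real Gaussian streams, then keep only their signs to form QPSK symbols. The arcsine law E[sgn(x)sgn(y)] = (2/π)·arcsin(r) links the Gaussian correlation to the resulting symbol correlation; the target correlation below is an arbitrary illustration, not a value from [1], and the sign mapping is a simplification of the actual waveform design.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Desired correlation between the two real Gaussian streams
# (hypothetical; the papers solve for this from a target beampattern).
r = 0.6
L = np.linalg.cholesky(np.array([[1.0, r], [r, 1.0]]))
g = L @ rng.standard_normal((2, n))    # correlated Gaussians for I
gq = L @ rng.standard_normal((2, n))   # independent pair for Q

# Map each Gaussian pair to a unit-modulus QPSK symbol via the signs.
qpsk = (np.sign(g) + 1j * np.sign(gq)) / np.sqrt(2)

# Arcsine law: Gaussian correlation r induces sign correlation
# (2/pi) * arcsin(r) between the hard-limited streams.
emp = float(np.mean(np.sign(g[0]) * np.sign(g[1])))
pred = float((2 / np.pi) * np.arcsin(r))
```

The design problem in such papers runs this relation backwards: choose the Gaussian covariance so that the induced symbol covariance yields the desired beampattern.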
Broda, S.A.
2013-01-01
Countless test statistics can be written as quadratic forms in certain random vectors, or ratios thereof. Consequently, their distribution has received considerable attention in the literature. Except for a few special cases, no closed-form expression for the cdf exists, and one resorts to numerical
Is extrapair mating random? On the probability distribution of extrapair young in avian broods
Brommer, Jon E.; Korsten, Peter; Bouwman, Karen A.; Berg, Mathew L.; Komdeur, Jan
2007-01-01
A dichotomy in female extrapair copulation (EPC) behavior, with some females seeking EPC and others not, is inferred if the observed distribution of extrapair young (EPY) over broods differs from a random process on the level of individual offspring (binomial, hypergeometric, or Poisson). A review
Cannon, Alex
2017-04-01
Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another - the N-dimensional probability density function transform - is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin.
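As a baseline for comparison, univariate quantile mapping, the method MBCn generalizes, can be sketched in a few lines. The gamma-distributed "observations" and the model bias below are fabricated for illustration and have nothing to do with CanRCM4.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: model output is biased and scaled relative to observations.
obs = rng.gamma(shape=2.0, scale=1.0, size=5000)              # "observed" climate
mod = 1.5 * rng.gamma(shape=2.0, scale=1.0, size=5000) + 0.5  # biased "model"

def quantile_map(x, model_ref, obs_ref):
    """Map values x through the model's empirical CDF into the
    observed distribution (univariate quantile mapping)."""
    q = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    q = np.clip(q, 0.5 / len(obs_ref), 1 - 0.5 / len(obs_ref))
    return np.quantile(obs_ref, q)

corrected = quantile_map(mod, mod, obs)
```

Applied variable by variable, this removes marginal biases but leaves inter-variable dependence untouched, which is exactly the gap MBCn addresses.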
Simon, Marvin K
2006-01-01
This handbook, now available in paperback, brings together a comprehensive collection of mathematical material in one location. It also offers a variety of new results interpreted in a form that is particularly useful to engineers, scientists, and applied mathematicians. The handbook is not specific to fixed research areas, but rather it has a generic flavor that can be applied by anyone working with probabilistic and stochastic analysis and modeling. Classic results are presented in their final form without derivation or discussion, allowing for much material to be condensed into one volume. Concise compilation of disparate formulae saves time in searching different sources. Focused application has broad interest for many disciplines: engineers, computer scientists, statisticians, physicists; as well as for any researcher working in probabilistic and stochastic analysis and modeling in the natural or social sciences. The material is timeless, with intrinsic value to practicing engineers and scientists. Excer...
Directory of Open Access Journals (Sweden)
Nils Ternès
2017-05-01
Abstract Background Thanks to the advances in genomics and targeted treatments, more and more prediction models based on biomarkers are being developed to predict potential benefit from treatments in a randomized clinical trial. Although the methodological framework for the development and validation of prediction models in a high-dimensional setting is getting more and more established, no clear guidance exists yet on how to estimate expected survival probabilities in a penalized model with biomarker-by-treatment interactions. Methods Based on a parsimonious biomarker selection in a penalized high-dimensional Cox model (lasso or adaptive lasso), we propose a unified framework to: estimate internally the predictive accuracy metrics of the developed model (using double cross-validation); estimate the individual survival probabilities at a given timepoint; construct confidence intervals thereof (analytical or bootstrap); and visualize them graphically (pointwise or smoothed with splines). We compared these strategies through a simulation study covering scenarios with or without biomarker effects. We applied the strategies to a large randomized phase III clinical trial that evaluated the effect of adding trastuzumab to chemotherapy in 1574 early breast cancer patients, for which the expression of 462 genes was measured. Results In our simulations, penalized regression models using the adaptive lasso estimated the survival probability of new patients with low bias and standard error; bootstrapped confidence intervals had empirical coverage probability close to the nominal level across very different scenarios. The double cross-validation performed on the training data set closely mimicked the predictive accuracy of the selected models in external validation data. We also propose a useful visual representation of the expected survival probabilities using splines. In the breast cancer trial, the adaptive lasso penalty selected a prediction model with 4
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategies (odd and even days of the month). Comparison of the SSM/I estimates with rain gauge data collected at Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
Directory of Open Access Journals (Sweden)
Paul B. Slater
2015-01-01
Previously, a formula, incorporating a 5F4 hypergeometric function, for the Hilbert-Schmidt-averaged determinantal moments ⟨|ρ^PT|^n |ρ|^k⟩/⟨|ρ|^k⟩ of 4×4 density matrices (ρ) and their partial transposes (ρ^PT) was applied with k=0 to the generalized two-qubit separability probability question. The formula can, furthermore, be viewed, as we note here, as an averaging over “induced measures in the space of mixed quantum states.” The associated induced-measure separability probabilities (k=1,2,…) are found—via a high-precision density approximation procedure—to assume interesting, relatively simple rational values in the two-re[al]bit (α=1/2), (standard) two-qubit (α=1), and two-quater[nionic]bit (α=2) cases. We deduce rather simple companion (rebit, qubit, quaterbit, …) formulas that successfully reproduce the rational values assumed for general k. These formulas are observed to share certain features, possibly allowing them to be incorporated into a single master formula.
Wang, Kezhi
2014-09-01
The sum of ratios of products of independent α-μ random variables (RVs) is approximated by using the generalized Gamma ratio approximation (GGRA), with the Gamma ratio approximation (GRA) as a special case. The proposed approximation is used to calculate the outage probability of equal gain combining (EGC) or maximum ratio combining (MRC) receivers for wireless multihop relaying or multiple scattering systems in the presence of interference. Numerical results show that the newly derived approximation agrees very well with simulations, while GRA performs slightly worse than GGRA when the outage probability is below 0.1, but has a simpler form.
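The outage probability targeted by these approximations is P(combined SNR < threshold). As a far simpler illustrative special case than the α-μ ratios treated in the record, MRC with i.i.d. exponential branch SNRs (Rayleigh fading, no interference) has a closed-form Erlang CDF against which a Monte Carlo estimate can be checked; all names and parameters below are illustrative:

```python
import math, random

def mrc_outage_mc(L, mean_snr, gamma_th, n=200_000, seed=1):
    """Monte Carlo outage probability of an L-branch MRC receiver with
    i.i.d. exponentially distributed branch SNRs (Rayleigh fading)."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.expovariate(1.0 / mean_snr) for _ in range(L)) < gamma_th
        for _ in range(n)
    )
    return hits / n

def mrc_outage_exact(L, mean_snr, gamma_th):
    """Closed form: the MRC-combined SNR is Erlang(L, mean_snr)."""
    x = gamma_th / mean_snr
    return 1.0 - math.exp(-x) * sum(x**k / math.factorial(k) for k in range(L))

mc = mrc_outage_mc(3, 2.0, 1.0)
exact = mrc_outage_exact(3, 2.0, 1.0)   # about 0.0144
assert abs(mc - exact) < 0.01
```

The GGRA/GRA approach of the record replaces the Monte Carlo step with an analytical approximation of the combined-SNR distribution.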
Keyser, Alisa; Westerling, Anthony LeRoy
2017-05-01
A long history of fire suppression in the western United States has significantly changed forest structure and ecological function, leading to increasingly uncharacteristic fires in terms of size and severity. Prior analyses of fire severity in California forests showed that time since last fire and fire weather conditions predicted fire severity very well, while a larger regional analysis showed that topography and climate were important predictors of high severity fire. There has not yet been a large-scale study that incorporates topography, vegetation and fire-year climate to determine regional scale high severity fire occurrence. We developed models to predict the probability of high severity fire occurrence for the western US. We predict high severity fire occurrence with some accuracy, and identify the relative importance of predictor classes in determining the probability of high severity fire. The inclusion of both vegetation and fire-year climate predictors was critical for model skill in identifying fires with high fractional fire severity. The inclusion of fire-year climate variables allows this model to forecast inter-annual variability in areas at future risk of high severity fire, beyond what slower-changing fuel conditions alone can accomplish. This allows for more targeted land management, including resource allocation for fuels reduction treatments to decrease the risk of high severity fire.
Chen, M.; Kumar, A.
2013-12-01
In the prediction of atmospheric seasonal mean climate variability, the signal-to-noise ratio provides a classic measure of predictability. The signal component is the atmospheric response to slowly evolving boundary conditions such as ENSO SST, or to the smaller spread from initial conditions for shorter lead forecasts in an initialized prediction system. The noise component results from the internally generated variability of atmospheric states around the ensemble mean. A high signal-to-noise ratio leads to high prediction skill and predictability. Statistically, the signal can be quantified by the mean shift of the atmospheric states relative to climatology, the noise by the spread of the probability distribution function (PDF) of atmospheric variability, and the predictability by the relative displacement of the PDF of the atmospheric variable from its climatological distribution. Therefore, it is essential for understanding predictability to know whether, and how significantly, the PDF spread of an atmospheric variable changes due to changes in external forcing (e.g., ENSO SST, CO2, etc.) through the years. These issues are the focus of this study. Specifically, using 31 years (1982-2012) of seasonal hindcast data from the NCEP Climate Forecast System version 2 (CFSv2), we analyzed the variations of the PDF spread for the seasonal variability of precipitation and near-surface land temperature associated with changes in external forcing. The results include (1) its year-to-year variations for target seasons; (2) its seasonality and geographic dependence; and (3) its lead time dependence in the initialized prediction system.
Instrumental variables and Mendelian randomization with invalid instruments
Kang, Hyunseung
Instrumental variables (IV) methods have been widely used to determine the causal effect of a treatment, exposure, policy, or an intervention on an outcome of interest. The IV method relies on having a valid instrument, a variable that is (A1) associated with the exposure, (A2) has no direct effect on the outcome, and (A3) is unrelated to the unmeasured confounders associated with the exposure and the outcome. However, in practice, finding a valid instrument, especially those that satisfy (A2) and (A3), can be challenging. For example, in Mendelian randomization studies where genetic markers are used as instruments, complete knowledge about instruments' validity is equivalent to complete knowledge about the involved genes' functions. The dissertation explores the theory, methods, and application of IV methods when invalid instruments are present. First, when we have multiple candidate instruments, we establish a theoretical bound whereby causal effects are only identified as long as less than 50% of instruments are invalid, without knowing which of the instruments are invalid. We also propose a fast penalized method, called sisVIVE, to estimate the causal effect. We find that sisVIVE outperforms traditional IV methods when invalid instruments are present both in simulation studies as well as in real data analysis. Second, we propose a robust confidence interval under the multiple invalid IV setting. This work is an extension of our work on sisVIVE. However, unlike sisVIVE which is robust to violations of (A2) and (A3), our confidence interval procedure provides honest coverage even if all three assumptions, (A1)-(A3), are violated. Third, we study the single IV setting where the one IV we have may actually be invalid. We propose a nonparametric IV estimation method based on full matching, a technique popular in causal inference for observational data, that leverages observed covariates to make the instrument more valid. We propose an estimator along with
On Generating Optimal Signal Probabilities for Random Tests: A Genetic Approach
Directory of Open Access Journals (Sweden)
M. Srinivas
1996-01-01
Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in the paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient descent search, the overheads of the GA in computing the input distributions are larger.
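A minimal sketch of the GA idea, assuming a toy COP-style cost in place of a real circuit testability measure (the cost function, population sizes, and operators here are illustrative, not the paper's implementation):

```python
import random

rng = random.Random(42)

def cost(p):
    """Toy COP-style testability measure: probability that a random
    vector with signal probabilities p sensitizes a chosen path
    (inputs 0 and 1 must be 1; inputs 2 and 3 must be 0)."""
    return p[0] * p[1] * (1 - p[2]) * (1 - p[3])

def mutate(p, sigma=0.1):
    # Gaussian perturbation, clamped to valid probabilities
    return [min(1.0, max(0.0, x + rng.gauss(0, sigma))) for x in p]

def crossover(a, b):
    # uniform crossover of two probability vectors
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

pop = [[rng.random() for _ in range(4)] for _ in range(30)]
initial_best = max(cost(p) for p in pop)
for _ in range(80):
    pop.sort(key=cost, reverse=True)
    elite = pop[:10]                      # elitist selection
    pop = elite + [mutate(crossover(rng.choice(elite), rng.choice(elite)))
                   for _ in range(20)]
best = max(pop, key=cost)
assert cost(best) >= initial_best         # elitism never loses the best
assert cost(best) > 0.5                   # converges toward the corner optimum
```

The actual method optimizes per-input signal probabilities against a COP-based cost on benchmark circuits; the GA skeleton (selection, crossover, clamped mutation) is the transferable part.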
Automatic Probabilistic Program Verification through Random Variable Abstraction
Directory of Open Access Journals (Sweden)
Damián Barsotti
2010-06-01
The weakest pre-expectation calculus has been proved to be a mature theory for analyzing quantitative properties of probabilistic and nondeterministic programs. We present an automatic method for proving quantitative linear properties on any denumerable state space using iterative backwards fixed point calculation in the general framework of abstract interpretation. In order to accomplish this task we present the technique of random variable abstraction (RVA), and we also postulate a sufficient condition to achieve exact fixed point computation in the abstract domain. The feasibility of our approach is shown with two examples: one obtaining the expected running time of a probabilistic program, and the other the expected gain of a gambling strategy. Our method works on general guarded probabilistic and nondeterministic transition systems instead of plain pGCL programs, allowing us to easily model a wide range of systems, including distributed ones and unstructured programs. We present the operational and weakest precondition semantics for these programs and prove their equivalence.
Rates of profit as correlated sums of random variables
Greenblatt, R. E.
2013-10-01
Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel correlated sum of random variables statistical model was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.
Blind estimation of statistical properties of non-stationary random variables
Mansour, Ali; Mesleh, Raed; Aggoune, el-Hadi M.
2014-12-01
To identify or equalize wireless transmission channels, or alternatively to evaluate the performance of many wireless communication algorithms, the coefficients or statistical properties of the transmission channels are often assumed to be known or estimable at the receiver end. For most of the proposed algorithms, knowledge of the transmission channel's statistical properties is essential to detect signals and retrieve data. To the best of our knowledge, most proposed approaches assume that transmission channels are static and can be modeled by stationary random variables (uniform, Gaussian, exponential, Weibull, Rayleigh, etc.). In the majority of sensor network or cellular system applications, however, transmitters and/or receivers are in motion. Therefore, the assumption of static transmission channels may not be valid. In this case, coefficients and statistical properties change, and the stationary model falls short of making an accurate representation. In order to estimate the statistical properties (represented by the high-order statistics and probability density function, PDF) of dynamic channels, we first assume that the dynamic channels can be modeled by short-term stationary but long-term non-stationary random variables (RVs), i.e., the RVs are stationary within unknown successive periods but may suddenly change their statistical properties between two successive periods. This manuscript therefore proposes an algorithm to detect the transition phases of non-stationary random variables, and introduces an indicator based on high-order statistics for non-stationary transmission which can be used to track altered channel properties and initiate the estimation process. Additionally, PDF estimators based on kernel functions are also developed. The first part of the manuscript provides a brief introduction to unbiased estimators of the second and fourth-order cumulants. Then, the non-stationary indicators are formulated
On the Distribution of Indefinite Quadratic Forms in Gaussian Random Variables
Al-Naffouri, Tareq Y.
2015-10-30
© 2015 IEEE. In this work, we propose a unified approach to evaluating the CDF and PDF of indefinite quadratic forms in Gaussian random variables. Such a quantity appears in many applications in communications, signal processing, information theory, and adaptive filtering. For example, it appears in the mean-square-error (MSE) analysis of the normalized least-mean-square (NLMS) adaptive algorithm, and in the SINR associated with each beam in beamforming applications. The trick of the proposed approach is to replace the inequalities that appear in the CDF calculation with unit step functions and to use a complex integral representation of the unit step function. Complex integration then allows us to evaluate the CDF in closed form for the zero-mean case and as a one-dimensional integral for the non-zero-mean case. The saddle point technique allows us to closely approximate such integrals in the non-zero-mean case. We demonstrate how our approach can be extended to other scenarios, such as the joint distribution of quadratic forms and ratios of such forms, and to characterize quadratic forms in isotropically distributed random variables. We also evaluate the outage probability in multiuser beamforming using our approach, to provide an application of indefinite forms in communications.
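Before any closed-form machinery, the CDF of an indefinite quadratic form can be sanity-checked by Monte Carlo. A sketch, assuming the zero-mean case in which x^T A x has been diagonalized to a weighted sum of independent chi-square variables (function names are illustrative):

```python
import random

def quad_form_cdf_mc(eigs, t, n=200_000, seed=7):
    """Monte Carlo CDF of Q = sum_i lambda_i * Z_i^2 at point t, where
    the Z_i are i.i.d. standard Gaussians.  Any indefinite quadratic
    form x^T A x in zero-mean Gaussians reduces to this diagonal form
    via whitening of x and an eigendecomposition of A."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        q = sum(lam * rng.gauss(0, 1) ** 2 for lam in eigs)
        hits += q <= t
    return hits / n

# For eigenvalues (1, -1), Q = Z1^2 - Z2^2 is symmetric about 0,
# so its CDF at 0 is exactly 1/2.
est = quad_form_cdf_mc([1.0, -1.0], 0.0)
assert abs(est - 0.5) < 0.01
```

The record's contribution is replacing this sampling step by an exact contour-integral evaluation (closed form for zero mean, a one-dimensional integral otherwise).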
Events of Borel Sets, Construction of Borel Sets and Random Variables for Stochastic Finance
Directory of Open Access Journals (Sweden)
Jaeger Peter
2014-09-01
We consider special events of Borel sets with the aim of proving that the set of the irrational numbers is an event of the Borel sets. The set of the natural numbers, the set of the integer numbers and the set of the rational numbers are countable, so we can use the literature [10] (pp. 78-81) as a basis for the similar construction of the proof. Next we prove that different sets can construct the Borel sets [16] (pp. 9-10). Literature [16] (pp. 9-10) and [11] (pp. 11-12) gives an overview that there exist some other sets for this construction. Last, we define special functions as random variables for stochastic finance in discrete time. The relevant functions are implemented in the article [15], see [9] (p. 4). The aim is to construct events and random variables which can easily be used with a probability measure. See as an example theorems (10) and (14) in [20]. Then the formalization is more similar to the presentation used in the book [9]. As background, further literature includes [3] (pp. 9-12), [13] (pp. 17-20), and [8] (pp. 32-35).
Directory of Open Access Journals (Sweden)
P. Friederichs
2008-10-01
Probability distributions of multivariate random variables are generally more complex than their univariate counterparts, owing to a possible nonlinear dependence between the random variables. One approach to this problem is the use of copulas, which have become popular over recent years, especially in fields like econometrics, finance, risk management, and insurance. Since this newly emerging field includes various practices, a controversial discussion, and a vast field of literature, it is difficult to get an overview. The aim of this paper is therefore to provide a brief overview of copulas for application in meteorology and climate research. We examine the advantages and disadvantages compared to alternative approaches, such as mixture models, summarize the current problem of goodness-of-fit (GOF) tests for copulas, and discuss the connection with multivariate extremes. An application to station data shows the simplicity and the capabilities, as well as the limitations, of this approach. Observations of daily precipitation and temperature are fitted to a bivariate model and demonstrate that copulas are a valuable complement to the commonly used methods.
Directory of Open Access Journals (Sweden)
Wahner-Roedler Dietlind
2008-10-01
Abstract Background Breast cancer risk education enables women to make informed decisions regarding their options for screening and risk reduction. We aimed to determine whether patient education regarding breast cancer risk using a bar graph, with or without a frequency format diagram, improved the accuracy of risk perception. Methods We conducted a prospective, randomized trial among women at increased risk for breast cancer. The main outcome measurement was patients' estimation of their breast cancer risk before and after education with a bar graph (BG group) or a bar graph plus a frequency format diagram (BG+FF group), which was assessed by previsit and postvisit questionnaires. Results Of 150 women in the study, 74 were assigned to the BG group and 76 to the BG+FF group. Overall, 72% of women overestimated their risk of breast cancer. The improvement in accuracy of risk perception from the previsit to the postvisit questionnaire (BG group, 19% to 61%; BG+FF group, 13% to 67%) was not significantly different between the 2 groups (P = .10). Among women who inaccurately perceived very high risk (≥50% risk), inaccurate risk perception decreased significantly in the BG+FF group (22% to 3%) compared with the BG group (28% to 19%) (P = .004). Conclusion Breast cancer risk communication using a bar graph plus a frequency format diagram can improve the short-term accuracy of risk perception among women perceiving inaccurately high risk.
Migliorati, Giovanni
2015-08-28
We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both cases of bounded or square-integrable noise / offset. We prove conditions between the number of sampling points and the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including uniform and Chebyshev.
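A toy instance of the setting, assuming the approximation space is degree-one polynomials and the sampling measure is uniform on [-1, 1] (a far simpler case than the general framework analyzed here; names are illustrative):

```python
import random

def fit_line_lsq(xs, ys):
    """Ordinary least-squares fit y ≈ a + b*x via the normal equations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

rng = random.Random(3)
# noisy pointwise evaluations of f(x) = 1 + 2x at random points
# drawn from the sampling measure (uniform on [-1, 1])
xs = [rng.uniform(-1, 1) for _ in range(500)]
ys = [1 + 2 * x + rng.gauss(0, 0.1) for x in xs]
a, b = fit_line_lsq(xs, ys)
assert abs(a - 1) < 0.05 and abs(b - 2) < 0.05
```

The record's results quantify when such fits are stable and accurate as the number of random sample points grows relative to the dimension of the approximation space.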
Directory of Open Access Journals (Sweden)
Mingle Guo
2012-01-01
The complete convergence for weighted sums of sequences of negatively dependent random variables is investigated. By applying moment inequality and truncation methods, the equivalent conditions of complete convergence for weighted sums of sequences of negatively dependent random variables are established. These results not only extend the corresponding results obtained by Li et al. (1995), Gut (1993), and Liang (2000) to sequences of negatively dependent random variables, but also improve them.
Statistics of adaptive optics speckles: From probability cloud to probability density function
Yaitskova, Natalia; Gladysz, Szymon
2016-01-01
The complex amplitude in the focal plane of an adaptive optics system is modelled as an elliptical complex random variable. The geometrical properties of the probability density function of such a variable relate directly to the statistics of the residual phase. Building solely on the two-dimensional geometry, the expression for the probability density function of speckle intensity is derived.
ten Brinke, Lisanne F.; Bolandzadeh, Niousha; Nagamatsu, Lindsay S.; Hsu, Chun Liang; Davis, Jennifer C.; Miran-Khan, Karim; Liu-Ambrose, Teresa
2015-01-01
Background Mild cognitive impairment (MCI) is a well-recognized risk factor for dementia and represents a vital opportunity for intervening. Exercise is a promising strategy for combating cognitive decline, by improving both brain structure and function. Specifically, aerobic training (AT) improved spatial memory and hippocampal volume in healthy community-dwelling older adults. In older women with probable MCI, we previously demonstrated that both resistance training (RT) and AT improved memory. In this secondary analysis, we investigated: 1) the effect of both RT and AT on hippocampal volume; and 2) the association between change in hippocampal volume and change in memory. Methods Eighty-six females aged 70 to 80 years with probable MCI were randomly assigned to a six-month, twice-weekly program of: 1) AT, 2) RT, or 3) Balance and Tone Training (BAT; i.e., control). At baseline and trial completion, participants performed a 3T magnetic resonance imaging scan to determine hippocampal volume. Verbal memory and learning was assessed by Rey’s Auditory Verbal Learning Test. Results Compared with the BAT group, AT significantly improved left, right, and total hippocampal volumes (p≤0.03). After accounting for baseline cognitive function and experimental group, increased left hippocampal volume was independently associated with reduced verbal memory and learning performance as indexed by loss after interference (r=0.42, p=0.03). Conclusion Aerobic training significantly increased hippocampal volume in older women with probable MCI. More research is needed to ascertain the relevance of exercise-induced changes in hippocampal volume on memory performance in older adults with MCI. PMID:24711660
Shiryaev, Albert N
2016-01-01
This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.
A Novel Method for Increasing the Entropy of a Sequence of Independent, Discrete Random Variables
Directory of Open Access Journals (Sweden)
Mieczyslaw Jessa
2015-10-01
In this paper, we propose a novel method for increasing the entropy of a sequence of independent, discrete random variables with arbitrary distributions. The method uses an auxiliary table and a novel theorem that concerns the entropy of a sequence in which the elements are a bitwise exclusive-or sum of independent discrete random variables.
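The underlying fact, that a bitwise XOR with an independent variable cannot decrease entropy, can be checked exactly by enumeration. A small sketch (the distributions below are illustrative, not from the paper):

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits of a distribution given as {value: prob}."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def xor_dist(px, py):
    """Exact distribution of Z = X xor Y for independent X and Y."""
    pz = {}
    for x, qx in px.items():
        for y, qy in py.items():
            pz[x ^ y] = pz.get(x ^ y, 0.0) + qx * qy
    return pz

# a skewed source XORed with an independent auxiliary variable
px = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}
py = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
pz = xor_dist(px, py)
assert abs(sum(pz.values()) - 1.0) < 1e-12
# XOR is a group operation, so H(X xor Y) >= max(H(X), H(Y)):
assert entropy(pz) >= max(entropy(px), entropy(py)) - 1e-12
```

The inequality follows from H(X⊕Y) ≥ H(X⊕Y | Y) = H(X) under independence; the paper's auxiliary-table construction builds on this effect.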
Directory of Open Access Journals (Sweden)
Yongfeng Wu
2014-01-01
The authors first present a Rosenthal inequality for sequences of extended negatively dependent (END) random variables. By means of the Rosenthal inequality, the authors obtain some complete moment convergence and mean convergence results for arrays of rowwise END random variables. The results in this paper extend and improve the corresponding theorems by Hu and Taylor (1997).
Raw and Central Moments of Binomial Random Variables via Stirling Numbers
Griffiths, Martin
2013-01-01
We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
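The paper derives a recursion via Stirling numbers of the first kind; as a hedged companion, the classical identity E[X^r] = Σ_k S(r,k) (n)_k p^k, which instead uses Stirling numbers of the second kind S(r,k) and falling factorials (n)_k, gives the raw moments directly and is easy to verify against the pmf:

```python
from math import comb

def stirling2(r, k):
    """Stirling numbers of the second kind via the standard recurrence
    S(r,k) = k*S(r-1,k) + S(r-1,k-1)."""
    if r == k:
        return 1
    if k == 0 or k > r:
        return 0
    return k * stirling2(r - 1, k) + stirling2(r - 1, k - 1)

def falling(n, k):
    """Falling factorial (n)_k = n*(n-1)*...*(n-k+1)."""
    out = 1
    for i in range(k):
        out *= n - i
    return out

def binom_raw_moment(n, p, r):
    """E[X^r] for X ~ Binomial(n, p), using E[(X)_k] = (n)_k p^k."""
    return sum(stirling2(r, k) * falling(n, k) * p**k for k in range(r + 1))

# brute-force check against the binomial pmf
n, p = 10, 0.3
for r in range(1, 5):
    direct = sum(k**r * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
    assert abs(binom_raw_moment(n, p, r) - direct) < 1e-9

assert abs(binom_raw_moment(n, p, 1) - n * p) < 1e-12
```

Central moments then follow by binomial expansion of E[(X - np)^r] in terms of the raw moments.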
Directory of Open Access Journals (Sweden)
Bogdan Gheorghe Munteanu
2013-01-01
Using stochastic approximations, this paper studies the convergence in distribution of the fractional parts of sums of random variables to the truncated exponential distribution with parameter lambda. This is made feasible by means of the Fourier-Stieltjes sequence (FSS) of the random variable.
Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables
Directory of Open Access Journals (Sweden)
Jiangfeng Wang
2011-01-01
Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.
Upgrading Probability via Fractions of Events
Directory of Open Access Journals (Sweden)
Frič Roman
2016-08-01
The influence of “Grundbegriffe” by A. N. Kolmogorov (published in 1933) on education in the area of probability and its impact on research in stochastics cannot be overestimated. We would like to point out three aspects of the classical probability theory “calling for” an upgrade: (i) classical random events are black-and-white (Boolean); (ii) classical random variables do not model quantum phenomena; (iii) basic maps (probability measures and observables, the dual maps to random variables) have very different “mathematical nature”. Accordingly, we propose an upgraded probability theory based on Łukasiewicz operations (multivalued logic) on events, elementary category theory, and covering the classical probability theory as a special case. The upgrade can be compared to replacing calculations with integers by calculations with rational (and real) numbers. Namely, to avoid the three objections, we embed the classical (Boolean) random events (represented by the {0, 1}-valued indicator functions of sets) into upgraded random events (represented by measurable [0, 1]-valued functions), the minimal domain of probability containing “fractions” of classical random events, and we upgrade the notions of probability measure and random variable.
Directory of Open Access Journals (Sweden)
Nadia Mushtaq
2017-03-01
In this article, a combined general family of estimators is proposed for estimating the finite population mean of a sensitive variable in stratified random sampling with a non-sensitive auxiliary variable, based on a randomized response technique. Under a stratified random sampling without replacement scheme, expressions for the bias and mean square error (MSE) up to first-order approximation are derived. Theoretical and empirical results through a simulation study show that the proposed class of estimators is more efficient than the existing estimators, i.e., the usual stratified random sample mean estimator and the Sousa et al. (2014) ratio and regression estimators of the sensitive variable in stratified sampling.
Fractional calculus approach to the statistical characterization of random variables and vectors
Cottone, Giulio; Di Paola, Mario; Metzler, Ralf
2010-03-01
Fractional moments have been investigated by many authors to represent the density of univariate and bivariate random variables in different contexts. Fractional moments are indeed important when the density of the random variable has inverse power-law tails and, consequently, it lacks integer order moments. In this paper, starting from the Mellin transform of the characteristic function and by fractional calculus method we present a new perspective on the statistics of random variables. Introducing the class of complex moments, that include both integer and fractional moments, we show that every random variable can be represented within this approach, even if its integer moments diverge. Applications to the statistical characterization of raw data and in the representation of both random variables and vectors are provided, showing that the good numerical convergence makes the proposed approach a good and reliable tool also for practical data analysis.
A New Estimator For Population Mean Using Two Auxiliary Variables in Stratified random Sampling
Singh, Rajesh; Malik, Sachin
2014-01-01
In this paper, we suggest an estimator using two auxiliary variables in stratified random sampling. The proposed estimator improves upon the mean per unit estimator as well as some other considered estimators. Expressions for the bias and MSE of the estimator are derived up to the first degree of approximation. Moreover, these theoretical findings are supported by a numerical example with original data. Key words: Study variable, auxiliary variable, stratified random sampling, bias and mean squa...
Concentrated Hitting Times of Randomized Search Heuristics with Variable Drift
DEFF Research Database (Denmark)
Lehre, Per Kristian; Witt, Carsten
2014-01-01
Drift analysis is one of the state-of-the-art techniques for the runtime analysis of randomized search heuristics (RSHs) such as evolutionary algorithms (EAs), simulated annealing etc. The vast majority of existing drift theorems yield bounds on the expected value of the hitting time for a target...
Some limit theorems for negatively associated random variables
Indian Academy of Sciences (India)
Let {Xn, n ≥ 1} be a sequence of negatively associated random variables. The aim of this paper is to establish some limit theorems for negatively associated sequences, including the Lp-convergence theorem and the Marcinkiewicz–Zygmund strong law of large numbers. Furthermore, we consider the strong law of ...
Local search methods based on variable focusing for random K -satisfiability
Lemoy, Rémi; Alava, Mikko; Aurell, Erik
2015-01-01
We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered where variables are selected uniformly at random or with a bias towards picking variables participating in several unsatisfied clauses. These are studied for the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferentially picking variables in several unsatisfied clauses. Consequences for algorithmic design are discussed.
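A minimal focused random walk for 3-SAT conveys the flavor of this family of algorithms (an illustrative sketch, not the authors' V-FMS: the Metropolis acceptance rule and the bias toward variables in several unsatisfied clauses are omitted here).

```python
import random

# Focused random-walk SAT sketch: repeatedly pick a random unsatisfied
# clause and flip one of its variables, chosen uniformly at random.
def focused_walk(clauses, n_vars, max_flips=10_000, seed=0):
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars)]

    def satisfied(clause):
        # A positive literal v (1-based) is true if assign[v-1]; -v if not.
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                  # all clauses satisfied
        clause = rng.choice(unsat)         # focus on an unsatisfied clause
        var = abs(rng.choice(clause))      # flip one of its variables
        assign[var - 1] = not assign[var - 1]
    return None

# A small satisfiable 3-SAT instance in DIMACS-style literal notation.
clauses = [(1, 2, 3), (-1, 2, 3), (1, -2, 3), (1, 2, -3), (-1, -2, 3)]
solution = focused_walk(clauses, n_vars=3)
print(solution)
```

The variable-focused variants of the abstract would instead rank the candidate variables by how many unsatisfied clauses they appear in before choosing which one to flip.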
An introduction to probability and stochastic processes
Melsa, James L
2013-01-01
Geared toward college seniors and first-year graduate students, this text is designed for a one-semester course in probability and stochastic processes. Topics covered in detail include probability theory, random variables and their functions, stochastic processes, linear system response to stochastic processes, Gaussian and Markov processes, and stochastic differential equations. 1973 edition.
Energy Technology Data Exchange (ETDEWEB)
Nechaev, Sergei [Laboratoire de Physique Theorique et Modeles Statistiques, Universite Paris Sud, 91405 Orsay Cedex (France); Voituriez, Raphael [Laboratoire de Physique Theorique et Modeles Statistiques, Universite Paris Sud, 91405 Orsay Cedex (France)
2003-01-10
We investigate the statistical properties of random walks on the simplest nontrivial braid group B_3, and on related hyperbolic groups. We provide a method using Cayley graphs of groups allowing us to compute explicitly the probability distribution of the basic statistical characteristics of random trajectories - the drift and the return probability. The action of the groups under consideration in the hyperbolic plane is investigated, and the distribution of a geometric invariant - the hyperbolic distance - is analysed. It is shown that a random walk on B_3 can be viewed as a 'magnetic random walk' on the group PSL(2, Z).
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
National Research Council Canada - National Science Library
Bogdan Gheorghe Munteanu
2013-01-01
Using stochastic approximations, this paper studies the convergence in distribution of the fractional parts of the sum of random variables to the truncated exponential distribution with parameter lambda...
Hung, Tran Loc; Giang, Le Truong
2016-01-01
Using the Stein-Chen method some upper bounds in Poisson approximation for distributions of row-wise triangular arrays of independent negative-binomial distributed random variables are established in this note.
Lapko, A. V.; Lapko, V. A.; Yuronen, E. A.
2016-11-01
A new technique for testing the hypothesis of independence of random variables is proposed. It is based on a nonparametric pattern-recognition algorithm and does not require discretization of the domain of values of the random variables.
A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.
Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco
2005-02-01
Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n(g) approximately 100-150 genes). In the present investigation, a large random European population sample (average n(g) approximately 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.
Separating variability in healthcare practice patterns from random error.
Thomas, Laine E; Schulte, Phillip J
2018-01-01
Improving the quality of care that patients receive is a major focus of clinical research, particularly in the setting of cardiovascular hospitalization. Quality improvement studies seek to estimate and visualize the degree of variability in dichotomous treatment patterns and outcomes across different providers, whereby naive techniques either over-estimate or under-estimate the actual degree of variation. Various statistical methods have been proposed for similar applications including (1) the Gaussian hierarchical model, (2) the semi-parametric Bayesian hierarchical model with a Dirichlet process prior and (3) the non-parametric empirical Bayes approach of smoothing by roughening. Alternatively, we propose that a recently developed method for density estimation in the presence of measurement error, moment-adjusted imputation, can be adapted for this problem. The methods are compared by an extensive simulation study. In the present context, we find that the Bayesian methods are sensitive to the choice of prior and tuning parameters, whereas moment-adjusted imputation performs well with modest sample size requirements. The alternative approaches are applied to identify disparities in the receipt of early physician follow-up after myocardial infarction across 225 hospitals in the CRUSADE registry.
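The shrinkage intuition behind these hierarchical approaches can be sketched with a simple empirical-Bayes calculation (illustrative only: the data, the shrinkage constant m, and the estimator are invented here and are not the paper's moment-adjusted imputation method). Raw provider rates scatter more widely than the true rates, and pulling each rate toward the pooled mean by an amount depending on the provider's sample size reduces that excess variability.

```python
# Hypothetical provider-level data: (number of patients, number receiving
# the treatment of interest).  The shrinkage constant m below is an
# illustrative choice, not an estimate from any real registry.
data = [(20, 6), (50, 30), (10, 1), (200, 110), (35, 14), (8, 7)]

total_n = sum(n for n, _ in data)
overall = sum(x for _, x in data) / total_n     # pooled treatment rate

m = 25          # prior "pseudo-sample size" (assumed, for illustration)
raw    = [x / n for n, x in data]
shrunk = [(x + m * overall) / (n + m) for n, x in data]

def variance(v):
    mu = sum(v) / len(v)
    return sum((t - mu) ** 2 for t in v) / len(v)

print(variance(raw), variance(shrunk))
```

Each shrunken rate is a weighted average of the raw rate and the pooled rate, with weight n/(n+m) on the data, so small providers are pulled hardest toward the overall mean.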
The National Aquatic Resource Surveys (NARS) use probability-survey designs to assess the condition of the nation’s waters. In probability surveys (also known as sample-surveys or statistical surveys), sampling sites are selected randomly.
Directory of Open Access Journals (Sweden)
Maximiano Pinheiro
2012-01-01
Marginal probability density and cumulative distribution functions are presented for multidimensional variables defined by nonsingular affine transformations of vectors of independent two-piece normal variables, the most important subclass of Ferreira and Steel's general multivariate skewed distributions. The marginal functions are obtained by first expressing the joint density as a mixture of Arellano-Valle and Azzalini's unified skew-normal densities and then using the property of closure under marginalization of the latter class.
DEFF Research Database (Denmark)
Kessler, Timo Christian; Nilsson, Bertel; Klint, Knud Erik
2010-01-01
The construction of detailed geological models for heterogeneous settings such as clay till is important to describe transport processes, particularly with regard to potential contamination pathways. In low-permeability clay matrices transport is controlled by diffusion, but fractures and sand ... of sand-lenses in clay till. Sand-lenses mainly account for horizontal transport and are prioritised in this study. Based on field observations, the distribution has been modeled using two different geostatistical approaches. One method uses a Markov chain model calculating the transition probabilities ...
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-11-01
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response; and a 10^{-4} probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
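A classical order-statistic fact gives a feel for the sample sizes involved in this kind of conservative tail estimation (a generic textbook calculation, not one of the specific methods assessed in the report): the largest of n i.i.d. samples exceeds the p-quantile of the source distribution with probability 1 - p^n, which can be inverted to find the n needed for a given confidence.

```python
import math

# Nonparametric tolerance-bound fact: for n i.i.d. samples, the sample
# maximum exceeds the p-quantile of the underlying distribution with
# probability 1 - p**n, regardless of the distribution's shape.
def samples_for_upper_bound(p, confidence):
    """Smallest n with P(max >= p-quantile) >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(p))

# Bounding the 95th percentile with 90% confidence:
n = samples_for_upper_bound(0.95, 0.90)
print(n)  # -> 45
```

Distribution-free bounds like this one are deliberately conservative; the report's study asks how to be conservative without being overly so.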
Trigila, Alessandro; Iadanza, Carla; Esposito, Carlo; Scarascia-Mugnozza, Gabriele
2015-04-01
The first phase of the work aimed to identify the spatial relationships between landslide locations and 13 related factors using the Frequency Ratio bivariate statistical method. The analysis was then carried out by adopting a multivariate statistical approach, using the Logistic Regression technique and the Random Forests technique, which gave the best results in terms of AUC. The models were performed and evaluated with different sample sizes, also taking into account the temporal variation of input variables such as areas burned by wildfire. The most significant outcomes of this work are the relevant influence of the sample size on the model results and the strong importance of some environmental factors (e.g. land use and wildfires) for the identification of the depletion zones of extremely rapid shallow landslides.
Energy Technology Data Exchange (ETDEWEB)
Lehua Pan; G.S. Bodvarsson
2001-10-22
Multiscale features of transport processes in fractured porous media make numerical modeling a difficult task, both in conceptualization and computation. Modeling the mass transfer through the fracture-matrix interface is one of the critical issues in the simulation of transport in a fractured porous medium. Because conventional dual-continuum-based numerical methods are unable to capture the transient features of the diffusion depth into the matrix (unless they assume a passive matrix medium), such methods will overestimate the transport of tracers through the fractures, especially for the cases with large fracture spacing, resulting in artificial early breakthroughs. We have developed a new method for calculating the particle-transfer probability that can capture the transient features of diffusion depth into the matrix within the framework of the dual-continuum random-walk particle method (RWPM) by introducing a new concept of activity range of a particle within the matrix. Unlike the multiple-continuum approach, the new dual-continuum RWPM does not require using additional grid blocks to represent the matrix. It does not assume a passive matrix medium and can be applied to the cases where global water flow exists in both continua. The new method has been verified against analytical solutions for transport in the fracture-matrix systems with various fracture spacing. The calculations of the breakthrough curves of radionuclides from a potential repository to the water table in Yucca Mountain demonstrate the effectiveness of the new method for simulating 3-D, mountain-scale transport in a heterogeneous, fractured porous medium under variably saturated conditions.
Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas
2017-04-15
The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
van der Zwan, Judith Esi; de Vente, Wieke; Huizink, Anja C.; Bögels, Susan M.; de Bruin, Esther I.
2015-01-01
In contemporary western societies stress is highly prevalent; therefore, the need for stress-reducing methods is great. This randomized controlled trial compared the efficacy of self-help physical activity (PA), mindfulness meditation (MM), and heart rate variability biofeedback (HRV-BF) in reducing stress and its related symptoms. We randomly allocated 126 participants to PA, MM, or HRV-BF upon enrollment, of whom 76 agreed to participate. The interventions consisted of psycho-education and a...
Yeh, Tzu-Sheng; Lee, Shen-Ming
2006-01-01
An optimal stopping rule is a rule that stops the sampling process at a sample size n that maximizes the expected reward. In this paper we study the approximation to the optimal stopping rule for Gumbel random variables, because the Gumbel-type distribution is the most commonly encountered in discussions of extreme values. Let $X_1, X_2, \cdots, X_n, \cdots$ be independent, identically distributed Gumbel random variables with unknown location and scale parameters, $\alpha$ and $\beta$. If we def...
$\\Phi$-moment inequalities for independent and freely independent random variables
Jiao, Yong; Sukochev, Fedor; Xie, Guangheng; Zanin, Dmitriy
2016-01-01
This paper is devoted to the study of $\\Phi$-moments of sums of independent/freely independent random variables. More precisely, let $(f_k)_{k=1}^n$ be a sequence of positive (symmetrically distributed) independent random variables and let $\\Phi$ be an Orlicz function with $\\Delta_2$-condition. We provide an equivalent expression for the quantity $\\mathbb{E}(\\Phi(\\sum_{k=1}^n f_k))$ in term of the sum of disjoint copies of the sequence $(f_k)_{k=1}^n.$ We also prove an analogous result in the...
Directory of Open Access Journals (Sweden)
Correchel Vladia
2005-01-01
The precision of the 137Cs fallout redistribution technique for the evaluation of soil erosion rates is strongly dependent on the quality of an average inventory taken at a representative reference site. The knowledge of the sources and of the degree of variation of the 137Cs fallout spatial distribution plays an important role in its use. Four reference sites were selected in the South-Central region of Brazil and characterized in terms of soil chemical, physical and mineralogical aspects as well as the spatial variability of 137Cs inventories. Some important differences in the patterns of 137Cs depth distribution in the soil profiles of the different sites were found. They are probably associated with chemical, physical, mineralogical and biological differences of the soils, but many questions still remain open for future investigation, mainly those regarding the adsorption and dynamics of 137Cs ions in soil profiles under tropical conditions. The random spatial variability (inside each reference site) was higher than the systematic spatial variability (between reference sites), but their causes were not clearly identified as possible consequences of chemical, physical, or mineralogical variability, and/or precipitation.
Higher order moments of a sum of random variables: remarks and applications.
Directory of Open Access Journals (Sweden)
Luisa Tibiletti
1996-02-01
The moments of a sum of random variables depend both on the pure moments of each random addendum and on the mixed moments of the addenda. In this note we introduce a simple measure to evaluate the relative importance to attach to the latter. Once the pure moments are fixed, the functional relation between the random addenda leading to the extreme values is also provided. Applications to Finance, Decision Theory and Actuarial Sciences are also suggested.
Introduction to probability and statistics for science, engineering, and finance
Rosenkrantz, Walter A
2008-01-01
Data Analysis Orientation The Role and Scope of Statistics in Science and Engineering Types of Data: Examples from Engineering, Public Health, and Finance The Frequency Distribution of a Variable Defined on a Population Quantiles of a Distribution Measures of Location (Central Value) and Variability Covariance, Correlation, and Regression: Computing a Stock's Beta Mathematical Details and Derivations Large Data Sets Probability Theory Orientation Sample Space, Events, Axioms of Probability Theory Mathematical Models of Random Sampling Conditional Probability and Baye
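The covariance-based calculation mentioned in the chapter outline ("Computing a Stock's Beta") reduces to a one-line ratio; the return series below are invented for illustration.

```python
# A stock's beta is the slope of the regression of its returns on the
# market's returns: beta = Cov(stock, market) / Var(market).
market = [0.01, -0.02, 0.015, 0.03, -0.01, 0.005]
stock  = [0.012, -0.035, 0.020, 0.048, -0.018, 0.010]

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    # Sample covariance (denominator len - 1); cov(a, a) is the variance.
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

beta = cov(stock, market) / cov(market, market)
print(round(beta, 3))
```

A beta above 1 indicates the stock amplifies market movements, which is the case for these made-up returns.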
Nam, Sung Sik; Yang, Hong-Chuan
2010-01-01
Order statistics find applications in various areas of communications and signal processing. In this paper, we introduce a unified analytical framework to determine the joint statistics of partial sums of ordered random variables (RVs). With the proposed approach, we can systematically derive the joint statistics of any partial sums of ordered statistics, in terms of the moment generating function (MGF) and the probability density function (PDF). Our MGF-based approach applies not only when all the K ordered RVs are involved but also when only the Ks (Ks < K) best RVs are considered. In addition, we present the closed-form expressions for the exponential RV special case. These results apply to the performance analysis of various wireless communication systems over fading channels.
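For the exponential special case mentioned in the abstract, one well-known closed-form consequence is easy to cross-check by simulation (a sanity check on a standard order-statistics identity, not the paper's MGF-based derivations).

```python
import random

# For K i.i.d. Exp(1) variables, the expected sum of the k largest order
# statistics has the closed form
#   k * (1 + H_K - H_k),   where H_m = 1 + 1/2 + ... + 1/m.
K, k = 5, 2
H = lambda m: sum(1.0 / i for i in range(1, m + 1))
exact = k * (1 + H(K) - H(k))

rng = random.Random(7)
n = 200_000
mc = 0.0
for _ in range(n):
    draws = sorted(rng.expovariate(1.0) for _ in range(K))
    mc += sum(draws[-k:])                # the k largest order statistics
mc /= n

print(round(exact, 4), round(mc, 4))
```

Selecting the Ks best of K branches in a diversity combiner is exactly this "sum of the k largest" quantity, which is why such partial sums arise in fading-channel analysis.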
Eisinga, R.N.; Grotenhuis, H.F. te; Pelzer, B.J.
2013-01-01
We discuss saddlepoint approximations to the distribution of the sum of independent non-identically distributed binomial random variables. We examine the accuracy of the saddlepoint methods for a sum of 10 binomials with different sets of parameter values. The numerical results indicate that the
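For parameter sets small enough to enumerate, the distribution that such saddlepoint methods approximate can be computed exactly by convolution, which is how approximation accuracy is typically judged (a generic baseline sketch with illustrative parameters, not the paper's saddlepoint code).

```python
from math import comb

# Exact PMF of a sum of independent, non-identically distributed binomial
# variables via direct convolution.
def binom_pmf(n, p):
    return [comb(n, x) * p**x * (1 - p) ** (n - x) for x in range(n + 1)]

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

params = [(10, 0.1), (8, 0.5), (12, 0.9)]   # illustrative (n_i, p_i)
pmf = [1.0]
for n, p in params:
    pmf = convolve(pmf, binom_pmf(n, p))

mean = sum(x * q for x, q in enumerate(pmf))
print(round(sum(pmf), 6), round(mean, 6))   # total mass 1, mean = sum n_i p_i
```

The saddlepoint approach becomes attractive precisely when the n_i are too large for this quadratic-cost convolution to be practical.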
Bounds for right tails of deterministic and stochastic sums of random variables
Darkiewicz, G.; Deelstra, G.; Dhaene, J.; Hoedemakers, T.; Vanmaele, M.
2009-01-01
We investigate lower and upper bounds for right tails (stop-loss premiums) of deterministic and stochastic sums of nonindependent random variables. The bounds are derived using the concepts of comonotonicity, convex order, and conditioning. The performance of the presented approximations is
Kapwata, Thandi; Gebreslasie, Michael T
2016-11-16
Malaria is an environmentally driven disease. In order to quantify the spatial variability of malaria transmission, it is imperative to understand the interactions between environmental variables and malaria epidemiology at a micro-geographic level using a novel statistical approach. The random forest (RF) statistical learning method, a relatively new variable-importance ranking method, measures the variable importance of potentially influential parameters through the percent increase of the mean squared error. As this value increases, so does the relative importance of the associated variable. The principal aim of this study was to create predictive malaria maps generated using the selected variables based on the RF algorithm in the Ehlanzeni District of Mpumalanga Province, South Africa. From the seven environmental variables used [temperature, lag temperature, rainfall, lag rainfall, humidity, altitude, and the normalized difference vegetation index (NDVI)], altitude was identified as the most influential predictor variable due to its high selection frequency. It was selected as the top predictor for 4 out of 12 months of the year, followed by NDVI, temperature and lag rainfall, which were each selected twice. The combination of climatic variables that produced the highest prediction accuracy was altitude, NDVI, and temperature. This suggests that these three variables have high predictive capabilities in relation to malaria transmission. Furthermore, it is anticipated that the predictive maps generated from predictions made by the RF algorithm could be used to monitor the progression of malaria and assist in intervention and prevention efforts with respect to malaria.
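The percent-increase-in-MSE importance measure described in the abstract above can be sketched in a few lines. To stay self-contained, the example applies permutation importance to a plain least-squares model rather than an actual random forest, on synthetic data in which x1 is informative and x2 is pure noise.

```python
import random

# Permutation variable importance: shuffle one predictor, remeasure the
# model's MSE, and report the relative increase over the baseline MSE.
rng = random.Random(1)
N = 500
x1 = [rng.uniform(-1, 1) for _ in range(N)]
x2 = [rng.uniform(-1, 1) for _ in range(N)]
y  = [2.0 * a + rng.gauss(0, 0.3) for a in x1]   # only x1 drives y

def fit_ols(u, v, t):
    # Solve the 2x2 normal equations for t ~ b1*u + b2*v.
    suu = sum(a * a for a in u); svv = sum(b * b for b in v)
    suv = sum(a * b for a, b in zip(u, v))
    sut = sum(a * c for a, c in zip(u, t))
    svt = sum(b * c for b, c in zip(v, t))
    det = suu * svv - suv * suv
    return (svv * sut - suv * svt) / det, (suu * svt - suv * sut) / det

b1, b2 = fit_ols(x1, x2, y)

def mse(u, v):
    return sum((c - (b1 * a + b2 * b)) ** 2
               for a, b, c in zip(u, v, y)) / N

base = mse(x1, x2)
p1 = x1[:]; rng.shuffle(p1)
p2 = x2[:]; rng.shuffle(p2)
imp1 = (mse(p1, x2) - base) / base      # large: x1 matters
imp2 = (mse(x1, p2) - base) / base      # near zero: x2 is noise
print(round(imp1, 2), round(imp2, 2))
```

In a real RF workflow the shuffling is done on out-of-bag data for each tree, but the ranking logic is the same: permuting an informative variable inflates the error, permuting a noise variable barely moves it.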
Directory of Open Access Journals (Sweden)
Gabriel Rodríguez
2016-06-01
Following Xu and Perron (2014), I applied the extended RLS model to the daily stock market returns of Argentina, Brazil, Chile, Mexico and Peru. This model replaces the constant probability of level shifts for the entire sample with varying probabilities that record periods with extremely negative returns. Furthermore, it incorporates a mean reversion mechanism with which the magnitude and the sign of the level shift component vary in accordance with past level shifts that deviate from the long-term mean. Therefore, four RLS models are estimated: the basic RLS, the RLS with varying probabilities, the RLS with mean reversion, and a combined RLS model with mean reversion and varying probabilities. The results show that the estimated parameters are highly significant, especially those of the mean reversion model. An analysis of ARFIMA and GARCH models is also performed in the presence of level shifts, which shows that once these shifts are taken into account in the modeling, the long memory characteristics and GARCH effects disappear. I also find that the predictive performance of the RLS models is superior to that of classic long-memory models such as the ARFIMA(p,d,q), GARCH and FIGARCH models. The evidence indicates that, with rare exceptions, the RLS models (in all their variants) show the best performance or belong to the 10% of the Model Confidence Set (MCS). On rare occasions the GARCH and ARFIMA models appear to dominate. When volatility is measured by squared returns, the great exception is Argentina, where GARCH and FIGARCH models dominate.
Partial summations of stationary sequences of non-Gaussian random variables
DEFF Research Database (Denmark)
Mohr, Gunnar; Ditlevsen, Ove Dalager
1996-01-01
The distribution of the sum of a finite number of identically distributed random variables is in many cases easily determined given that the variables are independent. The moments of any order of the sum can always be expressed by the moments of the single term without computational problems ... However, in the case of dependency between the terms, even the calculation of a few of the first moments of the sum presents serious computational problems. By use of computerized symbol manipulations it is practicable to obtain exact moments of partial sums of stationary sequences of mutually dependent ... of convergence of the distribution of a sum (or an integral) of mutually dependent random variables to the Gaussian distribution. The paper is closely related to the work in Ditlevsen et al. [Ditlevsen, O., Mohr, G. & Hoffmeyer, P., Integration of non-Gaussian fields, Prob. Engng Mech 11 (1996) 15-23].
Bhattacharyya, Pratip; Chakrabarti, Bikas K.
2008-01-01
We study different ways of determining the mean distance r_n between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
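Two standard instances of these maximum-entropy results can be verified directly from closed-form entropies (textbook facts restated for illustration): the Gaussian maximizes differential entropy among densities with a fixed L2 norm, and the Laplace density does so for a fixed L1 norm.

```python
import math

# Closed-form differential entropies of the two maximizers.
def h_gauss(sigma):                  # 0.5 * ln(2*pi*e*sigma^2)
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)

def h_laplace(b):                    # 1 + ln(2b)
    return 1 + math.log(2 * b)

# Equal second moment E[X^2] = 1: the Laplace density needs 2*b^2 = 1.
print(h_gauss(1.0) > h_laplace(1 / math.sqrt(2)))        # True

# Equal E|X| = 1: the Gaussian needs sigma * sqrt(2/pi) = 1.
print(h_laplace(1.0) > h_gauss(math.sqrt(math.pi / 2)))  # True
```

Each comparison holds the relevant Lp norm fixed and shows the named maximizer beating the other family, consistent with the tabulated results described in the abstract.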
Garborg, Kjetil; Wiig, Håvard; Hasund, Audun; Matre, Jon; Holme, Øyvind; Noraberg, Geir; Løberg, Magnus; Kalager, Mette; Adami, Hans-Olov; Bretthauer, Michael
2017-02-01
Colonoscopes with gradual stiffness have recently been developed to enhance cecal intubation. We aimed to determine if the performance of gradual stiffness colonoscopes is noninferior to that of magnetic endoscopic imaging (MEI)-guided variable stiffness colonoscopes. Consecutive patients were randomized to screening colonoscopy with Fujifilm gradual stiffness or Olympus MEI-guided variable stiffness colonoscopes. The primary endpoint was cecal intubation rate (noninferiority limit 5%). Secondary endpoints included cecal intubation time. We estimated absolute risk differences with 95% confidence intervals (CIs). We enrolled 475 patients: 222 randomized to the gradual stiffness instrument, and 253 to the MEI-guided variable stiffness instrument. Cecal intubation rate was 91.7% in the gradual stiffness group versus 95.6% in the variable stiffness group. The adjusted absolute risk for cecal intubation failure was 4.3% higher in the gradual stiffness group than in the variable stiffness group (upper CI border 8.1%). Median cecal intubation time was 13 minutes in the gradual stiffness group and 10 minutes in the variable stiffness group (p < 0.001). The study is inconclusive with regard to noninferiority because the 95% CI for the difference in cecal intubation rate between the groups crosses the noninferiority margin. (ClinicalTrials.gov identifier: NCT01895504).
Wang, Kezhi
2015-06-01
Exact results for the probability density function (PDF) and cumulative distribution function (CDF) of the sum of ratios of products (SRP) and the sum of products (SP) of independent α-μ random variables (RVs) are derived. They are in the form of a 1-D integral based on existing works on the products and ratios of α-μ RVs. In the derivation, a generalized Gamma (GG) ratio approximation (GGRA) is proposed to approximate the SRP. A Gamma ratio approximation (GRA) is proposed to approximate the SRP and the ratio of sums of products (RSP). A GG approximation (GGA) and a Gamma approximation (GA) are used to approximate the SP. The proposed results for the SRP can be used to calculate the outage probability (OP) for wireless multihop relaying systems or multiple-scattering channels with interference. The proposed results for the SP can be used to calculate the OP for these systems without interference. In addition, the proposed approximate result for the RSP can be used to calculate the OP of the signal-to-interference ratio (SIR) in a multiple-scattering system with interference. © 1967-2012 IEEE.
Directory of Open Access Journals (Sweden)
Zhou Sheng Jie
2016-01-01
Full Text Available A MAC protocol for public bus networks, called the Bus MAC protocol, is designed to provide high-quality Internet service for bus passengers. The paper proposes a multi-channel, dual-clock, three-dimensional probability random multiple access protocol based on the RTS/CTS mechanism, decreasing collisions caused by multiple access from multiple passengers. Using the RTS/CTS mechanism increases the reliability and stability of the system, reduces the collision probability of information packets to a certain extent, and improves channel utilization; using the multi-channel mechanism not only enables channel load balancing but also solves the hidden-terminal and exposed-terminal problems; using the dual-clock mechanism reduces the system idle time. Finally, appropriate selection of the three-dimensional probabilities can make the system throughput adapt to the network load, realizing the maximum system throughput.
Probability on real Lie algebras
Franz, Uwe
2016-01-01
This monograph is a progressive introduction to non-commutativity in probability theory, summarizing and synthesizing recent results about classical and quantum stochastic processes on Lie algebras. In the early chapters, focus is placed on concrete examples of the links between algebraic relations and the moments of probability distributions. The subsequent chapters are more advanced and deal with Wigner densities for non-commutative couples of random variables, non-commutative stochastic processes with independent increments (quantum Lévy processes), and the quantum Malliavin calculus. This book will appeal to advanced undergraduate and graduate students interested in the relations between algebra, probability, and quantum theory. It also addresses a more advanced audience by covering other topics related to non-commutativity in stochastic calculus, Lévy processes, and the Malliavin calculus.
Effects of Yoga on Heart Rate Variability and Mood in Women: A Randomized Controlled Trial.
Chu, I-Hua; Lin, Yuh-Jen; Wu, Wen-Lan; Chang, Yu-Kai; Lin, I-Mei
2015-12-01
To examine the effects of an 8-week yoga program on heart rate variability and mood in generally healthy women. Randomized controlled trial. Fifty-two healthy women were randomly assigned to a yoga group or a control group. Participants in the yoga group completed an 8-week yoga program, which comprised a 60-minute session twice a week. Each session consisted of breathing exercises, yoga pose practice, and supine meditation/relaxation. The control group was instructed not to engage in any yoga practice and to maintain their usual level of physical activity during the study. Participants' heart rate variability, perceived stress, depressive symptoms, and state and trait anxiety were assessed at baseline (week 0) and after the intervention (week 9). No measures of heart rate variability changed significantly in either the yoga or control group after intervention. State anxiety was reduced significantly in the yoga group but not in the control group. No significant changes were noted in perceived stress, depression, or trait anxiety in either group. An 8-week yoga program was not sufficient to improve heart rate variability. However, such a program appears to be effective in reducing state anxiety in generally healthy women. Future research should involve longer periods of yoga training, include heart rate variability measures both at rest and during yoga practice, and enroll women with higher levels of stress and trait anxiety.
El-Melegy, Moumen T
2013-07-01
This paper addresses the problem of fitting a functional model to data corrupted with outliers using a multilayered feed-forward neural network. Although it is of high importance in practical applications, this problem has not received careful attention from the neural network research community. One recent approach to solving this problem is to use a neural network training algorithm based on the random sample consensus (RANSAC) framework. This paper proposes a new algorithm that offers two enhancements over the original RANSAC algorithm. The first one improves the algorithm accuracy and robustness by employing an M-estimator cost function to decide on the best estimated model from the randomly selected samples. The other one improves the time performance of the algorithm by utilizing a statistical pretest based on Wald's sequential probability ratio test. The proposed algorithm is successfully evaluated on synthetic and real data, contaminated with varying degrees of outliers, and compared with existing neural network training algorithms.
Quantum Probability, Orthogonal Polynomials and Quantum Field Theory
Accardi, Luigi
2017-03-01
The main thesis of the present paper is that Quantum Probability is not a generalization of classical probability, but a deeper level of it. Classical random variables have an intrinsic (microscopic) non-commutative structure that generalizes usual quantum theory. The study of this generalization is the core of the non-linear quantization program.
Özel, Gamze
2015-01-01
In this paper, a new exponential-type estimator is developed in stratified random sampling for the population mean using auxiliary variable information. In order to evaluate the efficiency of the introduced estimator, we first review some estimators and study the optimum property of the suggested strategy. To judge the merits of the suggested class of estimators over others under the optimal condition, a simulation study and real data applications are conducted. The results show that the introduc...
Directory of Open Access Journals (Sweden)
Qunying Wu
2017-05-01
In this paper, we study the equivalent conditions of complete moment convergence for sequences of identically distributed extended negatively dependent random variables. As a result, we extend and generalize some results on complete moment convergence obtained by Chow (Bull. Inst. Math. Acad. Sin. 16:177-201, 1988) and Li and Spătaru (J. Theor. Probab. 18:933-947, 2005) from the i.i.d. case to extended negatively dependent sequences.
An Edgeworth expansion for a sum of m-dependent random variables
Directory of Open Access Journals (Sweden)
Wan Soo Rhee
1985-01-01
Given a sequence X1, X2, …, Xn of m-dependent random variables with moments of order 3+α (0 < α ≤ 1), we give an Edgeworth expansion of the distribution of Sσ^(-1) (where S = X1 + X2 + … + Xn and σ² = ES²) under the assumption that E[exp(itSσ^(-1))] is small away from the origin. The result is of the best possible order.
Residual and Past Entropy for Concomitants of Ordered Random Variables of Morgenstern Family
Directory of Open Access Journals (Sweden)
M. M. Mohie EL-Din
2015-01-01
For a system observed at time t, the residual and past entropies measure the uncertainty about the remaining and the past life of the distribution, respectively. In this paper, we present the residual and past entropy of the Morgenstern family based on the concomitants of the different types of generalized order statistics (gos) and give the linear transformation of such a model. Characterization results for these dynamic entropies for concomitants of ordered random variables are also considered.
The effect of cluster size variability on statistical power in cluster-randomized trials.
Directory of Open Access Journals (Sweden)
Stephen A Lauer
The frequency of cluster-randomized trials (CRTs) in peer-reviewed literature has increased exponentially over the past two decades. CRTs are a valuable tool for studying interventions that cannot be effectively implemented or randomized at the individual level. However, some aspects of the design and analysis of data from CRTs are more complex than those for individually randomized controlled trials. One of the key components of designing a successful CRT is calculating the proper sample size (i.e., the number of clusters) needed to attain an acceptable level of statistical power. In order to do this, a researcher must make assumptions about the value of several variables, including a fixed mean cluster size. In practice, cluster size can often vary dramatically. Few studies account for the effect of cluster size variation when assessing the statistical power for a given trial. We conducted a simulation study to investigate how the statistical power of CRTs changes with variable cluster sizes. In general, we observed that increases in cluster size variability lead to a decrease in power.
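The power loss described above can also be approximated analytically with a design effect adjusted for cluster-size variation, DEFF = 1 + ((CV² + 1)·m̄ − 1)·ρ, a commonly used correction. The sketch below (all parameter values are illustrative, not from the study) shows power dropping as the coefficient of variation of cluster size grows:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def crt_power(k, m_bar, cv, icc, effect, z_alpha=1.96):
    """Approximate power of a two-arm CRT for a standardized effect size,
    with k clusters per arm of mean size m_bar, intracluster correlation
    icc, and cluster-size coefficient of variation cv."""
    deff = 1.0 + ((cv ** 2 + 1.0) * m_bar - 1.0) * icc
    se = math.sqrt(2.0 * deff / (k * m_bar))  # SE of the difference in means
    return norm_cdf(effect / se - z_alpha)

p_equal = crt_power(k=20, m_bar=30, cv=0.0, icc=0.05, effect=0.25)
p_variable = crt_power(k=20, m_bar=30, cv=0.8, icc=0.05, effect=0.25)
```

With these illustrative numbers, moving from equal clusters to CV = 0.8 costs roughly 10-15 percentage points of power.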
Laze, Kuenda
2016-08-01
Modelling of land use may be improved by incorporating the results of species distribution modelling, and species distribution modelling may in turn be improved by including a process-based variable such as forest cover change or the accessibility of forest from human settlements. This work presents the results of spatially explicit analyses of the changes in forest cover from 2000 to 2007 using the method of Geographically Weighted Regression (GWR), and of the species distributions of the protected species Lynx lynx martinoi and Ursus arctos using Generalized Linear Models (GLMs). The methodological approach is to separately search for a parsimonious model of forest cover change and of species distribution for the entire territory of Albania. The findings of this work show that modelling of land change and of species distribution is indeed value-added, yielding better model-selection scores on the corrected Akaike Information Criterion. These results provide evidence of the effects of process-based variables on species distribution modelling and on the performance of species distribution modelling, and show an example of the incorporation of estimated probabilities of species occurrence in a land change model.
Burgess, Stephen; Daniel, Rhian M; Butterworth, Adam S; Thompson, Simon G
2015-04-01
Mendelian randomization uses genetic variants, assumed to be instrumental variables for a particular exposure, to estimate the causal effect of that exposure on an outcome. If the instrumental variable criteria are satisfied, the resulting estimator is consistent even in the presence of unmeasured confounding and reverse causation. We extend the Mendelian randomization paradigm to investigate more complex networks of relationships between variables, in particular where some of the effect of an exposure on the outcome may operate through an intermediate variable (a mediator). If instrumental variables for the exposure and mediator are available, direct and indirect effects of the exposure on the outcome can be estimated, for example using either a regression-based method or structural equation models. The direction of effect between the exposure and a possible mediator can also be assessed. Methods are illustrated in an applied example considering causal relationships between body mass index, C-reactive protein and uric acid. These estimators are consistent in the presence of unmeasured confounding if, in addition to the instrumental variable assumptions, the effects of both the exposure on the mediator and the mediator on the outcome are homogeneous across individuals and linear without interactions. Nevertheless, a simulation study demonstrates that even considerable heterogeneity in these effects does not lead to bias in the estimates. These methods can be used to estimate direct and indirect causal effects in a mediation setting, and have potential for the investigation of more complex networks between multiple interrelated exposures and disease outcomes. © The Author 2014. Published by Oxford University Press on behalf of the International Epidemiological Association.
Choice Probability Generating Functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel
This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications.
Dixon, Padraig; Davey Smith, George; von Hinke, Stephanie; Davies, Neil M; Hollingworth, William
2016-11-01
Accurate measurement of the marginal healthcare costs associated with different diseases and health conditions is important, especially for increasingly prevalent conditions such as obesity. However, existing observational study designs cannot identify the causal impact of disease on healthcare costs. This paper explores the possibilities for causal inference offered by Mendelian randomization, a form of instrumental variable analysis that uses genetic variation as a proxy for modifiable risk exposures, to estimate the effect of health conditions on cost. Well-conducted genome-wide association studies provide robust evidence of the associations of genetic variants with health conditions or disease risk factors. The subsequent causal effects of these health conditions on cost can be estimated using genetic variants as instruments for the health conditions. This is because the approximately random allocation of genotypes at conception means that many genetic variants are orthogonal to observable and unobservable confounders. Datasets with linked genotypic and resource use information obtained from electronic medical records or from routinely collected administrative data are now becoming available and will facilitate this form of analysis. We describe some of the methodological issues that arise in this type of analysis, which we illustrate by considering how Mendelian randomization could be used to estimate the causal impact of obesity, a complex trait, on healthcare costs. We describe some of the data sources that could be used for this type of analysis. We conclude by considering the challenges and opportunities offered by Mendelian randomization for economic evaluation.
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
Continuous-time random-walk model of transport in variably saturated heterogeneous porous media.
Zoia, Andrea; Néel, Marie-Christine; Cortis, Andrea
2010-03-01
We propose a unified physical framework for transport in variably saturated porous media. This approach allows fluid flow and solute migration to be treated as ensemble averages of fluid and solute particles, respectively. We consider the cases of homogeneous and heterogeneous porous materials. Within a fractal mobile-immobile continuous time random-walk framework, the heterogeneity will be characterized by algebraically decaying particle retention times. We derive the corresponding (nonlinear) continuum-limit partial differential equations and we compare their solutions to Monte Carlo simulation results. The proposed methodology is fairly general and can be used to track fluid and solute particle trajectories for a variety of initial and boundary conditions.
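A toy version of such a mobile-immobile walk can be sketched with heavy-tailed (algebraically decaying) waiting times between Gaussian jumps. This is an illustrative sketch under arbitrary parameter choices, not the authors' model:

```python
import random

def ctrw_positions(n_particles=2000, t_max=100.0, alpha=1.5, seed=1):
    """Continuous-time random walk: each particle makes unit-variance
    Gaussian jumps separated by Pareto(alpha) waiting times, mimicking
    algebraically decaying retention times."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n_particles):
        t, x = 0.0, 0.0
        while True:
            wait = (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto, wait >= 1
            if t + wait > t_max:
                break  # particle is still trapped at time t_max
            t += wait
            x += rng.gauss(0.0, 1.0)
        positions.append(x)
    return positions

pos = ctrw_positions()
mean_x = sum(pos) / len(pos)
msd = sum(x * x for x in pos) / len(pos)  # mean squared displacement
```

For 1 < alpha < 2 the mean waiting time is finite but its variance is not; pushing alpha below 1 makes trapping dominant and the spreading subdiffusive.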
Choice probability generating functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
2013-01-01
This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended…
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, there has been considerable recent work on consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
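The basic instrumental variable logic the paper builds on can be illustrated with a single instrument and the Wald ratio: under unmeasured confounding, ordinary least squares is biased while the instrument-based ratio recovers the causal effect. This is a generic simulated sketch with made-up coefficients, not the generalized least squares estimator proposed in the paper:

```python
import random

random.seed(0)
n = 200_000
beta = 1.0  # true causal effect of exposure x on outcome y

z, x, y = [], [], []
for _ in range(n):
    u = random.gauss(0.0, 1.0)                   # unmeasured confounder
    g = random.choice([0, 1, 2])                 # genetic instrument (allele count)
    xi = 1.0 * g + u + random.gauss(0.0, 1.0)    # exposure depends on g and u
    yi = beta * xi + u + random.gauss(0.0, 1.0)  # outcome depends on x and u
    z.append(g); x.append(xi); y.append(yi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)  # biased upward by the confounder u
iv = cov(z, y) / cov(z, x)   # Wald ratio: consistent for beta
```

Because the instrument g is independent of the confounder u, the ratio of instrument-outcome to instrument-exposure covariances isolates the causal coefficient.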
Nam, Sungsik
2014-08-01
The joint statistics of partial sums of ordered random variables (RVs) are often needed for the accurate performance characterization of a wide variety of wireless communication systems. A unified analytical framework to determine the joint statistics of partial sums of ordered independent and identically distributed (i.i.d.) random variables was recently presented. However, the identical distribution assumption may not be valid in several real-world applications. With this motivation in mind, we consider in this paper the more general case in which the random variables are independent but not necessarily identically distributed (i.n.d.). More specifically, we extend the previous analysis and introduce a new more general unified analytical framework to determine the joint statistics of partial sums of ordered i.n.d. RVs. Our mathematical formalism is illustrated with an application on the exact performance analysis of the capture probability of generalized selection combining (GSC)-based RAKE receivers operating over frequency-selective fading channels with a non-uniform power delay profile. © 1991-2012 IEEE.
van der Zwan, Judith Esi; de Vente, Wieke; Huizink, Anja C; Bögels, Susan M; de Bruin, Esther I
2015-12-01
In contemporary western societies stress is highly prevalent; therefore, the need for stress-reducing methods is great. This randomized controlled trial compared the efficacy of self-help physical activity (PA), mindfulness meditation (MM), and heart rate variability biofeedback (HRV-BF) in reducing stress and its related symptoms. We randomly allocated 126 participants to PA, MM, or HRV-BF upon enrollment, of whom 76 agreed to participate. The interventions consisted of psycho-education and an introduction to the specific intervention techniques and 5 weeks of daily exercises at home. The PA exercises consisted of a vigorous-intensity activity of free choice. The MM exercises consisted of guided mindfulness meditation. The HRV-BF exercises consisted of slow breathing with a heart rate variability biofeedback device. Participants received daily reminders for their exercises and were contacted weekly to monitor their progress. They completed questionnaires prior to, directly after, and 6 weeks after the intervention. Results indicated an overall beneficial effect consisting of reduced stress, anxiety and depressive symptoms, and improved psychological well-being and sleep quality. No significant between-intervention effect was found, suggesting that PA, MM, and HRV-BF are equally effective in reducing stress and its related symptoms. These self-help interventions provide easily accessible help for people with stress complaints.
Directory of Open Access Journals (Sweden)
Dug Hun Hong
1999-01-01
Let {Xij} be a double sequence of pairwise independent random variables. If P{|Xmn| ≥ t} ≤ P{|X| ≥ t} for all nonnegative real numbers t and E|X|^p (log^+ |X|)^3 < ∞, for 1 … random variables under the conditions E|X|^p (log^+ |X|)^(r+1) < ∞ and E|X|^p (log^+ |X|)^(r−1) < ∞, respectively, thus extending Choi and Sung's result [1] from the one-dimensional case.
Directory of Open Access Journals (Sweden)
Lindsay S. Nagamatsu
2013-01-01
We report secondary findings from a randomized controlled trial on the effects of exercise on memory in older adults with probable MCI. We randomized 86 women aged 70–80 years with subjective memory complaints into one of three groups: resistance training, aerobic training, or balance and tone (control). All participants exercised twice per week for six months. We measured verbal memory and learning using the Rey Auditory Verbal Learning Test (RAVLT) and spatial memory using a computerized test, before and after trial completion. We found that the aerobic training group remembered significantly more items in the loss-after-interference condition of the RAVLT compared with the control group after six months of training. In addition, both experimental groups showed improved spatial memory performance in the most difficult condition, where they were required to memorize the spatial location of three items, compared with the control group. Lastly, we found a significant correlation between spatial memory performance and overall physical capacity after intervention in the aerobic training group. Taken together, our results provide support for the prevailing notion that exercise can positively impact cognitive functioning and may represent an effective strategy to improve memory in those who have begun to experience cognitive decline.
DEFF Research Database (Denmark)
Asmussen, Søren; Albrecher, Hansjörg
The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle, and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Lévy processes, Gerber–Shiu functions, and dependence.
Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...
Directory of Open Access Journals (Sweden)
Laila Mohamed Khodeir
2015-04-01
The aim of this paper is to identify the latest top major risk probabilities in construction projects in Egypt according to political and economic variables between January 2011 and January 2013. Risks were prioritized according to the significance of their influence and their sources, whether internal or external. The paper describes, on the basis of a questionnaire survey of project management practices, the construction risks, which are generally perceived as events that influence the project objectives of cost, time, and quality. A statistical analysis was carried out in order to identify the top major construction project risks. It is concluded that although risk factors vary considerably across industries and countries, the study of risk management for construction projects will provide a reference for other projects that might be executed in similar circumstances. The paper ends by suggesting the risk response strategies appropriate for each type of identified risk. The research findings will contribute to both practice and research in risk management for the Egyptian construction industry and will also provide valuable information for international companies which intend to deliver construction projects in Egypt.
Probability theory a comprehensive course
Klenke, Achim
2014-01-01
This second edition of the popular textbook contains a comprehensive course in modern probability theory. Overall, probabilistic concepts play an increasingly important role in mathematics, physics, biology, financial engineering and computer science. They help us in understanding magnetism, amorphous media, genetic diversity and the perils of random developments at financial markets, and they guide us in constructing more efficient algorithms. To address these concepts, the title covers a wide variety of topics, many of which are not usually found in introductory textbooks, such as: • limit theorems for sums of random variables • martingales • percolation • Markov chains and electrical networks • construction of stochastic processes • Poisson point process and infinite divisibility • large deviation principles and statistical physics • Brownian motion • stochastic integral and stochastic differential equations. The theory is developed rigorously and in a self-contained way, with the c...
Introduction to probability with statistical applications
Schay, Géza
2016-01-01
Now in its second edition, this textbook serves as an introduction to probability and statistics for non-mathematics majors who do not need the exhaustive detail and mathematical depth provided in more comprehensive treatments of the subject. The presentation covers the mathematical laws of random phenomena, including discrete and continuous random variables, expectation and variance, and common probability distributions such as the binomial, Poisson, and normal distributions. More classical examples such as Montmort's problem, the ballot problem, and Bertrand’s paradox are now included, along with applications such as the Maxwell-Boltzmann and Bose-Einstein distributions in physics. Key features in new edition: * 35 new exercises * Expanded section on the algebra of sets * Expanded chapters on probabilities to include more classical examples * New section on regression * Online instructors' manual containing solutions to all exercises
Directory of Open Access Journals (Sweden)
Franciele R Figueira
To evaluate the effects of aerobic (AER) or aerobic plus resistance exercise (COMB) sessions on glucose levels and glucose variability in patients with type 2 diabetes. Additionally, we assessed conventional and non-conventional methods to analyze glucose variability derived from multiple measurements performed with a continuous glucose monitoring system (CGMS). Fourteen patients with type 2 diabetes (56±2 years) wore a CGMS during 3 days. Participants randomly performed AER and COMB sessions, both in the morning (24 h after CGMS placement) and at least 7 days apart. Glucose variability was evaluated by glucose standard deviation, glucose variance, mean amplitude of glycemic excursions (MAGE), and glucose coefficient of variation (conventional methods), as well as by spectral and symbolic analysis (non-conventional methods). Baseline fasting glycemia was 139±05 mg/dL and HbA1c 7.9±0.7%. Glucose levels decreased immediately after the AER and COMB protocols by ∼16%, which was sustained for approximately 3 hours. Comparing the two exercise modalities, responses over a 24-h period after the sessions were similar for glucose levels, glucose variance, and glucose coefficient of variation. In the symbolic analysis, increases in the 0V pattern (COMB, 67.0±7.1 vs. 76.0±6.3, P = 0.003) and decreases in the 1V pattern (COMB, 29.1±5.3 vs. 21.5±5.1, P = 0.004) were observed only after the COMB session. Both AER and COMB exercise modalities reduce glucose levels similarly for a short period of time. The use of non-conventional analysis indicates reduction of glucose variability after a single session of combined exercises. Aerobic training, aerobic-resistance training and glucose profile (CGMS) in type 2 diabetes (CGMS exercise). ClinicalTrials.gov ID: NCT00887094.
Directory of Open Access Journals (Sweden)
Mário Mestria
2014-11-01
Full Text Available In this paper, we propose new heuristic methods for solving the Clustered Traveling Salesman Problem (CTSP). The CTSP is a generalization of the Traveling Salesman Problem (TSP) in which the set of vertices is partitioned into disjoint clusters and the objective is to find a minimum cost Hamiltonian cycle such that the vertices of each cluster are visited contiguously. We develop two heuristics based on Variable Neighborhood Random Descent with Iterated Local Search for solving the CTSP. The proposed heuristic methods were tested on instance types with data at different levels of granularity for the number of vertices and clusters. The computational results showed that the heuristic methods outperform recent methods from the literature and are competitive with an exact algorithm using the Parallel CPLEX software.
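The local-search component of heuristics like these can be illustrated with plain 2-opt on an ordinary (unclustered) TSP instance. This is only a sketch of the general idea, not the paper's VND/ILS method, and the random points are made up:

```python
import math
import random

def tour_length(tour, pts):
    # Total length of the closed tour visiting pts in the given order.
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    # Repeatedly reverse a segment whenever doing so shortens the tour.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts):
                    tour, improved = cand, True
    return tour

rng = random.Random(3)
pts = [(rng.random(), rng.random()) for _ in range(12)]  # toy instance
start = list(range(12))
best = two_opt(start, pts)
```

A VND scheme would cycle through several such neighborhoods (e.g., segment reversal, vertex relocation, cluster exchange) instead of just one.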
Visualization techniques for spatial probability density function data
Directory of Open Access Journals (Sweden)
Udeepta D Bordoloi
2006-01-01
Full Text Available Novel visualization methods are presented for spatial probability density function data. These are spatial datasets in which each pixel is a random variable with multiple samples, the results of experiments on that random variable. We use clustering as a means to reduce the information contained in these datasets, and present two different ways of interpreting and clustering the data. The clustering methods are applied to two datasets, and the results are discussed with the help of visualization techniques designed for spatial probability data.
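As a hedged illustration of the clustering idea, the sketch below summarizes each pixel's samples by a (mean, standard deviation) feature vector and runs a tiny two-means clustering on synthetic data. The feature choice, seeding strategy, and data are assumptions of this example, not the paper's methods:

```python
import random
import statistics

def summarize(samples):
    # Reduce each pixel's sample set to a (mean, sd) feature vector.
    return (statistics.mean(samples), statistics.stdev(samples))

def kmeans2(points, iters=25):
    # Two-means clustering with deterministic far-apart seeds.
    centers = [min(points), max(points)]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            assign[i] = 0 if d[0] <= d[1] else 1
        for j in (0, 1):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centers[j] = tuple(statistics.mean(dim) for dim in zip(*members))
    return assign, centers

rng = random.Random(0)
# Two synthetic "regions" of pixels: 30 samples each from N(0,1) vs. N(5,3).
pixels = ([[rng.gauss(0, 1) for _ in range(30)] for _ in range(20)] +
          [[rng.gauss(5, 3) for _ in range(30)] for _ in range(20)])
labels, centers = kmeans2([summarize(s) for s in pixels])
```

In a real spatial dataset each per-pixel sample set would come from repeated experiments, and richer summaries (histograms, quantiles) could replace the two-number feature.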
Quantum Probabilities as Behavioral Probabilities
Directory of Open Access Journals (Sweden)
Vyacheslav I. Yukalov
2017-03-01
Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.
Multiple decomposability of probabilities on contractible locally ...
Indian Academy of Sciences (India)
Definition 3.1). As mentioned before, μ is n-times τ-decomposable iff μ has a representation as an (n + 1)-times iterated convolution product. To be allowed to … Then the classical version of the equivalence theorem holds: If νi, i ≥ 0, ν, are probabilities and Xi, i ≥ 0, Y are independent G-valued random variables with …
Chu, I-Hua; Wu, Wen-Lan; Lin, I-Mei; Chang, Yu-Kai; Lin, Yuh-Jen; Yang, Pin-Chen
2017-04-01
The purpose of the study was to investigate the effects of a 12-week yoga program on heart rate variability (HRV) and depressive symptoms in depressed women. This was a randomized controlled trial. Twenty-six sedentary women scoring ≥14 on the Beck Depression Inventory-II were randomized to either the yoga or the control group. The yoga group completed a 12-week yoga program, which took place twice a week for 60 min per session and consisted of breathing exercises, yoga pose practice, and supine meditation/relaxation. The control group was instructed not to engage in any yoga practice and to maintain their usual level of physical activity during the course of the study. Participants' HRV, depressive symptoms, and perceived stress were assessed at baseline and post-test. The yoga group had a significant increase in high-frequency HRV and decreases in low-frequency HRV and low frequency/high frequency ratio after the intervention. The yoga group also reported significantly reduced depressive symptoms and perceived stress. No change was found in the control group. A 12-week yoga program was effective in increasing parasympathetic tone and reducing depressive symptoms and perceived stress in women with elevated depressive symptoms. Regular yoga practice may be recommended for women to cope with their depressive symptoms and stress and to improve their HRV.
Sawane, Manish Vinayak; Gupta, Shilpa Sharad
2015-01-01
Resting heart rate variability (HRV) is a measure of the modulation of the autonomic nervous system (ANS) at rest. Increased HRV achieved through exercise is beneficial for cardiovascular health. However, prospective studies comparing the effects of yogic exercises with those of other endurance exercises such as walking, running, and swimming on resting HRV are conspicuous by their absence. The study was designed to assess and compare the effects of yogic training and swimming on resting HRV in normal healthy young volunteers. The study was conducted in the Department of Physiology of a medical college as a prospective randomized comparative trial. One hundred sedentary volunteers were randomly assigned to either the yoga or the swimming group. Baseline recordings of digital electrocardiograms were done for all subjects in cohorts of 10. After 12 weeks of yoga training or swimming, resting HRV was evaluated again. The percentage change in each parameter with yoga and swimming was compared using the unpaired t-test for normally distributed data and the Mann-Whitney U test for data without normal distribution. Most HRV parameters improved statistically significantly with both modalities of exercise. However, some HRV parameters showed statistically better improvement with yoga than with swimming. Yoga thus appears to be the mode of exercise with better improvement in autonomic function, as suggested by resting HRV.
S Varadhan, S R
2001-01-01
This volume presents topics in probability theory covered during a first-year graduate course given at the Courant Institute of Mathematical Sciences. The necessary background material in measure theory is developed, including the standard topics, such as extension theorem, construction of measures, integration, product spaces, Radon-Nikodym theorem, and conditional expectation. In the first part of the book, characteristic functions are introduced, followed by the study of weak convergence of probability distributions. Then both the weak and strong limit theorems for sums of independent rando
Considerations on probability: from games of chance to modern science
Directory of Open Access Journals (Sweden)
Paola Monari
2015-12-01
Full Text Available The article sets out a number of considerations on the distinction between variability and uncertainty over the centuries. Games of chance have always been useful random experiments which through combinatorial calculation have opened the way to probability theory and to the interpretation of modern science through statistical laws. The article also looks briefly at the stormy nineteenth-century debate concerning the definitions of probability which went over the same grounds – sometimes without any historical awareness – as the debate which arose at the very beginnings of probability theory, when the great probability theorists were open to every possible meaning of the term.
Tchetgen Tchetgen, Eric J; Wirth, Kathleen E
2017-02-23
The instrumental variable (IV) design is a well-known approach for unbiased evaluation of causal effects in the presence of unobserved confounding. In this article, we study the IV approach to account for selection bias in regression analysis with outcome missing not at random. In such a setting, a valid IV is a variable which (i) predicts the nonresponse process, and (ii) is independent of the outcome in the underlying population. We show that under the additional assumption (iii) that the IV is independent of the magnitude of selection bias due to nonresponse, the population regression in view is nonparametrically identified. For point estimation under (i)-(iii), we propose a simple complete-case analysis which modifies the regression of primary interest by carefully incorporating the IV to account for selection bias. The approach is developed for the identity, log and logit link functions. For inferences about the marginal mean of a binary outcome assuming (i) and (ii) only, we describe novel and approximately sharp bounds which, unlike Robins-Manski bounds, are smooth in model parameters, therefore allowing for a straightforward approach to account for uncertainty due to sampling variability. These bounds provide a more honest account of uncertainty and allow one to assess the extent to which a violation of the key identifying condition (iii) might affect inferences. For illustration, the methods are used to account for selection bias induced by HIV testing nonparticipation in the evaluation of HIV prevalence in the Zambian Demographic and Health Surveys. © 2017, The International Biometric Society.
Jones, Salene M W; Guthrie, Katherine A; Reed, Susan D; Landis, Carol A; Sternfeld, Barbara; LaCroix, Andrea Z; Dunn, Andrea; Burr, Robert L; Newton, Katherine M
2016-06-01
Heart rate variability (HRV) reflects the integration of the parasympathetic nervous system with the rest of the body. Studies on the effects of yoga and exercise on HRV have been mixed but suggest that exercise increases HRV. We conducted a secondary analysis of the effect of yoga and exercise on HRV based on a randomized clinical trial of treatments for vasomotor symptoms in peri/post-menopausal women. Randomized clinical trial of behavioral interventions in women with vasomotor symptoms (n=335), 40-62 years old, from three clinical study sites. 12 weeks of a yoga program, designed specifically for mid-life women, or a supervised aerobic exercise-training program with specific intensity and energy expenditure goals, compared to a usual-activity group. Time and frequency domain HRV measured at baseline and at 12 weeks for 15 min using Holter monitors. Women had a median of 7.6 vasomotor symptoms per 24 h. Time and frequency domain HRV measures did not change significantly in either of the intervention groups compared to the change in the usual-activity group. HRV results did not differ when the analyses were restricted to post-menopausal women. Although yoga and exercise have been shown to increase parasympathetic-mediated HRV in other populations, neither intervention increased HRV in middle-aged women with vasomotor symptoms. Mixed results in previous research may be due to sample differences. Yoga and exercise likely improve short-term health in middle-aged women through mechanisms other than HRV. Copyright © 2016 Elsevier Ltd. All rights reserved.
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
Probability, statistics, and reliability for engineers and scientists
Ayyub, Bilal M
2012-01-01
Introduction: Knowledge, Information, and Opinions; Ignorance and Uncertainty; Aleatory and Epistemic Uncertainties in System Abstraction; Characterizing and Modeling Uncertainty; Simulation for Uncertainty Analysis and Propagation; Simulation Projects. Data Description and Treatment: Introduction; Classification of Data; Graphical Description of Data; Histograms and Frequency Diagrams; Descriptive Measures; Applications; Analysis of Simulated Data; Simulation Projects. Fundamentals of Probability: Introduction; Sets, Sample Spaces, and Events; Mathematics of Probability; Random Variables and Their Proba
How Far Is Quasar UV/Optical Variability from a Damped Random Walk at Low Frequency?
Guo, Hengxiao; Wang, Junxian; Cai, Zhenyi; Sun, Mouyuan
2017-10-01
Studies have shown that UV/optical light curves of quasars can be described using the prevalent damped random walk (DRW) model, also known as the Ornstein-Uhlenbeck process. A white noise power spectral density (PSD) is expected at low frequency in this model; however, a direct observational constraint to the low-frequency PSD slope is difficult due to the limited lengths of the light curves available. Meanwhile, quasars show scatter in their DRW parameters that is too large to be attributed to uncertainties in the measurements and dependence on the variation of known physical factors. In this work we present simulations showing that, if the low-frequency PSD deviates from the DRW, the red noise leakage can naturally produce large scatter in the variation parameters measured from simulated light curves. The steeper the low-frequency PSD slope, the larger scatter we expect. Based on observations of SDSS Stripe 82 quasars, we find that the low-frequency PSD slope should be no steeper than -1.3. The actual slope could be flatter, which consequently requires that the quasar variabilities should be influenced by other unknown factors. We speculate that the magnetic field and/or metallicity could be such additional factors.
Log-concave Probability Distributions: Theory and Statistical Testing
DEFF Research Database (Denmark)
An, Mark Yuing
1996-01-01
This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete...
RIJCKEN, B; SCHOUTEN, JP; WEISS, ST; ROSNER, B; DEVRIES, K; VANDERLENDE, R
1993-01-01
Long-term variability of bronchial responsiveness has been studied in a random population sample of adults. During a follow-up period of 18 yr, 2,216 subjects contributed 5,012 observations to the analyses. Each subject could have as many as seven observations. Bronchial responsiveness was assessed
USING THE WEB-SERVICES WOLFRAM|ALPHA TO SOLVE PROBLEMS IN PROBABILITY THEORY
Directory of Open Access Journals (Sweden)
Taras Kobylnyk
2015-10-01
Full Text Available The trend towards the use of remote network resources on the Internet is clearly delineated. Traditional training is increasingly combined with networked, remote technologies, and cloud computing has become popular. Methods of probability theory are used in various fields, in particular in psychological and educational research for the statistical analysis of experimental data. Conducting such research is impossible without the use of modern information technology. Given the advantages of web-based software, the article describes the web service Wolfram|Alpha and analyzes in detail the possibilities of using it to solve problems in probability theory. Case studies present the results of queries for problems in probability theory, in particular in the sections on random events and random variables. The problem of the number of occurrences of an event A in n independent trials is considered and analyzed using Wolfram|Alpha. The possibilities of using the service to study a continuous random variable with a normal or uniform probability distribution are analyzed in detail, including calculating the probability that the random variable falls in a given interval. Problems applying the binomial and hypergeometric probability distributions of a discrete random variable are also considered, demonstrating the possibility of using the service to solve them.
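The kinds of computations described here (interval probabilities for a normal variable, binomial probabilities for n independent trials) can also be checked locally with a few lines of standard-library Python. The numbers below are generic textbook examples, not queries taken from the article:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # CDF of N(mu, sigma^2) via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def normal_interval(a, b, mu=0.0, sigma=1.0):
    # P(a <= X <= b) for X ~ N(mu, sigma^2).
    return normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)

def binomial_pmf(k, n, p):
    # P(exactly k occurrences of event A in n independent trials).
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

print(normal_interval(-1, 1))    # ≈ 0.6827 (the one-sigma rule)
print(binomial_pmf(3, 10, 0.5))  # C(10,3)/2^10 = 0.1171875
```

The same queries typed into Wolfram|Alpha in natural language should return matching values.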
Probability theory and statistical applications a profound treatise for self-study
Zörnig, Peter
2016-01-01
This accessible and easy-to-read book provides many examples to illustrate diverse topics in probability and statistics, from initial concepts up to advanced calculations. Special attention is devoted, e.g., to independence of events, inequalities in probability, and functions of random variables. The book is directed to students of mathematics, statistics, engineering, and other quantitative sciences.
Deal, J. H.
1975-01-01
One approach to simplifying complex nonlinear filtering algorithms is the use of stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
Spieth, Peter M; Güldner, Andreas; Uhlig, Christopher; Bluth, Thomas; Kiss, Thomas; Schultz, Marcus J; Pelosi, Paolo; Koch, Thea; Gama de Abreu, Marcelo
2014-05-02
General anesthesia usually requires mechanical ventilation, which is traditionally accomplished with constant tidal volumes in volume- or pressure-controlled modes. Experimental studies suggest that the use of variable tidal volumes (variable ventilation) recruits lung tissue, improves pulmonary function and reduces the systemic inflammatory response. However, it is currently not known whether patients undergoing open abdominal surgery might benefit from intraoperative variable ventilation. The PROtective VARiable ventilation trial ('PROVAR') is a single center, randomized controlled trial enrolling 50 patients who are scheduled for open abdominal surgery expected to last longer than 3 hours. PROVAR compares conventional (non-variable) lung protective ventilation (CV) with variable lung protective ventilation (VV) regarding pulmonary function and inflammatory response. The primary endpoint of the study is the forced vital capacity on the first postoperative day. Secondary endpoints include further lung function tests, plasma cytokine levels, spatial distribution of ventilation assessed by means of electrical impedance tomography and postoperative pulmonary complications. We hypothesize that VV improves lung function and reduces the systemic inflammatory response compared to CV in patients receiving mechanical ventilation during general anesthesia for open abdominal surgery lasting longer than 3 hours. PROVAR is the first randomized controlled trial aiming at intra- and postoperative effects of VV on lung function. This study may help to define the role of VV during general anesthesia requiring mechanical ventilation. Clinicaltrials.gov NCT01683578 (registered on September 3, 2012).
Choice probability generating functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
2010-01-01
This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models.
Measurement uncertainty and probability
National Research Council Canada - National Science Library
Willink, Robin
2013-01-01
... and probability models; 3.4 Inference and confidence; 3.5 Two central limit theorems; 3.6 The Monte Carlo method and process simulation; 4 The randomization of systematic errors; 4.1 The Working Group of 1980; 4.2 From classical repetition to practica...
Isaac, Richard
1995-01-01
The ideas of probability are all around us. Lotteries, casino gambling, the almost non-stop polling which seems to mold public policy more and more: these are a few of the areas where principles of probability impinge in a direct way on the lives and fortunes of the general public. At a more removed level there is modern science, which uses probability and its offshoots, like statistics and the theory of random processes, to build mathematical descriptions of the real world. In fact, twentieth-century physics, in embracing quantum mechanics, has a world view that is at its core probabilistic in nature, contrary to the deterministic one of classical physics. In addition to all this muscular evidence of the importance of probability ideas it should also be said that probability can be lots of fun. It is a subject where you can start thinking about amusing, interesting, and often difficult problems with very little mathematical background. In this book, I wanted to introduce a reader with at least a fairl...
DEFF Research Database (Denmark)
Nielsen, Erland Hejn
2003-01-01
of the estimation of the probability of critically delayed delivery beyond a specified threshold value given a certain production batch size, and try to establish a relation to certain parameters that can be linked to the degree of regularity of the arrival stream of parts to the job/flow-shop. This last aspect relates remotely to the Lean Thinking philosophy, which praises the smooth and uninterrupted production flow as beneficial to the overall operation of productive plants in general, and we will link our findings to this discussion as well.
DEFF Research Database (Denmark)
Nielsen, Erland Hejn
2003-01-01
this embedded in a multi-part production set-up. Standard simulation methodology relies heavily on the Central Limit Theorem, but as powerful as this statistical concept might be, it has its pitfalls which, as will be shown in this work, can be quite deceptive and consequently harmful. We will focus our discussion on the estimation of the probability of critically delayed delivery beyond a specified threshold value given a certain production batch size, and try to establish a relation to certain parameters that can be linked to the degree of regularity of the arrival stream of parts to the job/flow-shop. This last aspect
Kiss, Thomas; Güldner, Andreas; Bluth, Thomas; Uhlig, Christopher; Spieth, Peter Markus; Markstaller, Klaus; Ullrich, Roman; Jaber, Samir; Santos, Jose Alberto; Mancebo, Jordi; Camporota, Luigi; Beale, Richard; Schettino, Guilherme; Saddy, Felipe; Vallverdú, Immaculada; Wiedemann, Bärbel; Koch, Thea; Schultz, Marcus Josephus; Pelosi, Paolo; de Abreu, Marcelo Gama
2013-10-31
In pressure support ventilation (PSV), a non-variable level of pressure support is delivered by the ventilator when triggered by the patient. In contrast, variable PSV delivers a level of pressure support that varies in a random fashion, introducing more physiological variability to the respiratory pattern. Experimental studies show that variable PSV improves gas exchange, reduces lung inflammation and the mean pressure support, compared to non-variable PSV. Thus, it can theoretically shorten weaning from the mechanical ventilator. The ViPS (variable pressure support) trial is an international investigator-initiated multicenter randomized controlled open trial comparing variable vs. non-variable PSV. Adult patients on controlled mechanical ventilation for more than 24 hours who are ready to be weaned are eligible for the study. The randomization sequence is blocked per center and performed using a web-based platform. Patients are randomly assigned to one of the two groups: variable PSV or non-variable PSV. In non-variable PSV, breath-by-breath pressure support is kept constant and targeted to achieve a tidal volume of 6 to 8 ml/kg. In variable PSV, the mean pressure support level over a specific time period is targeted at the same mean tidal volume as non-variable PSV, but individual levels vary randomly breath-by-breath. The primary endpoint of the trial is the time to successful weaning, defined as the time from randomization to successful extubation. ViPS is the first randomized controlled trial investigating whether variable, compared to non-variable PSV, shortens the duration of weaning from mechanical ventilation in a mixed population of critically ill patients. This trial aims to determine the role of variable PSV in the intensive care unit. clinicaltrials.gov NCT01769053.
National Research Council Canada - National Science Library
Baoheng Gui; Jesse Slone; Taosheng Huang
2017-01-01
Several factors have been proposed as contributors to interfamilial and intrafamilial phenotypic variability in autosomal dominant disorders, including allelic variation, modifier genes, environmental...
Genetic variability of cultivated cowpea in Benin assessed by random amplified polymorphic DNA
Zannou, A.; Kossou, D.K.; Ahanchédé, A.; Zoundjihékpon, J.; Agbicodo, E.; Struik, P.C.; Sanni, A.
2008-01-01
Characterization of genetic diversity among cultivated cowpea [Vigna unguiculata (L.) Walp.] varieties is important to optimize the use of available genetic resources by farmers, local communities, researchers and breeders. Random amplified polymorphic DNA (RAPD) markers were used to evaluate the
Directory of Open Access Journals (Sweden)
Ming He
2015-11-01
Full Text Available We propose a random effects panel data model with both spatially correlated error components and spatially lagged dependent variables. We focus on diagnostic testing procedures and derive Lagrange multiplier (LM) test statistics for a variety of hypotheses within this model. We first construct the joint LM test for both the individual random effects and the two spatial effects (spatial error correlation and spatial lag dependence). We then provide LM tests for the individual random effects and for the two spatial effects separately. In addition, in order to guard against local model misspecification, we derive locally adjusted (robust) LM tests based on the Bera and Yoon principle (Bera and Yoon, 1993). We conduct a small Monte Carlo simulation to show the good finite sample performance of these LM test statistics and revisit the cigarette demand example in Baltagi and Levin (1992) to illustrate our testing procedures.
Directory of Open Access Journals (Sweden)
Penzlin AI
2015-10-01
Full Text Available Ana Isabel Penzlin,1 Timo Siepmann,2 Ben Min-Woo Illigens,3 Kerstin Weidner,4 Martin Siepmann4 1Institute of Clinical Pharmacology, 2Department of Neurology, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Saxony, Germany; 3Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA; 4Department of Psychotherapy and Psychosomatic Medicine, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Saxony, Germany Background and objective: In patients with alcohol dependence, ethyl-toxic damage of vasomotor and cardiac autonomic nerve fibers leads to autonomic imbalance with neurovascular and cardiac dysfunction, the latter resulting in reduced heart rate variability (HRV). Autonomic imbalance is linked to increased craving and cardiovascular mortality. In this study, we sought to assess the effects of HRV biofeedback training on HRV, vasomotor function, craving, and anxiety. Methods: We conducted a randomized controlled study in 48 patients (14 females, ages 25–59 years) undergoing inpatient rehabilitation treatment. In the treatment group, patients (n=24) attended six sessions of HRV biofeedback over 2 weeks in addition to standard rehabilitative care, whereas in the control group, subjects received standard care only. Psychometric testing for craving (Obsessive Compulsive Drinking Scale) and anxiety (Symptom Checklist-90-Revised), HRV assessment using coefficient of variation of R-R intervals (CVNN) analysis, and vasomotor function assessment using laser Doppler flowmetry were performed at baseline, immediately after completion of the treatment or control period, and 3 and 6 weeks afterward (follow-ups 1 and 2). Results: Psychometric testing showed decreased craving in the biofeedback group immediately postintervention (OCDS scores: 8.6±7.9 post-biofeedback versus 13.7±11.0 baseline [mean ± standard deviation], P<0.05), whereas craving was unchanged at
Hallas, Jesper; Pottegård, Anton; Støvring, Henrik
2017-12-01
In register-based pharmacoepidemiological studies, each day of follow-up is usually categorized either as exposed or unexposed. However, there is an underlying continuous probability of exposure, and by insisting on a dichotomy, researchers unwillingly force a nondifferential misclassification into their analyses. We have recently developed a model whereby the probability of exposure can be modeled, and we tested this on an empirical case of nonsteroidal anti-inflammatory drug (NSAID)-induced upper gastrointestinal bleeding (UGIB). We used a case-control data set consisting of 3568 cases of severe UGIB and 35 552 matched controls. Exposure to NSAID was based on 3 different conventional dichotomous measures. In addition, we tested 3 probabilistic exposure measures, a simple univariate backward-recurrence model, a "full" multivariable model, and a "reduced" multivariable model. Odds ratios (ORs) and 95% confidence intervals for the association between NSAID use and UGIB were calculated by conditional logistic regression, while adjusting for preselected confounders. Compared to the conventional dichotomous exposure measures, the probabilistic exposure measures generated adjusted ORs in the upper range (4.37-4.75) while at the same time having the narrowest confidence intervals (ratio between upper and lower confidence limit, 1.46-1.50). Some ORs generated by conventional measures were higher than the probabilistic ORs, but only when the assumed period of intake was unrealistically short. The pattern of high ORs and narrow confidence intervals in probabilistic exposure measures is compatible with less nondifferential misclassification of exposure than in a dichotomous exposure model. Probabilistic exposure measures appear to be an attractive alternative to conventional exposure measures. Copyright © 2017 John Wiley & Sons, Ltd.
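The underlying idea, replacing a 0/1 exposure indicator with a modeled probability of exposure, can be sketched very simply. The decay model below is purely hypothetical (in the cited work the exposure probability is estimated from observed prescription-refill patterns), and its parameters are made up:

```python
import math

def exposure_probability(days_since_fill, days_supplied, decay=0.1):
    """Probability that a patient is still exposed on a given day.

    Hypothetical model for illustration: exposure is certain while the
    dispensed supply lasts, then decays exponentially afterwards.
    """
    if days_since_fill <= days_supplied:
        return 1.0
    return math.exp(-decay * (days_since_fill - days_supplied))

# A dichotomous measure would score day 40 as simply "unexposed" (0),
# whereas the probabilistic measure retains partial information:
print(exposure_probability(10, days_supplied=30))  # 1.0
print(exposure_probability(40, days_supplied=30))  # ≈ 0.368
```

In an analysis, such probabilities replace the binary exposure covariate, reducing the nondifferential misclassification the abstract describes.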
Stochastic invertible mappings between power law and Gaussian probability distributions
Vignat, C.; Plastino, A.
2005-01-01
We construct "stochastic mappings" between power law probability distributions (PDs) and Gaussian ones. To a given vector $N$, Gaussian distributed (respectively $Z$, exponentially distributed), one can associate a vector $X$, "power law distributed", by multiplying $X$ by a random scalar variable $a$: $N = aX$. This mapping is "invertible": one can go, via multiplication by another random variable $b$, from $N$ to $X$ (resp. from $Z$ to $X$), i.e., $X = bN$ (resp. $X = bZ$). Note that all the a...
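A familiar concrete instance of such a mapping is the Student-t construction, where dividing a Gaussian by an independent chi-distributed scalar (the $X = bN$ direction) produces a power-law-tailed variable. The sketch below is an illustration of that special case only, not the paper's general construction:

```python
import math
import random
import statistics

rng = random.Random(42)

def student_t_sample(df):
    # X = b*N with b = 1/sqrt(V/df), where V ~ chi-squared(df) is
    # independent of the Gaussian N: multiplying the Gaussian by a random
    # scalar yields a power-law-tailed (Student-t) variable.
    n = rng.gauss(0.0, 1.0)
    v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return n / math.sqrt(v / df)

samples = [student_t_sample(df=3) for _ in range(20000)]
m = statistics.mean(samples)
s2 = statistics.pvariance(samples)
# Sample excess kurtosis: near 0 for Gaussian data, large for heavy tails.
kurt = sum((x - m) ** 4 for x in samples) / (len(samples) * s2 ** 2) - 3.0
```

The heavy tails show up as a sample excess kurtosis far above the Gaussian value of zero, even though each sample is just a rescaled Gaussian draw.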
Czech Academy of Sciences Publication Activity Database
Rusticucci, M.; Kyselý, Jan; Almeira, G.; Lhotka, Ondřej
2016-01-01
Roč. 124, č. 3 (2016), s. 679-689 ISSN 0177-798X R&D Projects: GA MŠk 7AMB15AR001 Institutional support: RVO:68378289 Keywords : heat waves * long-term variability * climate extremes Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 2.640, year: 2016 http://link.springer.com/article/10.1007%2Fs00704-015-1445-7
Probability density of quantum expectation values
Campos Venuti, L.; Zanardi, P.
2013-10-01
We consider the quantum expectation value A = <ψ|A|ψ> of an observable A over the state |ψ>. We derive the exact probability distribution of A seen as a random variable when |ψ> varies over the set of all pure states equipped with the Haar-induced measure. To illustrate our results we compare the exact predictions for a few concrete examples with the concentration bounds obtained using Lévy's lemma. We also comment on the relevance of the central limit theorem and finally draw some results on an alternative statistical mechanics based on the uniform measure on the energy shell.
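A standard way to sample the Haar-induced measure on pure states is to normalize a vector of i.i.d. complex Gaussian amplitudes; the distribution of the expectation value can then be explored numerically. A minimal Python sketch (illustrative; the function name and the diagonal observable are assumptions, not from the paper):

```python
import random

def haar_expectation_samples(eigenvalues, n_samples, seed=1):
    """Sample <psi|A|psi> for Haar-random pure states |psi>, with A specified
    by its eigenvalues. A Haar-random state is a normalized complex Gaussian
    vector, so <A> = sum_i a_i * |<i|psi>|^2."""
    rng = random.Random(seed)
    d = len(eigenvalues)
    out = []
    for _ in range(n_samples):
        amps = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(d)]
        norm2 = sum(abs(a) ** 2 for a in amps)
        probs = [abs(a) ** 2 / norm2 for a in amps]  # |<i|psi>|^2
        out.append(sum(l * p for l, p in zip(eigenvalues, probs)))
    return out

vals = haar_expectation_samples([0.0, 1.0], n_samples=2000)
mean = sum(vals) / len(vals)  # concentrates near tr(A)/d = 0.5 for this A
```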
Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models
DEFF Research Database (Denmark)
Kock, Anders Bredahl
, we prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables. We do this without restricting the dependence between covariates and without assuming sub-Gaussianity of the error terms, thereby generalizing the results...
Energy Technology Data Exchange (ETDEWEB)
Oldenburg, Curtis M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Budnitz, Robert J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-08-31
If carbon dioxide capture and storage (CCS) is to be effective in mitigating climate change, it will need to be carried out on a very large scale. This will involve many thousands of miles of dedicated high-pressure pipelines in order to transport many millions of tonnes of CO₂ annually, with the CO₂ delivered to many thousands of wells that will inject the CO₂ underground. The new CCS infrastructure could rival in size the current U.S. upstream natural gas pipeline and well infrastructure. This new infrastructure entails hazards for life, health, animals, the environment, and natural resources. Pipelines are known to rupture due to corrosion, from external forces such as impacts by vehicles or digging equipment, from defects in construction, or from the failure of valves and seals. Similarly, wells are vulnerable to catastrophic failure due to corrosion, cement degradation, or operational mistakes. While most accidents involving pipelines and wells will be minor, there is the inevitable possibility of accidents with very high consequences, especially to public health. The most important consequence of concern is CO₂ release to the environment in concentrations sufficient to cause death by asphyxiation to nearby populations. Such accidents are thought to be very unlikely, but of course they cannot be excluded, even if major engineering effort is devoted (as it will be) to keeping their probability low and their consequences minimized. This project has developed a methodology for analyzing the risks of these rare but high-consequence accidents, using a step-by-step probabilistic methodology. A key difference between risks for pipelines and wells is that the former are spatially distributed along the pipe whereas the latter are confined to the vicinity of the well. Otherwise, the methodology we develop for risk assessment of pipeline and well failures is similar and provides an analysis both of the annual probabilities of
Probability machines: consistent probability estimation using nonparametric learning machines.
Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A
2012-01-01
Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines
Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.
2011-01-01
Summary. Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
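The nearest-neighbor probability machine has a particularly simple form: estimate P(Y = 1 | x) as the fraction of positive labels among the k nearest training points. A minimal one-dimensional Python sketch (the paper's examples use existing R packages; this toy function and data are illustrative assumptions):

```python
import random

def knn_probability(train_x, train_y, query, k):
    """Nonparametric estimate of P(Y=1 | X=query): the fraction of positive
    labels among the k nearest training points (1-D absolute distance)."""
    ranked = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - query))
    return sum(y for _, y in ranked[:k]) / k

# Toy data where the true P(Y=1|x) equals x, so it rises with x.
rng = random.Random(42)
xs = [rng.uniform(0, 1) for _ in range(500)]
ys = [1 if rng.random() < x else 0 for x in xs]
p_low = knn_probability(xs, ys, query=0.1, k=50)
p_high = knn_probability(xs, ys, query=0.9, k=50)
```

With growing sample size and suitably growing k, estimates of this form converge to the true conditional probability, which is the consistency property the paper discusses.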
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B
1994-01-01
Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs),
Kiss, Thomas; Güldner, Andreas; Bluth, Thomas; Uhlig, Christopher; Spieth, Peter Markus; Markstaller, Klaus; Ullrich, Roman; Jaber, Samir; Santos, Jose Alberto; Mancebo, Jordi; Camporota, Luigi; Beale, Richard; Schettino, Guilherme; Saddy, Felipe; Vallverdú, Immaculada; Wiedemann, Bärbel; Koch, Thea; Schultz, Marcus Josephus; Pelosi, Paolo; de Abreu, Marcelo Gama
2013-01-01
In pressure support ventilation (PSV), a non-variable level of pressure support is delivered by the ventilator when triggered by the patient. In contrast, variable PSV delivers a level of pressure support that varies in a random fashion, introducing more physiological variability to the respiratory
Carpena, Pedro; Bernaola-Galván, Pedro A; Carretero-Campos, Concepción; Coronado, Ana V
2016-11-01
Symbolic sequences have been extensively investigated in the past few years within the framework of statistical physics. Paradigmatic examples of such sequences are written texts, and deoxyribonucleic acid (DNA) and protein sequences. In these examples, the spatial distribution of a given symbol (a word, a DNA motif, an amino acid) is a key property usually related to the symbol importance in the sequence: The more uneven and far from random the symbol distribution, the higher the relevance of the symbol to the sequence. Thus, many techniques of analysis measure in some way the deviation of the symbol spatial distribution with respect to the random expectation. The problem is then to know the spatial distribution corresponding to randomness, which is typically considered to be either the geometric or the exponential distribution. However, these distributions are only valid for very large symbolic sequences and for many occurrences of the analyzed symbol. Here, we obtain analytically the exact, randomly expected spatial distribution valid for any sequence length and any symbol frequency, and we study its main properties. The knowledge of the distribution allows us to define a measure able to properly quantify the deviation from randomness of the symbol distribution, especially for short sequences and low symbol frequency. We apply the measure to the problem of keyword detection in written texts and to study amino acid clustering in protein sequences. In texts, we show how the results improve with respect to previous methods when short texts are analyzed. In proteins, which are typically short, we show how the measure quantifies unambiguously the amino acid clustering and characterize its spatial distribution.
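The null model referred to here, an i.i.d. random sequence, produces geometric inter-occurrence gaps with mean 1/p for a symbol of frequency p, which is easy to check by simulation. A short Python sketch (illustrative; not the authors' exact finite-size distribution):

```python
import random

def symbol_gaps(sequence, symbol):
    """Distances between consecutive occurrences of `symbol` in `sequence`."""
    positions = [i for i, s in enumerate(sequence) if s == symbol]
    return [b - a for a, b in zip(positions, positions[1:])]

# In a long i.i.d. sequence, gaps are approximately geometric with mean 1/p.
rng = random.Random(7)
p = 0.1
seq = ['A' if rng.random() < p else 'B' for _ in range(200_000)]
gaps = symbol_gaps(seq, 'A')
mean_gap = sum(gaps) / len(gaps)  # close to 1/p = 10 for this long sequence
```

As the abstract notes, this geometric/exponential expectation breaks down for short sequences and rare symbols, which is exactly the regime the exact distribution derived in the paper addresses.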
Applied probability and stochastic processes
Sumita, Ushio
1999-01-01
Applied Probability and Stochastic Processes is an edited work written in honor of Julien Keilson. This volume has attracted a host of scholars in applied probability, who have made major contributions to the field, and have written survey and state-of-the-art papers on a variety of applied probability topics, including, but not limited to: perturbation method, time reversible Markov chains, Poisson processes, Brownian techniques, Bayesian probability, optimal quality control, Markov decision processes, random matrices, queueing theory and a variety of applications of stochastic processes. The book has a mixture of theoretical, algorithmic, and application chapters providing examples of the cutting-edge work that Professor Keilson has done or influenced over the course of his highly-productive and energetic career in applied probability and stochastic processes. The book will be of interest to academic researchers, students, and industrial practitioners who seek to use the mathematics of applied probability i...
Chaibub Neto, Elias
2016-11-01
Clinical trials traditionally employ blinding as a design mechanism to reduce the influence of placebo effects. In practice, however, it can be difficult or impossible to blind study participants and unblinded trials are common in medical research. Here we show how instrumental variables can be used to quantify and disentangle treatment and placebo effects in randomized clinical trials comparing control and active treatments in the presence of confounders. The key idea is to use randomization to separately manipulate treatment assignment and psychological encouragement conversations/interactions that increase the participants’ desire for improved symptoms. The proposed approach is able to improve the estimation of treatment effects in blinded studies and, most importantly, opens the doors to account for placebo effects in unblinded trials.
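The encouragement design described here can be connected to the simplest instrumental-variable estimator, the Wald ratio for a binary instrument. The following Python toy simulation (an illustration under assumed data-generating choices, not the authors' full procedure) recovers a treatment effect despite an unobserved confounder:

```python
import random

def wald_iv_estimate(z, x, y):
    """Wald instrumental-variable estimate of the effect of x on y, using a
    binary instrument z (here: randomized encouragement)."""
    def mean(vals):
        return sum(vals) / len(vals)
    y1 = mean([yi for zi, yi in zip(z, y) if zi == 1])
    y0 = mean([yi for zi, yi in zip(z, y) if zi == 0])
    x1 = mean([xi for zi, xi in zip(z, x) if zi == 1])
    x0 = mean([xi for zi, xi in zip(z, x) if zi == 0])
    return (y1 - y0) / (x1 - x0)

# Toy simulation: z randomly encourages uptake of x; x raises y by 2.0,
# while an unobserved confounder u affects both x and y.
rng = random.Random(3)
n = 20000
z = [rng.randint(0, 1) for _ in range(n)]
u = [rng.gauss(0, 1) for _ in range(n)]
x = [1 if (zi and rng.random() < 0.8) or ui > 1.5 else 0
     for zi, ui in zip(z, u)]
y = [2.0 * xi + ui + rng.gauss(0, 1) for xi, ui in zip(x, u)]
beta = wald_iv_estimate(z, x, y)  # should be near the true effect 2.0
```

Because z is randomized and affects y only through x, the ratio of the two group differences isolates the treatment effect even though u biases a naive comparison.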
Probability Aggregates in Probability Answer Set Programming
Saad, Emad
2013-01-01
Probability answer set programming is a declarative programming paradigm that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...
Energy Technology Data Exchange (ETDEWEB)
Stotland, Alexander; Peer, Tal; Cohen, Doron [Department of Physics, Ben-Gurion University, Beer-Sheva 84005 (Israel); Budoyo, Rangga; Kottos, Tsampikos [Department of Physics, Wesleyan University, Middletown, CT 06459 (United States)
2008-07-11
The calculation of the conductance of disordered rings requires a theory that goes beyond the Kubo-Drude formulation. Assuming 'mesoscopic' circumstances, the analysis of the electro-driven transitions shows similarities with a percolation problem in energy space. We argue that the texture and the sparsity of the perturbation matrix dictate the value of the conductance, and study its dependence on the disorder strength, ranging from the ballistic to the Anderson localization regime. An improved sparse random matrix model is introduced to capture the essential ingredients of the problem, and leads to a generalized variable range hopping picture. (fast track communication)
A new mean estimator using auxiliary variables for randomized response models
Ozgul, Nilgun; Cingi, Hulya
2013-10-01
Randomized response models are commonly used in surveys dealing with sensitive questions such as abortion, alcoholism, sexual orientation, drug taking, annual income, and tax evasion, to ensure interviewee anonymity and reduce nonresponse rates and biased responses. Starting from the pioneering work of Warner [7], many versions of RRMs have been developed that can deal with quantitative responses. In this study, a new mean estimator is suggested for RRMs with quantitative responses. The mean square error is derived, and a simulation study is performed to show the efficiency of the proposed estimator relative to other existing estimators in RRMs.
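For context, Warner's pioneering model [7] for a binary sensitive trait admits a closed-form estimator: each respondent answers the sensitive question with probability p and its complement otherwise, and the trait proportion is recovered from the observed "yes" rate. A minimal Python sketch (illustrative of Warner's original scheme; the paper itself concerns quantitative responses):

```python
import random

def warner_estimate(answers, p):
    """Unbiased estimate of the sensitive-trait proportion pi under Warner's
    randomized response model: lambda = p*pi + (1-p)*(1-pi), so
    pi = (lambda - (1-p)) / (2p - 1), requiring p != 0.5."""
    lam = sum(answers) / len(answers)  # observed proportion of "yes"
    return (lam - (1 - p)) / (2 * p - 1)

# Simulation: true pi = 0.3, spinner probability p = 0.7.
rng = random.Random(11)
pi_true, p = 0.3, 0.7
answers = []
for _ in range(50_000):
    has_trait = rng.random() < pi_true
    asked_sensitive = rng.random() < p
    # "yes" iff (sensitive question and trait) or (complement and no trait)
    answers.append(1 if has_trait == asked_sensitive else 0)
pi_hat = warner_estimate(answers, p)
```

Because no observer can tell which question any individual answered, anonymity is preserved while the aggregate proportion remains estimable.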
Energy Technology Data Exchange (ETDEWEB)
Hernandez, R.; Miller, W.H.; Moore, C.B. (Department of Chemistry, University of California, and Chemical Sciences Division, Lawrence Berkeley Laboratory, Berkeley, California 94720 (United States)); Polik, W.F. (Department of Chemistry, Hope College, Holland, Michigan 49423 (United States))
1993-07-15
A previously developed random matrix/transition state theory (RM/TST) model for the probability distribution of state-specific unimolecular decay rates has been generalized to incorporate total angular momentum conservation and other dynamical symmetries. The model is made into a predictive theory by using a semiclassical method to determine the transmission probabilities of a nonseparable rovibrational Hamiltonian at the transition state. The overall theory gives a good description of the state-specific rates for the D₂CO → D₂ + CO unimolecular decay; in particular, it describes the dependence of the distribution of rates on total angular momentum J. Comparison of the experimental values with results of the RM/TST theory suggests that there is mixing among the rovibrational states.
DEFF Research Database (Denmark)
Krøigård, Thomas; Gaist, David; Otto, Marit
2014-01-01
Peroneal nerve distal motor latency, motor conduction velocity, and compound motor action potential amplitude; sural nerve sensory action potential amplitude and sensory conduction velocity; and tibial nerve minimal F-wave latency were examined in 51 healthy subjects, aged 40 to 67 years. They were … reexamined after 2 and 26 weeks. There was no change in the variables except for a minor decrease in sural nerve sensory action potential amplitude and a minor increase in tibial nerve minimal F-wave latency. Reproducibility was best for peroneal nerve distal motor latency and motor conduction velocity …
Directory of Open Access Journals (Sweden)
David Shilane
2013-01-01
The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
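As an illustration of the "direct removal of zeros" idea, the sketch below drops a fixed number of zeros before forming a normal-style interval. This is a hypothetical Python toy (the function names, the choice k = 3, and the data are assumptions, not the authors' estimator or their tuning rule):

```python
import math

def normal_ci(data, z=1.96):
    """Normal-approximation confidence interval for the mean."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    half = z * math.sqrt(var / n)
    return mean - half, mean + half

def zero_removal_ci(data, k, z=1.96):
    """Illustrative 'direct removal' adjustment: drop up to k zeros from the
    sample before forming the normal-style interval (hypothetical variant)."""
    trimmed = list(data)
    for _ in range(k):
        if 0 in trimmed:
            trimmed.remove(0)
    return normal_ci(trimmed, z)

# Highly dispersed, zero-inflated toy sample.
data = [0] * 80 + [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
lo1, hi1 = normal_ci(data)
lo2, hi2 = zero_removal_ci(data, k=3)  # interval shifts upward
```

Removing zeros nudges the interval away from the overrepresented zero mass, which is the qualitative effect the growth estimators exploit.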
Scaling Qualitative Probability
Burgin, Mark
2017-01-01
There are different approaches to qualitative probability, which include subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...
Probability distribution of the number of deceits in collective robotics
Murciano, Antonio; Zamora, Javier; Lopez-Sanchez, Jesus; Rodriguez-Santamaria, Emilia
2002-01-01
The benefit obtained by a selfish robot by cheating in a real multirobotic system can be represented by the random variable Xn,q: the number of cheating interactions needed before all the members in a cooperative team of robots, playing a TIT FOR TAT strategy, recognize the selfish robot. Stability of cooperation depends on the ratio between the benefit obtained by selfish and cooperative robots. In this paper, we establish the probability model for Xn,q. If the values...
Models for probability and statistical inference theory and applications
Stapleton, James H
2007-01-01
This concise, yet thorough, book is enhanced with simulations and graphs to build the intuition of readers. Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping. Ideal as a textbook for a two-semester sequence on probability and statistical inference, early chapters provide coverage on probability and include discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses mo...
Probability with applications and R
Dobrow, Robert P
2013-01-01
An introduction to probability at the undergraduate level Chance and randomness are encountered on a daily basis. Authored by a highly qualified professor in the field, Probability: With Applications and R delves into the theories and applications essential to obtaining a thorough understanding of probability. With real-life examples and thoughtful exercises from fields as diverse as biology, computer science, cryptology, ecology, public health, and sports, the book is accessible for a variety of readers. The book's emphasis on simulation through the use of the popular R software language c
Variability of Fiber Elastic Moduli in Composite Random Fiber Networks Makes the Network Softer
Ban, Ehsan; Picu, Catalin
2015-03-01
Athermal fiber networks are assemblies of beams or trusses. They have been used to model the mechanics of fibrous materials such as biopolymer gels and synthetic nonwovens. The elasticity of these networks has been studied in terms of various microstructural parameters such as the stiffness of their constituent fibers. In this work we investigate the elasticity of composite fiber networks made from fibers with moduli sampled from a distribution function. We use finite element simulations to study networks made by 3D Voronoi and Delaunay tessellations. The resulting data collapse to power laws showing that variability in fiber stiffness makes fiber networks softer. We also support the findings with analytical arguments. Finally, we apply these results to a network with curved fibers to explain the dependence of the network's modulus on the variation of its structural parameters.
Scruggs, Stacie; Mama, Scherezade K; Carmack, Cindy L; Douglas, Tommy; Diamond, Pamela; Basen-Engquist, Karen
2018-01-01
This study examined whether a physical activity intervention affects transtheoretical model (TTM) variables that facilitate exercise adoption in breast cancer survivors. Sixty sedentary breast cancer survivors were randomized to a 6-month lifestyle physical activity intervention or standard care. TTM variables that have been shown to facilitate exercise adoption and progress through the stages of change, including self-efficacy, decisional balance, and processes of change, were measured at baseline, 3 months, and 6 months. Differences in TTM variables between groups were tested using repeated measures analysis of variance. The intervention group had significantly higher self-efficacy (F = 9.55, p = .003) and perceived significantly fewer cons of exercise (F = 5.416, p = .025) at 3 and 6 months compared with the standard care group. Self-liberation, counterconditioning, and reinforcement management processes of change increased significantly from baseline to 6 months in the intervention group, and self-efficacy and reinforcement management were significantly associated with improvement in stage of change. The stage-based physical activity intervention increased use of select processes of change, improved self-efficacy, decreased perceptions of the cons of exercise, and helped participants advance in stage of change. These results point to the importance of using a theory-based approach in interventions to increase physical activity in cancer survivors.
Estimating Probabilities in Recommendation Systems
Sun, Mingxuan; Lebanon, Guy; Kidwell, Paul
2010-01-01
Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon's book recommendations, Netflix's movie recommendations, and Pandora's music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computat...
Bryan, Stephanie; Pinto Zipp, Genevieve; Parasher, Raju
2012-01-01
Physical inactivity is a serious issue for the American public. Because of conditions that result from inactivity, individuals incur close to $1 trillion USD in health-care costs, and approximately 250 000 premature deaths occur per year. Researchers have linked engaging in yoga to improved overall fitness, including improved muscular strength, muscular endurance, flexibility, and balance. Researchers have not yet investigated the impact of yoga on exercise adherence. The research team assessed the effects of 10 weeks of yoga classes held twice a week on exercise adherence in previously sedentary adults. The research team designed a randomized controlled pilot trial. The team collected data from the intervention (yoga) and control groups at baseline, midpoint, and posttest (posttest 1) and also collected data pertaining to exercise adherence for the yoga group at 5 weeks posttest (posttest 2). The pilot took place in a yoga studio in central New Jersey in the United States. The pretesting occurred at the yoga studio for all participants. Midpoint testing and posttesting occurred at the studio for the yoga group and by mail for the control group. Participants were 27 adults (mean age 51 y) who had been physically inactive for a period of at least 6 months prior to the study. Interventions: The intervention group (yoga group) received hour-long hatha yoga classes that met twice a week for 10 weeks. The control group did not participate in classes during the research study; however, they were offered complimentary post research classes. Outcome Measures: The study's primary outcome measure was exercise adherence as measured by the 7-day Physical Activity Recall. The secondary measures included (1) exercise self-efficacy as measured by the Multidimensional Self-Efficacy for Exercise Scale, (2) general well-being as measured by the General Well-Being Schedule, (3) exercise-group cohesion as measured by the Group Environment Questionnaire (GEQ), (4) acute feeling response
Contributions to quantum probability
Energy Technology Data Exchange (ETDEWEB)
Fritz, Tobias
2010-06-25
Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a
Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Petrie, Dennis
2017-09-01
Status inconsistency refers to a discrepancy between the position a person holds in one domain of their social environment and their position in another domain. For example, the experience of being overeducated for a job, or not using your skills in your job. We sought to assess the relationship between status inconsistency and mental health using 14 annual waves of cohort data. We used two approaches to measuring status inconsistency: 1) being overeducated for your job (objective measure); and 2) not using your skills in your job (subjective measure). We implemented a number of methodological approaches to assess the robustness of our findings, including instrumental variable, random effects, and fixed effects analysis. Mental health was assessed using the Mental Health Inventory-5. The random effects analysis indicates that only the subjective measure of status inconsistency was associated with a slight decrease in mental health (β = -1.57, 95% CI -1.78 to -1.36) … social determinants (such as work and education) and health outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Goldberg, Samuel
1960-01-01
Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.
Koo, Reginald; Jones, Martin L.
2011-01-01
Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
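One classic event whose probability tends to 1/e is a uniformly random permutation having no fixed point (a derangement); whether it is among the three problems treated in this article is not stated here, but it is easy to check by simulation:

```python
import math
import random

def no_fixed_point_probability(n, trials, seed=5):
    """Monte Carlo estimate of the probability that a uniformly random
    permutation of n items has no fixed point (a derangement).
    The exact probability tends to 1/e rapidly as n grows."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        if all(perm[i] != i for i in range(n)):
            hits += 1
    return hits / trials

est = no_fixed_point_probability(n=20, trials=100_000)
# 1/e ≈ 0.3679; the estimate lands close to this value
```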
Quantum probability measures and tomographic probability densities
Amosov, GG; Man'ko, [No Value
2004-01-01
Using a simple relation of the Dirac delta-function to the generalized theta-function, the relationship between the tomographic probability approach and the quantum probability measure approach to the description of quantum states is discussed. The quantum state tomogram expressed in terms of the
Agreeing Probability Measures for Comparative Probability Structures
P.P. Wakker (Peter)
1981-01-01
It is proved that fine and tight comparative probability structures (where the set of events is assumed to be an algebra, not necessarily a σ-algebra) have agreeing probability measures. Although this was often claimed in the literature, all proofs the author encountered are not valid.
A practical overview on probability distributions.
Viti, Andrea; Terzi, Alberto; Bertolaccini, Luca
2015-03-01
The aim of this paper is to give a general definition of probability, of its main mathematical features, and of the features it presents under particular circumstances. The behavior of probability is linked to the features of the phenomenon we would like to predict; this link is called a probability distribution. Depending on the characteristics of the phenomena (which we can also call variables), different probability distributions are defined. For categorical (or discrete) variables, the probability can be described by a binomial or Poisson distribution in the majority of cases. For continuous variables, the probability can be described by the most important distribution in statistics, the normal distribution. The probability distributions are briefly described together with some examples of their possible applications.
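The distributions named in this overview are simple to evaluate directly. The following sketch (plain Python; the helper names are illustrative, not from the paper) computes the binomial and Poisson probability mass functions and the normal density, and checks that the binomial probabilities sum to one:

```python
import math

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ Binomial(n, p): k successes in n independent trials."""
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for X ~ Poisson(lam), the rare-event limit of the binomial."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def normal_pdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Density of the normal distribution N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# A discrete distribution's probabilities must sum to one:
total = sum(binomial_pmf(k, 10, 0.3) for k in range(11))
```

The sum check is a useful sanity test whenever a pmf is implemented by hand.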
Probability theory a foundational course
Pakshirajan, R P
2013-01-01
This book shares the dictum of J. L. Doob in treating Probability Theory as a branch of Measure Theory and establishes this relation early. Probability measures in product spaces are introduced right at the start by way of laying the groundwork to later claim the existence of stochastic processes with prescribed finite-dimensional distributions. Other topics analysed in the book include supports of probability measures, zero-one laws in product measure spaces, the Erdős-Kac invariance principle, the functional central limit theorem and functional law of the iterated logarithm for independent variables, Skorohod embedding, and the use of analytic functions of a complex variable in the study of geometric ergodicity in Markov chains. This book is offered as a textbook for students pursuing graduate programs in Mathematics and/or Statistics. The book aims to help the teacher present the theory with ease, and to help the student sustain his interest and joy in learning the subject.
Aggregate and Individual Replication Probability within an Explicit Model of the Research Process
Miller, Jeff; Schwarz, Wolf
2011-01-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS
Directory of Open Access Journals (Sweden)
Mr. Vladimir A. Smagin
2016-12-01
The Wiener-Hopf solver with smooth probability distributions of its components is presented. The method is based on hyper-delta approximations of the initial distributions. The use of the Fourier series transformation and the characteristic function allows working with the random-variable method concentrated on the transversal axis of abscissas...
Stationary algorithmic probability
National Research Council Canada - National Science Library
Müller, Markus
2010-01-01
..., since their actual values depend on the choice of the universal reference computer. In this paper, we analyze a natural approach to eliminate this machine-dependence. Our method is to assign algorithmic probabilities to the different...
Randomized Item Response Theory Models
Fox, Gerardus J.A.
2005-01-01
The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by...
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
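The pivotal property described in this abstract, that for location-scale families the failure frequency under a data-fitted threshold does not depend on the unknown parameters, can be checked by simulation. A minimal Monte Carlo sketch, assuming lognormal losses and a hypothetical threshold rule of the form exp(mean + k·sd) fitted on the log scale (the parameter values and rule are illustrative, not the paper's):

```python
import random
import statistics

def failure_frequency(mu, sigma, n=30, k=2.0, trials=4000, seed=42):
    """Monte Carlo estimate of the probability that a new lognormal loss
    exceeds a threshold fitted as exp(mean_log + k * sd_log) from n past
    observations. Because (new - mean) / sd is pivotal on the log scale,
    this frequency does not depend on the true (mu, sigma)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        logs = [rng.gauss(mu, sigma) for _ in range(n)]       # past losses (log scale)
        threshold_log = statistics.mean(logs) + k * statistics.stdev(logs)
        new_loss_log = rng.gauss(mu, sigma)                   # next period's loss
        if new_loss_log > threshold_log:
            failures += 1
    return failures / trials

p1 = failure_frequency(mu=0.0, sigma=1.0)
p2 = failure_frequency(mu=3.0, sigma=0.5)
```

With a fixed seed the underlying standard-normal draws are identical in both runs, so the two estimated frequencies coincide exactly, which is precisely the parameter-independence the article exploits.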
Factual and cognitive probability
Chuaqui, Rolando
2012-01-01
This modification separates the two aspects of probability: probability as a part of physical theories (factual), and as a basis for statistical inference (cognitive). Factual probability is represented by probability structures as in the earlier papers, but now built independently of the language. Cognitive probability is interpreted as a form of "partial truth". The paper also contains a discussion of the Principle of Insufficient Reason and of Bayesian and classical statistical methods, in...
Evaluating probability forecasts
Lai, Tze Leung; Gross, Shulamith T.; Shen, David Bo
2011-01-01
Probability forecasts of events are routinely used in climate predictions, in forecasting default probabilities on bank loans or in estimating the probability of a patient's positive response to treatment. Scoring rules have long been used to assess the efficacy of the forecast probabilities after observing the occurrence, or nonoccurrence, of the predicted events. We develop herein a statistical theory for scoring rules and propose an alternative approach to the evaluation of probability for...
Probability, random processes, and ergodic properties
Gray, Robert M
2014-01-01
In this new edition of this classic text, much of the material has been rearranged and revised for pedagogical reasons. Many classic inequalities and proofs are now incorporated into the text, and many citations have been added.
[Suicide probability: an assessment terms of reasons for living, hopelessness and loneliness].
Durak Batigün, Ayşegül
2005-01-01
The purpose of the current study is to specify the reasons that keep people clinging to life, to investigate their suicide probability, and to describe the relationship of these variables with other variables such as hopelessness and loneliness, taking age, education, and other socioeconomic variables into consideration. The subjects were 683 randomly chosen adolescents and adults between the ages of 15 and 65, residing in Ankara and Izmir. The assessment instruments were the Reasons for Living Inventory, the Suicide Probability Scale, the UCLA Loneliness Scale, and the Beck Hopelessness Scale. The data were analyzed using SPSS for Windows 10.00. The analyses revealed that the group aged 15-25 years reported fewer reasons for living, higher suicide probability, and more hopelessness and loneliness, compared to older age groups. Moreover, women reported more reasons for living, along with less loneliness and hopelessness. The regression analyses pointed out that age, education level, hopelessness, loneliness, and reasons for living are predictive variables for suicide probability. As previously reported, the current study also revealed that age is an important variable to be taken into consideration when suicide probability is being determined. In addition, gender was also found to be an important variable, at least for this country. In parallel with the results of studies in the relevant literature, reasons for living, hopelessness, and loneliness were found to be significant predictors of suicide probability.
Hammami, Muhammad M; Alvi, Syed N
2017-09-01
Background: Average bioequivalence has been criticized for not adequately addressing individual variations. The importance of subjects' blinding in bioequivalence studies has not been well studied. We explored the extent of intra-subject pharmacokinetic variability and the effect of drug-ingestion unawareness in subjects taking a single caffeine product. Methods: A single-dose randomized cross-over design was used to compare the pharmacokinetics of 200 mg caffeine, described as caffeine (overt) or as placebo (covert). Maximum concentration (Cmax), time of Cmax (Tmax), area under the concentration-time curve to the last measured concentration (AUCT), extrapolated to infinity (AUCI), or to the Tmax of overt caffeine (AUCOverttmax), and Cmax/AUCI were calculated blindly using a standard non-compartmental method. The percentages of individual covert/overt ratios outside the ±25% range were determined. The covert-vs-overt effect on caffeine pharmacokinetics was evaluated by the 90% confidence interval (CI) and the 80.00-125.00% bioequivalence range. Results: Thirty-two healthy subjects (6% females, mean (SD) age 33.3 (7.2) years) participated in the study (28 analysed). Of the 28 individual covert/overt ratios, 23% were outside the ±25% range for AUCT, 30% for AUCI, 20% for AUCOverttmax, 30% for Cmax, and 43% for Tmax. There was no significant covert-vs-overt difference in any of the pharmacokinetic parameters studied. Further, the 90% CIs for AUCT, AUCI, Cmax, AUCOverttmax, and Cmax/AUCI were all within the 80.00-125.00% bioequivalence range, with mean absolute deviations of covert/overt ratios of 3.31%, 6.29%, 1.43%, 1.87%, and 5.19%, respectively. Conclusions: Large intra-subject variability in the main caffeine pharmacokinetic parameters was noted when comparing an oral caffeine product to itself. Subjects' blinding may not be important in average bioequivalence studies. © Georg Thieme Verlag KG Stuttgart · New York.
Probably not future prediction using probability and statistical inference
Dworsky, Lawrence N
2008-01-01
An engaging, entertaining, and informative introduction to probability and prediction in our everyday lives Although Probably Not deals with probability and statistics, it is not heavily mathematical and is not filled with complex derivations, proofs, and theoretical problem sets. This book unveils the world of statistics through questions such as what is known based upon the information at hand and what can be expected to happen. While learning essential concepts including "the confidence factor" and "random walks," readers will be entertained and intrigued as they move from chapter to chapter. Moreover, the author provides a foundation of basic principles to guide decision making in almost all facets of life including playing games, developing winning business strategies, and managing personal finances. Much of the book is organized around easy-to-follow examples that address common, everyday issues such as: How travel time is affected by congestion, driving speed, and traffic lights Why different gambling ...
Directory of Open Access Journals (Sweden)
Ahmet Kuzu
2014-01-01
This paper proposes two novel master-slave configurations that provide improvements in both the control and communication aspects of teleoperation systems, to achieve an overall improved performance in position control. The proposed configurations integrate modular control and communication approaches, consisting of a delay regulator to address problems related to the variable network delay common to such systems, and a model tracking controller that runs on the slave side to compensate for uncertainties and model mismatch on the slave side. One of the configurations uses a sliding mode observer and the other uses a modified Smith predictor scheme on the master side to ensure position transparency between master and slave, while reference tracking of the slave is ensured by a proportional-derivative type controller in both configurations. Experiments conducted for the networked position control of a single-link arm under system uncertainties and randomly varying network delays demonstrate significant performance improvements with both configurations over the past literature.
Directory of Open Access Journals (Sweden)
Joshi Meesha
2010-03-01
Background: An earlier study showed that a week of yoga practice was useful for stress management after a natural calamity. Due to heavy rain and a breach of the banks of the Kosi river, in the state of Bihar in north India, there were floods with loss of life and property. A week of yoga practice was given to the survivors a month after the event and its effect was assessed. Methods: Twenty-two volunteers (group average age ± SD, 31.5 ± 7.5 years; all male) were randomly assigned to two groups, a yoga group and a non-yoga wait-list control group. The yoga group practiced yoga for an hour daily while the control group continued with their routine activities. Both groups' heart rate variability, breath rate, and four symptoms of emotional distress (assessed using visual analog scales) were measured on the first and eighth day of the program. Results: There was a significant decrease in sadness in the yoga group. Conclusions: A week of yoga can reduce feelings of sadness and possibly prevent an increase in anxiety in flood survivors a month after the calamity. Trial Registration: Clinical Trials Registry of India: CTRI/2009/091/000285.
Random Constraint Satisfaction Problems
Directory of Open Access Journals (Sweden)
Amin Coja-Oghlan
2009-11-01
Random instances of constraint satisfaction problems such as k-SAT provide challenging benchmarks. If there are m constraints over n variables, there is typically a large range of densities r = m/n where solutions are known to exist with probability close to one due to non-constructive arguments. However, no algorithms are known to find solutions efficiently with a non-vanishing probability at even much lower densities. This fact appears to be related to a phase transition in the set of all solutions. The goal of this extended abstract is to provide a perspective on this phenomenon, and on the computational challenge that it poses.
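The random k-SAT model behind these benchmarks is easy to instantiate. A minimal sketch (function names are illustrative): each of the m clauses picks k distinct variables and random polarities, giving clause density r = m/n, and a simple checker evaluates a truth assignment:

```python
import random

def random_ksat(n: int, m: int, k: int = 3, seed: int = 0):
    """Draw a random k-SAT instance: m clauses over n Boolean variables,
    each clause containing k distinct variables with random polarities.
    A literal +v means variable v, -v means its negation; density r = m/n."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(m):
        chosen = rng.sample(range(1, n + 1), k)  # k distinct variables
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return clauses

def satisfies(assignment, clauses):
    """Check a truth assignment (dict var -> bool) against every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

formula = random_ksat(n=20, m=60)  # density r = 3.0, below the 3-SAT threshold
```

Varying m/n in such a generator is how the satisfiability phase transition mentioned in the abstract is typically explored experimentally.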
Probability Analysis of a Quantum Computer
Einarsson, Göran
2003-01-01
The quantum computer algorithm by Peter Shor for factorization of integers is studied. The quantum nature of a quantum computer makes its outcome random. The output probability distribution is investigated and the chances of a successful operation are determined.
Directory of Open Access Journals (Sweden)
Gabriel Constantino Blain
2013-06-01
Extreme rainfall events have received special attention in the climate literature due to their potential for causing soil erosion, runoff, and soil water saturation. The aims of the study were (i) to evaluate the presence of trends, temporal persistence, and periodic components in the seasonal maximum daily rainfall values (Preabs) obtained from the weather station of Campinas, State of São Paulo, Brazil (1890-2012) and (ii) to verify the possibility of using the Generalized Extreme Value (GEV) distribution for modeling the probability of occurrence of these extreme rainfall events. The spectral analysis carried out in the time-frequency domain showed no significant periodicity associated with the variance peaks of the time series under analysis. Based on parametric and nonparametric methods, and considering the significance levels usually adopted in the scientific literature (10 and 5%), the Preabs values showed no significant climate trend. The results obtained from qualitative and quantitative goodness-of-fit procedures indicated that a stationary GEV model, with time-independent parameters, may be used to describe the probabilistic structure of this meteorological variable.
Efficient probability sequence
Regnier, Eva
2014-01-01
A probability sequence is an ordered set of probability forecasts for the same event. Although single-period probabilistic forecasts and methods for evaluating them have been extensively analyzed, we are not aware of any prior work on evaluating probability sequences. This paper proposes an efficiency condition for probability sequences and shows properties of efficient forecasting systems, including memorylessness and increasing discrimination. These results suggest tests for efficiency and ...
Efficient probability sequences
Regnier, Eva
2014-01-01
DRMI working paper. A probability sequence is an ordered set of probability forecasts for the same event. Although single-period probabilistic forecasts and methods for evaluating them have been extensively analyzed, we are not aware of any prior work on evaluating probability sequences. This paper proposes an efficiency condition for probability sequences and shows properties of efficient forecasting systems, including memorylessness and increasing discrimination. These res...
Philosophical theories of probability
Gillies, Donald
2000-01-01
The Twentieth Century has seen a dramatic rise in the use of probability and statistics in almost all fields of research. This has stimulated many new philosophical ideas on probability. Philosophical Theories of Probability is the first book to present a clear, comprehensive and systematic account of these various theories and to explain how they relate to one another. Gillies also offers a distinctive version of the propensity theory of probability, and the intersubjective interpretation, which develops the subjective theory.
Pedro, Rafael E; Guariglia, Débora A; Okuno, Nilo M; Deminice, Rafael; Peres, Sidney B; Moraes, Solange M F
2016-12-01
Pedro, RE, Guariglia, DA, Okuno, NM, Deminice, R, Peres, SB, and Moraes, SMF. Effects of 16 weeks of concurrent training on resting heart rate variability and cardiorespiratory fitness in people living with HIV/AIDS using antiretroviral therapy: a randomized clinical trial. J Strength Cond Res 30(12): 3494-3502, 2016. The study evaluated the effects of concurrent training on resting heart rate variability (HRVrest) and cardiorespiratory fitness in people living with HIV/AIDS undergoing antiretroviral therapy (ART). Fifty-eight participants were randomized into 2 groups (control and training group); however, only 33 were analyzed. The variables studied were HRVrest indices, submaximal values of oxygen uptake (V̇O2sub) and heart rate (HR5min), peak speed (Vpeak), and peak oxygen uptake (V̇O2peak). The training group performed concurrent training (15-20 minutes of aerobic exercise plus 40 minutes of resistance exercise), 3 times per week, for 16 weeks. Posttraining, V̇O2peak and Vpeak increased, and HR5min decreased. Resting heart rate variability indices did not present statistical differences posttraining; however, the magnitude-based inferences demonstrated a "possibly positive effect" for high frequency (HF) and low frequency plus high frequency (LF + HF) and a "likely positive effect" for R-Rmean posttraining. In conclusion, concurrent training was effective at improving cardiorespiratory fitness and endurance performance. Moreover, it probably had a positive effect on HF and likely a positive effect on R-Rmean in people living with HIV/AIDS undergoing ART.
Directory of Open Access Journals (Sweden)
Qinghui Du
2014-01-01
We consider semi-implicit Euler methods for a stochastic age-dependent capital system with variable delays and random jump magnitudes, and investigate the convergence of the numerical approximation. It is proved that the numerical approximate solutions converge to the analytical solutions in the mean-square sense under the given conditions.
Estimating Subjective Probabilities
DEFF Research Database (Denmark)
Andersen, Steffen; Fountain, John; Harrison, Glenn W.
Subjective probabilities play a central role in many economic decisions, and act as an immediate confound of inferences about behavior, unless controlled for. Several procedures to recover subjective probabilities have been proposed, but in order to recover the correct latent probability one must...
Estimating Subjective Probabilities
DEFF Research Database (Denmark)
Andersen, Steffen; Fountain, John; Harrison, Glenn W.
2014-01-01
Subjective probabilities play a central role in many economic decisions and act as an immediate confound of inferences about behavior, unless controlled for. Several procedures to recover subjective probabilities have been proposed, but in order to recover the correct latent probability one must ...
Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia
We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned probability zero...
Interpretations of probability
Khrennikov, Andrei
2009-01-01
This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.
Simulations of Probabilities for Quantum Computing
Zak, M.
1996-01-01
It has been demonstrated that classical probabilities, and in particular a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities, and their link to quantum computations, are discussed.
Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks
Frahm, Klaus M.; Shepelyansky, Dima L.
2014-04-01
We use the methods of quantum chaos and Random Matrix Theory for the analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by the logarithm of the PageRank probability at a given node. After the standard energy-level unfolding procedure we establish that the nearest-spacing distribution of PageRank probabilities is described by the Poisson law typical for integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French, and German. We argue that due to the absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that nearby PageRank probabilities fluctuate as independent random variables.
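The Poisson spacing law reported here is the signature of statistically independent levels. A small sketch, assuming independent uniform "levels" as a stand-in for the log-PageRank values (the names and the crude rescaling-to-unit-mean unfolding are illustrative, not the paper's procedure), checks the spacings against the exponential law P(s) = e^(-s):

```python
import random

def unfolded_spacings(levels):
    """Nearest-neighbour spacings of the sorted levels, rescaled to unit
    mean (a crude stand-in for the spectral 'unfolding' step)."""
    xs = sorted(levels)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return [g / mean_gap for g in gaps]

rng = random.Random(7)
levels = [rng.random() for _ in range(20000)]  # independent 'energy levels'
s = unfolded_spacings(levels)

# For the Poisson law, P(s < 0.1) = 1 - e^(-0.1) ≈ 0.095
frac_small = sum(1 for x in s if x < 0.1) / len(s)
```

The large fraction of very small spacings is exactly why, absent level repulsion, nearby PageRank values swap order so easily.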
Oxygen boundary crossing probabilities.
Busch, N A; Silver, I A
1987-01-01
The probability that an oxygen particle will reach a time dependent boundary is required in oxygen transport studies involving solution methods based on probability considerations. A Volterra integral equation is presented, the solution of which gives directly the boundary crossing probability density function. The boundary crossing probability is the probability that the oxygen particle will reach a boundary within a specified time interval. When the motion of the oxygen particle may be described as strongly Markovian, then the Volterra integral equation can be rewritten as a generalized Abel equation, the solution of which has been widely studied.
Childers, Timothy
2013-01-01
Probability is increasingly important for our understanding of the world. What is probability? How do we model it, and how do we use it? Timothy Childers presents a lively introduction to the foundations of probability and to the philosophical issues it raises. He keeps technicalities to a minimum and assumes no prior knowledge of the subject. He explains the main interpretations of probability (frequentist, propensity, classical, Bayesian, and objective Bayesian) and uses stimulating examples to bring the subject to life. All students of philosophy will benefit from an understanding of probability...
Lin, Shu-Ling; Huang, Ching-Ya; Shiu, Shau-Ping; Yeh, Shu-Hui
2015-08-01
Mental health professionals experiencing work-related stress may burn out, leading to a negative impact on their organization and patients. The aim of this study was to examine the effects of yoga classes on work-related stress, stress adaptation, and autonomic nerve activity among mental health professionals. A randomized controlled trial was used, which compared outcomes between the experimental group (yoga program) and the control group (no yoga exercise) over 12 weeks. Work-related stress and stress adaptation were assessed before and after the program. Heart rate variability (HRV) was measured at baseline, at the midpoint of the weekly yoga classes (6 weeks), and postintervention (after 12 weeks of yoga classes). The results showed that the mental health professionals in the yoga group experienced a significant reduction in work-related stress (t = -6.225). Comparing the yoga and control groups, we found that the yoga group significantly decreased work-related stress (t = -3.216, p = .002), but there was no significant change in stress adaptation (p = .084). While controlling for the pretest scores of work-related stress, participants in the yoga group, but not the control group, revealed a significant increase in autonomic nerve activity at the midpoint (6-week) test (t = -2.799, p = .007) and at the posttest (12 weeks; t = -2.099, p = .040). Because mental health professionals experienced a reduction in work-related stress and an increase in autonomic nerve activity in a weekly yoga program for 12 weeks, clinicians, administrators, and educators should offer yoga classes as a strategy to help health professionals reduce their work-related stress and balance autonomic nerve activities. © 2015 The Authors. Worldviews on Evidence-Based Nursing published by Wiley Periodicals, Inc. on behalf of Society for Worldviews on Evidence-Based Nursing.
Normal probability plots with confidence.
Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang
2015-01-01
Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
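The augmentation described here can be approximated by simulation. The sketch below builds pointwise intervals, not the paper's simultaneous 1-α intervals, for each order statistic of a standard normal sample; the function and parameter names are illustrative:

```python
import random

def normal_plot_envelope(n, alpha=0.05, reps=1000, seed=3):
    """Simulate reps standard-normal samples of size n and return, for each
    order statistic, an empirical (1 - alpha) interval. NOTE: these are
    pointwise bands; the paper's simultaneous bands are wider, calibrated
    so that ALL n points fall inside jointly with probability 1 - alpha."""
    rng = random.Random(seed)
    sims = [sorted(rng.gauss(0.0, 1.0) for _ in range(n)) for _ in range(reps)]
    lo_idx = int((alpha / 2) * reps)
    hi_idx = int((1 - alpha / 2) * reps) - 1
    envelope = []
    for i in range(n):  # i-th order statistic across the simulated samples
        col = sorted(sim[i] for sim in sims)
        envelope.append((col[lo_idx], col[hi_idx]))
    return envelope

env = normal_plot_envelope(n=25)
```

Plotting an observed sorted sample against these bands gives the objective accept/reject rule the abstract describes: the plot "passes" only if every point lies inside its interval.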
In All Probability, Probability is not All
Helman, Danny
2004-01-01
The national lottery is often portrayed as a game of pure chance with no room for strategy. This misperception seems to stem from the application of probability instead of expectancy considerations, and can be utilized to introduce the statistical concept of expectation.
Probability theory a concise course
Rozanov, Y A
1977-01-01
This clear exposition begins with basic concepts and moves on to combination of events, dependent events and random variables, Bernoulli trials and the De Moivre-Laplace theorem, a detailed treatment of Markov chains, continuous Markov processes, and more. Includes 150 problems, many with answers. Indispensable to mathematicians and natural scientists alike.
Chan, Wai Sze; Williams, Jacob; Dautovich, Natalie D; McNamara, Joseph P H; Stripling, Ashley; Dzierzewski, Joseph M; Berry, Richard B; McCoy, Karin J M; McCrae, Christina S
2017-11-15
Sleep variability is a clinically significant variable in understanding and treating insomnia in older adults. The current study examined changes in sleep variability in the course of brief behavioral therapy for insomnia (BBT-I) in older adults who had chronic insomnia. Additionally, the current study examined the mediating mechanisms underlying reductions of sleep variability and the moderating effects of baseline sleep variability on treatment responsiveness. Sixty-two elderly participants were randomly assigned to either BBT-I or self-monitoring and attention control (SMAC). Sleep was assessed by sleep diaries and actigraphy from baseline to posttreatment and at 3-month follow-up. Mixed models were used to examine changes in sleep variability (within-person standard deviations of weekly sleep parameters) and the hypothesized mediation and moderation effects. Variability in sleep diary-assessed sleep onset latency (SOL) and actigraphy-assessed total sleep time (TST) significantly decreased in BBT-I compared to SMAC (Pseudo R(2) = .12, .27; P = .018, .008). These effects were mediated by reductions in bedtime and wake time variability and time in bed. Significant time × group × baseline sleep variability interactions on sleep outcomes indicated that participants who had higher baseline sleep variability were more responsive to BBT-I; their actigraphy-assessed TST, SOL, and sleep efficiency improved to a greater degree (Pseudo R(2) = .15 to .66). These findings suggest that BBT-I reduces sleep variability in older adults who have chronic insomnia. Increased consistency in bedtime and wake time and decreased time in bed mediate reductions of sleep variability. Baseline sleep variability may serve as a marker of high treatment responsiveness to BBT-I. ClinicalTrials.gov, Identifier: NCT02967185.
Probability density of quantum expectation values
Energy Technology Data Exchange (ETDEWEB)
Campos Venuti, L., E-mail: lcamposv@usc.edu; Zanardi, P.
2013-10-30
We consider the quantum expectation value ⟨A⟩ = ⟨ψ|A|ψ⟩ of an observable A over the state |ψ⟩. We derive the exact probability distribution of ⟨A⟩ seen as a random variable when |ψ⟩ varies over the set of all pure states equipped with the Haar-induced measure. To illustrate our results we compare the exact predictions for a few concrete examples with the concentration bounds obtained using Lévy's lemma. We also comment on the relevance of the central limit theorem and finally draw some results on an alternative statistical mechanics based on the uniform measure on the energy shell. - Highlights: • We compute the probability distribution of quantum expectation values for states sampled uniformly. • As a special case we consider in some detail the degenerate case where A is a one-dimensional projector. • We compare the concentration results obtained using Lévy's lemma with the exact values obtained using our exact formulae. • We comment on the possibility of a Central Limit Theorem and show the approach to a Gaussian for a few physical operators. • Some implications of our results for the so-called "Quantum Microcanonical Equilibration" (Refs. [5-9]) are derived.
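A Haar-random pure state can be sampled by normalizing a vector of independent complex Gaussians. The sketch below (illustrative names, plain Python) estimates the mean of ⟨ψ|P|ψ⟩ for a one-dimensional projector P, the degenerate case highlighted in the abstract; the exact mean is 1/dim:

```python
import math
import random

def haar_random_state(dim, rng):
    """Haar-random pure state: a normalized vector of iid complex Gaussians."""
    v = [complex(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(dim)]
    norm = math.sqrt(sum(abs(z) ** 2 for z in v))
    return [z / norm for z in v]

def expectation_of_projector(dim, samples=20000, seed=11):
    """Sample <psi|P|psi> for P = |0><0| over Haar-random states.
    For this projector the exact law is Beta(1, dim - 1), mean 1/dim."""
    rng = random.Random(seed)
    vals = [abs(haar_random_state(dim, rng)[0]) ** 2 for _ in range(samples)]
    return sum(vals) / samples

m = expectation_of_projector(dim=4)  # exact mean is 1/4
```

Histogramming the sampled values against the Beta(1, dim-1) density is a quick empirical check of the kind of exact distribution the paper derives.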
Random functions and turbulence
Panchev, S
1971-01-01
International Series of Monographs in Natural Philosophy, Volume 32: Random Functions and Turbulence focuses on the use of random functions as mathematical methods. The manuscript first offers information on the elements of the theory of random functions. Topics include determination of statistical moments by characteristic functions; functional transformations of random variables; multidimensional random variables with spherical symmetry; and random variables and distribution functions. The book then discusses random processes and random fields, including stationarity and ergodicity of random
Spieth, Peter M; Güldner, Andreas; Uhlig, Christopher; Bluth, Thomas; Kiss, Thomas; Schultz, Marcus J.; Pelosi, Paolo; Koch, Thea; Gama de Abreu, Marcelo
2014-01-01
Background General anesthesia usually requires mechanical ventilation, which is traditionally accomplished with constant tidal volumes in volume- or pressure-controlled modes. Experimental studies suggest that the use of variable tidal volumes (variable ventilation) recruits lung tissue, improves pulmonary function and reduces systemic inflammatory response. However, it is currently not known whether patients undergoing open abdominal surgery might benefit from intraoperative variable ventila...
Ruin probabilities in models with a Markov chain dependence structure
Constantinescu, Corina; Kortschak, Dominik; Maume-Deschamps, Véronique
2013-01-01
In this paper we derive explicit expressions for the probability of ruin in a renewal risk model with dependence incorporated in the real-valued random variable Zk = −cτk + Xk, namely the loss between the (k − 1)-th and the k-th claim. Here c represents the constant premium rate, τk the inter-arrival time between the (k − 1)-th and the k-th claim, and Xk the size of the k-th claim. The dependence structure among (Zk)k>0 is driven by a Markov chai...
Directory of Open Access Journals (Sweden)
Laktineh Imad
2010-04-01
Full Text Available This course constitutes a brief introduction to probability applications in high energy physics. First the mathematical tools related to the different probability concepts are introduced. The probability distributions which are commonly used in high energy physics and their characteristics are then shown and commented on. The central limit theorem and its consequences are analysed. Finally some numerical methods used to produce different kinds of probability distributions are presented. The full article (17 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.
Florescu, Ionut
2013-01-01
THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introductio
Ash, Robert B; Lukacs, E
1972-01-01
Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var
Directory of Open Access Journals (Sweden)
Chang Dennis
2011-07-01
Full Text Available Abstract Background Chronic work-related stress is a significant and independent risk factor for cardiovascular and metabolic diseases and associated mortality, particularly when compounded by a sedentary work environment. Heart rate variability (HRV) provides an estimate of parasympathetic and sympathetic autonomic control, and can serve as a marker of physiological stress. Hatha yoga is a physically demanding practice that can help to reduce stress; however, time constraints incurred by work and family life may limit participation. The purpose of the present study is to determine if a 10-week, worksite-based yoga program delivered during lunch hour can improve resting HRV and related physical and psychological parameters in sedentary office workers. Methods and design This is a parallel-arm RCT that will compare the outcomes of participants assigned to the experimental treatment group (yoga) to those assigned to a no-treatment control group. Participants randomized to the experimental condition will engage in a 10-week yoga program delivered at their place of work. The yoga sessions will be group-based, prescribed three times per week during lunch hour, and will be led by an experienced yoga instructor. The program will involve teaching beginner students safely and progressively over 10 weeks a yoga sequence that incorporates asanas (poses and postures), vinyasa (exercises), pranayama (breathing control) and meditation. The primary outcome of this study is the high frequency (HF) spectral power component of HRV (measured in absolute units, i.e. ms²), a measure of parasympathetic autonomic control. Secondary outcomes include additional frequency and time domains of HRV, and measures of physical functioning and psychological health status. Measures will be collected prior to and following the intervention period, and at 6 months follow-up to determine the effect of intervention withdrawal. Discussion This study will determine the effect of worksite
2013-01-01
Background Chronic work-related stress is an independent risk factor for cardiometabolic diseases and associated mortality, particularly when compounded by a sedentary work environment. The purpose of this study was to determine if an office worksite-based hatha yoga program could improve physiological stress, evaluated via heart rate variability (HRV), and associated health-related outcomes in a cohort of office workers. Methods Thirty-seven adults employed in university-based office positions were randomized upon the completion of baseline testing to an experimental or control group. The experimental group completed a 10-week yoga program prescribed three sessions per week during lunch hour (50 min per session). An experienced instructor led the sessions, which emphasized asanas (postures) and vinyasa (exercises). The primary outcome was the high frequency (HF) power component of HRV. Secondary outcomes included additional HRV parameters, musculoskeletal fitness (i.e. push-up, side-bridge, and sit & reach tests) and psychological indices (i.e. state and trait anxiety, quality of life and job satisfaction). Results All measures of HRV failed to change in the experimental group versus the control group, except that the experimental group significantly increased LF:HF (p = 0.04) and reduced pNN50 (p = 0.04) versus control, contrary to our hypotheses. Flexibility, evaluated via sit & reach test increased in the experimental group versus the control group (p yoga sessions (n = 11) to control (n = 19) yielded the same findings, except that the high adherers also reduced state anxiety (p = 0.02) and RMSSD (p = 0.05), and tended to improve the push-up test (p = 0.07) versus control. Conclusions A 10-week hatha yoga intervention delivered at the office worksite during lunch hour did not improve HF power or other HRV parameters. However, improvements in flexibility, state anxiety and musculoskeletal fitness were noted with high adherence
Assessing magnitude probability distribution through physics-based rupture scenarios
Hok, Sébastien; Durand, Virginie; Bernard, Pascal; Scotti, Oona
2016-04-01
When faced with a complex network of faults in a seismic hazard assessment study, the first question raised is to what extent the fault network is connected and what the probability is that an earthquake simultaneously ruptures a series of neighboring segments. Physics-based dynamic rupture models can provide useful insight as to which rupture scenario is most probable, provided that an exhaustive exploration of the variability of the input parameters necessary for the dynamic rupture modeling is accounted for. Given the random nature of some parameters (e.g. hypocenter location) and the limitation of our knowledge, we used a logic-tree approach in order to build the different scenarios and to be able to associate them with a probability. The methodology is applied to the three main faults located along the southern coast of the West Corinth rift. Our logic tree takes into account different hypotheses for: fault geometry, location of hypocenter, seismic cycle position, and fracture energy on the fault plane. The variability of these parameters is discussed, and the different values tested are weighted accordingly. 64 scenarios resulting from 64 parameter combinations were included. Sensitivity studies were done to illustrate which parameters control the variability of the results. Given the weights of the input parameters, we evaluated the probability of obtaining a full network break at 15%, while single-segment ruptures represent 50% of the scenarios. These rupture scenario probability distributions along the three faults of the West Corinth rift fault network can then be used as input to a seismic hazard calculation.
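A logic-tree calculation of this kind reduces to weighting each parameter combination by the product of its branch weights. The toy sketch below (a 16-scenario tree with invented branch names, weights, and a stand-in rule for the dynamic simulations, not the paper's 64-scenario tree) illustrates the bookkeeping:

```python
from itertools import product

# Toy logic tree: each input parameter takes a few weighted values
# (weights sum to 1 per branch).  All names and weights are invented.
geometry   = {"planar": 0.5, "segmented": 0.5}
hypocenter = {"west": 0.5, "east": 0.5}
cycle      = {"early": 0.3, "late": 0.7}
energy     = {"low": 0.4, "high": 0.6}

def full_break(g, h, c, e):
    # Stand-in for the dynamic rupture simulations; in this toy rule
    # the hypocenter branch does not influence the outcome.
    return g == "segmented" and c == "late" and e == "high"

p_full = 0.0
for (g, wg), (h, wh), (c, wc), (e, we) in product(
        geometry.items(), hypocenter.items(), cycle.items(), energy.items()):
    weight = wg * wh * wc * we   # scenario weight = product of branch weights
    if full_break(g, h, c, e):
        p_full += weight

print(round(p_full, 3))  # 0.5 * 0.7 * 0.6 = 0.21
```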
A Novel Approach to Probability
Kafri, Oded
2016-01-01
When P indistinguishable balls are randomly distributed among L distinguishable boxes, and considering the dense system in which P is much greater than L, our natural intuition tells us that the box with the average number of balls has the highest probability and that none of the boxes are empty; in reality, however, the probability of the empty box is always the highest. This stands in contradistinction to the sparse system, in which the number of balls is smaller than the number of boxes (e.g. energy distribution in a gas), where the average value has the highest probability. Here we show that when we postulate the requirement that all possible configurations of balls in the boxes have equal probabilities, a realistic "long tail" distribution is obtained. This formalism, when applied to sparse systems, converges to distributions in which the average is preferred. We calculate some of the distributions resulting from this postulate and obtain most of the known distributions in nature, namely, Zipf's law, Benford's law, part...
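The postulate of equiprobable configurations can be checked by simulation. The sketch below samples configurations uniformly via the stars-and-bars bijection and confirms that, in a dense system, an empty box is more probable than a box holding the average number of balls:

```python
import random

rng = random.Random(0)

def random_configuration(P, L):
    """One configuration of P indistinguishable balls in L distinguishable
    boxes, uniform over all C(P+L-1, L-1) configurations: by the
    stars-and-bars bijection, choose L-1 'bar' slots among P+L-1 positions."""
    bars = sorted(rng.sample(range(P + L - 1), L - 1))
    counts, prev = [], -1
    for b in bars + [P + L - 1]:
        counts.append(b - prev - 1)
        prev = b
    return counts

P, L, n = 20, 4, 20000            # dense system: P much greater than L
empty = average = 0
for _ in range(n):
    c0 = random_configuration(P, L)[0]
    empty += (c0 == 0)
    average += (c0 == P // L)

# An empty box is more likely than a box holding the average P/L balls
# (exact values: 3/23 ~ 0.130 versus ~0.077).
print(empty / n, average / n)
```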
Adolescents' misinterpretation of health risk probability expressions.
Cohn, L D; Schydlower, M; Foley, J; Copeland, R L
1995-05-01
Objective: To determine if differences exist between adolescents and physicians in their numerical translation of 13 commonly used probability expressions (eg, possibly, might). Design: Cross-sectional. Setting: Adolescent medicine and pediatric orthopedic outpatient units. Participants: 150 adolescents and 51 pediatricians, pediatric orthopedic surgeons, and nurses. Main outcome measures: Numerical ratings of the degree of certainty implied by 13 probability expressions (eg, possibly, probably). Results: Adolescents were significantly more likely than physicians to display comprehension errors, reversing or equating the meaning of terms such as probably/possibly and likely/possibly. Numerical expressions of uncertainty (eg, 30% chance) elicited less variability in ratings than lexical expressions of uncertainty (eg, possibly). Conclusions: Physicians should avoid using probability expressions such as probably, possibly, and likely when communicating health risks to children and adolescents. Numerical expressions of uncertainty may be more effective for conveying the likelihood of an illness than lexical expressions of uncertainty (eg, probably).
Difficulties related to Probabilities
Rosinger, Elemer Elad
2010-01-01
Probability theory is often used as if it had the same ontological status as, for instance, Euclidean geometry or Peano arithmetic. In this regard, several highly questionable aspects of probability theory are mentioned which have earlier been presented in two arXiv papers.
Dynamic update with probabilities
Van Benthem, Johan; Gerbrandy, Jelle; Kooi, Barteld
2009-01-01
Current dynamic-epistemic logics model different types of information change in multi-agent scenarios. We generalize these logics to a probabilistic setting, obtaining a calculus for multi-agent update with three natural slots: prior probability on states, occurrence probabilities in the relevant
Freund, John E
1993-01-01
Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.
Probability on compact Lie groups
Applebaum, David
2014-01-01
Probability theory on compact Lie groups deals with the interaction between “chance” and “symmetry,” a beautiful area of mathematics of great interest in its own right but which is now also finding increasing applications in statistics and engineering (particularly with respect to signal processing). The author gives a comprehensive introduction to some of the principal areas of study, with an emphasis on applicability. The most important topics presented are: the study of measures via the non-commutative Fourier transform, existence and regularity of densities, properties of random walks and convolution semigroups of measures, and the statistical problem of deconvolution. The emphasis on compact (rather than general) Lie groups helps readers to get acquainted with what is widely seen as a difficult field but which is also justified by the wealth of interesting results at this level and the importance of these groups for applications. The book is primarily aimed at researchers working in probability, s...
Rocchi, Paolo
2014-01-01
The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems in order to prove that the various interpretations of probability do not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered as one of the most important contributions in the analysis of probability interpretation in the last 10-15 years.
Flood hazard probability mapping method
Kalantari, Zahra; Lyon, Steve; Folkeson, Lennart
2015-04-01
In Sweden, spatially explicit approaches have been applied in various disciplines such as landslide modelling based on soil type data and flood risk modelling for large rivers. Regarding flood mapping, most previous studies have focused on complex hydrological modelling on a small scale, whereas just a few studies have used a robust GIS-based approach integrating most physical catchment descriptor (PCD) aspects on a larger scale. The aim of the present study was to develop a methodology for predicting the spatial probability of flooding on a general large scale. Factors such as topography, land use, soil data and other PCDs were analysed in terms of their relative importance for flood generation. The specific objective was to test the methodology using statistical methods to identify factors having a significant role in controlling flooding. A second objective was to generate an index quantifying the flood probability value for each cell, based on different weighted factors, in order to provide a more accurate analysis of potential high flood hazards than can be obtained using just a single variable. The ability of indicator covariance to capture flooding probability was determined for different watersheds in central Sweden. Using data from this initial investigation, a method to extract spatial data for multiple catchments and to produce soft data for statistical analysis was developed. It allowed flood probability to be predicted from spatially sparse data without compromising the significant hydrological features on the landscape. By using PCD data, realistic representations of high-probability flood regions were made, regardless of the magnitude of rain events. This in turn allowed objective quantification of the probability of floods at the field scale for future model development and watershed management.
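The index described in the second objective amounts to a weighted sum of rescaled factor layers per grid cell. A minimal sketch, with invented factor names and weights (not the study's calibrated ones):

```python
import numpy as np

# Hypothetical physical catchment descriptors per grid cell, each rescaled
# to [0, 1]; factor names, values, and weights are invented.
slope    = np.array([0.1, 0.8, 0.3])   # flatness score: flatter -> more flood-prone
land_use = np.array([0.9, 0.2, 0.5])
soil     = np.array([0.4, 0.6, 0.7])

weights = {"slope": 0.5, "land_use": 0.3, "soil": 0.2}   # sum to 1

# Flood probability index: weighted sum of the factor layers,
# one value per grid cell.
flood_index = (weights["slope"] * slope
               + weights["land_use"] * land_use
               + weights["soil"] * soil)
print(flood_index)  # one flood-probability score per grid cell
```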
Lloret Cabot, M.; Hicks, M.A.; Van den Eijnden, A.P.
2012-01-01
Spatial variability of soil properties is inherent in soil deposits, whether as a result of natural geological processes or engineering construction. It is therefore important to account for soil variability in geotechnical design in order to represent more realistically a soil’s in situ state. This
Directory of Open Access Journals (Sweden)
Shuiqing Yu
2013-01-01
Full Text Available This paper investigates the dynamic output feedback control for nonlinear networked control systems with both random packet dropout and random delay. Random packet dropout and random delay are modeled as two independent random variables. An observer-based dynamic output feedback controller is designed based upon the Lyapunov theory. The quantitative relationship of the dropout rate, transition probability matrix, and nonlinear level is derived by solving a set of linear matrix inequalities. Finally, an example is presented to illustrate the effectiveness of the proposed method.
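The modeling assumption in this abstract, packet dropout and delay as two independent random variables, can be sketched as a Bernoulli simulation (the rates below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Random packet dropout and random delay modeled as two independent
# Bernoulli random variables per time step; rates are illustrative.
p_dropout, p_delay = 0.1, 0.2
steps = 10000

dropout = rng.random(steps) < p_dropout   # True = measurement packet lost
delay   = rng.random(steps) < p_delay     # True = packet delayed one step

# A measurement reaches the controller on time only if it is neither
# dropped nor delayed; independence gives rate (1 - 0.1)(1 - 0.2) = 0.72.
on_time = ~dropout & ~delay
print(on_time.mean())  # close to 0.72
```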
Billingsley, Patrick
2012-01-01
Praise for the Third Edition "It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this
Hartmann, Stephan
2011-01-01
Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...
Hemmo, Meir
2012-01-01
What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton's laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world's foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive.
Shorack, Galen R
2017-01-01
This 2nd edition textbook offers a rigorous introduction to measure theoretic probability with particular attention to topics of interest to mathematical statisticians: a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For the teaching of probability theory to postgraduate statistics students, this is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method either prior to or alternative to a characteristic function presentation. Additionally, there is considerable emphasis placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes coverage of censored data martingales. The text includes measure theoretic...
Probability and Bayesian statistics
1987-01-01
This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers were included especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...
Kumar, Keshav
2017-06-23
Multivariate curve resolution alternating least squares (MCR-ALS) analysis is the most commonly used curve resolution technique. The MCR-ALS model is fitted using the alternating least squares (ALS) algorithm, which needs initialisation of either the contribution profiles or the spectral profiles of each factor. The contribution profiles can be initialised using evolving factor analysis; in principle, however, this approach requires that the data belong to a sequential process. The initialisation of the spectral profiles is usually carried out using a pure-variable approach such as the SIMPLISMA algorithm; this approach demands that each factor have pure variables in the data set. Despite these limitations, the existing approaches have been quite successful for initiating the MCR-ALS analysis. However, the present work proposes an alternate approach for initialising the spectral profiles by generating random variables within the limits spanned by the maxima and minima of each spectral variable of the data set. The proposed approach does not require that there be pure variables for each component of the multicomponent system or that the concentration direction follow a sequential process. The proposed approach is successfully validated using excitation-emission matrix fluorescence data sets acquired for certain fluorophores with significant spectral overlap. The calculated contribution and spectral profiles of these fluorophores are found to correlate well with the experimental results. In summary, the present work proposes an alternate way to initiate the MCR-ALS analysis.
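The proposed initialisation, drawing each spectral variable uniformly between its observed minimum and maximum, can be sketched as follows (matrix shapes and names are our own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_spectral_init(D, n_factors):
    """For each factor, draw every spectral variable (column of D)
    uniformly between the minimum and maximum observed for that
    variable in the data set, as the abstract proposes."""
    lo, hi = D.min(axis=0), D.max(axis=0)
    return rng.uniform(lo, hi, size=(n_factors, D.shape[1]))

# Hypothetical 5-sample x 4-wavelength data matrix (invented numbers)
D = np.abs(rng.normal(size=(5, 4)))
S0 = random_spectral_init(D, n_factors=2)
print(S0.shape)  # (2, 4): one random initial spectrum per factor
```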
Quantum computing and probability.
Ferry, David K
2009-11-25
Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.
Elements of quantum probability
Kummerer, B.; Maassen, Hans
1996-01-01
This is an introductory article presenting some basic ideas of quantum probability. From a discussion of simple experiments with polarized light and a card game we deduce the necessity of extending the body of classical probability theory. For a class of systems, containing classical systems with finitely many states, a probabilistic model is developed. It can describe, in particular, the polarization experiments. Some examples of quantum coin tosses are discussed, closely related to V.F.R....
Probability in quantum mechanics
Directory of Open Access Journals (Sweden)
J. G. Gilson
1982-01-01
Full Text Available By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.
7th High Dimensional Probability Meeting
Mason, David; Reynaud-Bouret, Patricia; Rosinski, Jan
2016-01-01
This volume collects selected papers from the 7th High Dimensional Probability meeting held at the Institut d'Études Scientifiques de Cargèse (IESC) in Corsica, France. High Dimensional Probability (HDP) is an area of mathematics that includes the study of probability distributions and limit theorems in infinite-dimensional spaces such as Hilbert spaces and Banach spaces. The most remarkable feature of this area is that it has resulted in the creation of powerful new tools and perspectives, whose range of application has led to interactions with other subfields of mathematics, statistics, and computer science. These include random matrices, nonparametric statistics, empirical processes, statistical learning theory, concentration of measure phenomena, strong and weak approximations, functional estimation, combinatorial optimization, and random graphs. The contributions in this volume show that HDP theory continues to thrive and develop new tools, methods, techniques and perspectives to analyze random phenome...
The perception of probability.
Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E
2014-01-01
We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Correlations and Non-Linear Probability Models
DEFF Research Database (Denmark)
Breen, Richard; Holm, Anders; Karlson, Kristian Bernt
2014-01-01
Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.
Experimental Probability in Elementary School
Andrew, Lane
2009-01-01
Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.
Improving Ranking Using Quantum Probability
Melucci, Massimo
2011-01-01
The paper shows that ranking information units by quantum probability differs from ranking them by classical probability when the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability at a given probability of ...
Collision Probability Analysis
DEFF Research Database (Denmark)
Hansen, Peter Friis; Pedersen, Preben Terndrup
1998-01-01
It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and the crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look out etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds ... probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving ...
Classic Problems of Probability
Gorroochurn, Prakash
2012-01-01
"A great book, one that I will certainly add to my personal library."—Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexin
Energy Technology Data Exchange (ETDEWEB)
Carr, D.B.; Tolley, H.D.
1982-12-01
This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.
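The estimation task this record describes can be illustrated with a minimal sketch: the empirical tail probability within the data, plus an exponential-tail extrapolation beyond it (the paper's weighted estimators are more refined; the mean-excess fit and function names here are my assumptions):

```python
import math

def empirical_tail(sample, t):
    """Empirical estimate of the tail probability P(X > t)."""
    return sum(1 for x in sample if x > t) / len(sample)

def exp_tail_extrapolate(sample, t, threshold):
    """Extrapolate P(X > t) for t beyond the bulk of the data by fitting an
    exponential to the excesses over `threshold` (MLE rate = 1/mean excess),
    echoing the idea of extrapolating from the shape of the density tail."""
    excesses = [x - threshold for x in sample if x > threshold]
    if not excesses:
        return 0.0
    rate = 1.0 / (sum(excesses) / len(excesses))
    return empirical_tail(sample, threshold) * math.exp(-rate * (t - threshold))
```

The extrapolated estimate decays smoothly past the sample maximum, where the raw empirical cdf would simply return zero.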
Introduction to imprecise probabilities
Augustin, Thomas; de Cooman, Gert; Troffaes, Matthias C M
2014-01-01
In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, includin
Boezen, HM; Postma, DS; Schouten, JP; Kerstjens, HAM; Rijcken, B
We investigated the coherence of bronchial hyperresponsiveness (BHR) and peak expiratory flow (PEF) variability in their relation to allergy markers and respiratory symptoms in 399 subjects (20-70 yr). Bronchial hyperresponsiveness to methacholine was defined by both the provocative dose causing a
Siegelaar, S. E.; Kulik, W.; van Lenthe, H.; Mukherjee, R.; Hoekstra, J. B. L.; DeVries, J. H.
2009-01-01
To assess the effect of three times daily mealtime inhaled insulin therapy compared with once daily basal insulin glargine therapy on 72-h glucose profiles, glucose variability and oxidative stress in type 2 diabetes patients. In an inpatient crossover study, 40 subjects with type 2 diabetes were
Integration, measure and probability
Pitt, H R
2012-01-01
Introductory treatment develops the theory of integration in a general context, making it applicable to other branches of analysis. More specialized topics include convergence theorems and random sequences and functions. 1963 edition.
Plotnitsky, Arkady
2010-01-01
Offers an exploration of the relationships between epistemology and probability in the work of Niels Bohr, Werner Heisenberg, and Erwin Schrodinger; in quantum mechanics; and in modern physics. This book considers the implications of these relationships and of quantum theory for our understanding of the nature of thinking and knowledge in general
Huygens' foundations of probability
Freudenthal, Hans
It is generally accepted that Huygens based probability on expectation. The term “expectation,” however, stems from Van Schooten's Latin translation of Huygens' treatise. A literal translation of Huygens' Dutch text shows more clearly what Huygens actually meant and how he proceeded.
Counterexamples in probability
Stoyanov, Jordan M
2013-01-01
While most mathematical examples illustrate the truth of a statement, counterexamples demonstrate a statement's falsity. Enjoyable topics of study, counterexamples are valuable tools for teaching and learning. The definitive book on the subject in regards to probability, this third edition features the author's revisions and corrections plus a substantial new appendix.
Probably Almost Bayes Decisions
DEFF Research Database (Denmark)
Anoulova, S.; Fischer, Paul; Poelt, S.
1996-01-01
In this paper, we investigate the problem of classifying objects which are given by feature vectors with Boolean entries. Our aim is to "(efficiently) learn probably almost optimal classifications" from examples. A classical approach in pattern recognition uses empirical estimations of the Bayesian...
Univariate Probability Distributions
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
Indian Academy of Sciences (India)
The Theory of Probability. Andrei Nikolaevich Kolmogorov. Classics. Resonance – Journal of Science Education, Volume 3, Issue 4, April 1998, pp. 103-112. Permanent link: http://www.ias.ac.in/article/fulltext/reso/003/04/0103-0112. Author Affiliations.
Probability Theory Without Tears!
Indian Academy of Sciences (India)
Probability Theory Without Tears! S Ramasubramanian. Book Review. Resonance – Journal of Science Education, Volume 1, Issue 2, February 1996, pp. 115-116. Permanent link: http://www.ias.ac.in/article/fulltext/reso/001/02/0115-0116 ...
African Journals Online (AJOL)
Willem Scholtz
internet – the (probably mostly white) public's interest in the so-called Border War is ostensibly at an all-time high. By far most of the publications are written by ex- ... understanding of this very important episode in the history of Southern Africa. It was, therefore, with some anticipation that one waited for this book, which.
Indian Academy of Sciences (India)
important practical applications in statistical quality control. Of a similar kind are the laws of probability for the scattering of missiles, which are basic in the ..... deviations for different ranges for each type of gun and of shell are found empirically in firing practice on an artillery range. But the subsequent solution of all possible ...
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To assess the fit of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
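The closed form stated in the abstract, w(p) = (1 - k log p)^(-1), can be sketched directly; the expected waiting time 1/p of the geometric distribution is the quantity being discounted:

```python
import math

def w(p, k):
    """Probability weighting function of the expected value model for
    hyperbolic time discounting: w(p) = (1 - k*log(p))**(-1),
    for 0 < p <= 1 and discount parameter k > 0."""
    if not (0.0 < p <= 1.0):
        raise ValueError("p must lie in (0, 1]")
    return 1.0 / (1.0 - k * math.log(p))

def expected_waiting_time(p):
    """Mean of a geometric random variable with success probability p."""
    return 1.0 / p
```

Note that w(1) = 1 and w is strictly increasing in p, as a weighting function should be; k controls how strongly small probabilities are distorted.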
Using High-Probability Foods to Increase the Acceptance of Low-Probability Foods
Meier, Aimee E.; Fryling, Mitch J.; Wallace, Michele D.
2012-01-01
Studies have evaluated a range of interventions to treat food selectivity in children with autism and related developmental disabilities. The high-probability instructional sequence is one intervention with variable results in this area. We evaluated the effectiveness of a high-probability sequence using 3 presentations of a preferred food on…
Frič, Roman; Papčo, Martin
2010-12-01
Motivated by IF-probability theory (intuitionistic fuzzy), we study n-component probability domains in which each event represents a body of competing components and the range of a state represents a simplex S_n of n-tuples of possible rewards; the sum of the rewards is a number from [0,1]. For n=1 we get fuzzy events, for example a bold algebra, and the corresponding fuzzy probability theory can be developed within the category ID of D-posets (equivalently, effect algebras) of fuzzy sets and sequentially continuous D-homomorphisms. For n=2 we get IF-events, i.e., pairs (mu, nu) of fuzzy sets mu, nu ∈ [0,1]^X such that mu(x) + nu(x) ≤ 1 for all x ∈ X, but we order our pairs (events) coordinatewise. Hence the structure of IF-events (where (mu1, nu1) ≤ (mu2, nu2) whenever mu1 ≤ mu2 and nu2 ≤ nu1) is different and, consequently, the resulting IF-probability theory models a different principle. The category ID is cogenerated by I = [0,1] (objects of ID are subobjects of powers I^X), has nice properties, and basic probabilistic notions and constructions are categorical; for example, states are morphisms. We introduce the category S_nD cogenerated by S_n = {(x1, x2, ..., xn) ∈ I^n : x1 + x2 + ... + xn ≤ 1}, carrying the coordinatewise partial order, difference, and sequential convergence, and we show how basic probability notions can be defined within S_nD.
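Two of the concrete definitions in this record, the cogenerating simplex S_n and the IF-event order, are easy to make executable (a sketch only; the categorical structure itself is not modelled, and the function names are mine):

```python
def in_simplex(point):
    """Membership test for S_n = {(x1,...,xn) in [0,1]^n : x1+...+xn <= 1}."""
    return all(0.0 <= x <= 1.0 for x in point) and sum(point) <= 1.0

def if_event_leq(e1, e2):
    """IF-event order: (mu1, nu1) <= (mu2, nu2) iff mu1 <= mu2 and nu2 <= nu1,
    checked pointwise for fuzzy sets given as equal-length lists of values."""
    (mu1, nu1), (mu2, nu2) = e1, e2
    return (all(a <= b for a, b in zip(mu1, mu2))
            and all(b <= a for a, b in zip(nu1, nu2)))
```

The reversed inequality on the nu-component is what distinguishes this order from the plain coordinatewise one, which is the structural difference the abstract emphasizes.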
Directory of Open Access Journals (Sweden)
Zhuo Chen
2015-01-01
Full Text Available Objective. The aim of this systematic review is to evaluate the effect of Chinese medicine combined with conventional therapy on blood pressure variability (BPV) in hypertension patients. Methods. All randomized clinical trials (RCTs) comparing Chinese medicine with no intervention or placebo on the basis of conventional therapy were included. Data extraction, analyses, and quality assessment were performed according to the Cochrane standards. Results. We included 13 RCTs and assessed risk of bias for all the trials. Chinese medicine has a significant effect in lowering blood pressure (BP), reducing BPV in the form of standard deviation (SD) or coefficient of variability (CV), improving the nighttime BP decline rate, and reversing abnormal rhythm of BP. Conclusions. Chinese medicine was safe and showed beneficial effects on BPV in hypertension patients. However, more rigorous trials with high quality are warranted to give a high level of evidence before recommending Chinese medicine as an alternative or complementary medicine to improve BPV in hypertension patients.
Negative probability in the framework of combined probability
Burgin, Mark
2013-01-01
Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...
Probability learning and Piagetian probability conceptions in children 5 to 12 years old.
Kreitler, S; Zigler, E; Kreitler, H
1989-11-01
This study focused on the relations between performance on a three-choice probability-learning task and conceptions of probability as outlined by Piaget concerning mixture, normal distribution, random selection, odds estimation, and permutations. The probability-learning task and four Piagetian tasks were administered randomly to 100 male and 100 female, middle SES, average IQ children in three age groups (5 to 6, 8 to 9, and 11 to 12 years old) from different schools. Half the children were from Middle Eastern backgrounds, and half were from European or American backgrounds. As predicted, developmental level of probability thinking was related to performance on the probability-learning task. The more advanced the child's probability thinking, the higher his or her level of maximization and hypothesis formulation and testing and the lower his or her level of systematically patterned responses. The results suggest that the probability-learning and Piagetian tasks assess similar cognitive skills and that performance on the probability-learning task reflects a variety of probability concepts.
Paradoxes in probability theory
Eckhardt, William
2013-01-01
Paradoxes provide a vehicle for exposing misinterpretations and misapplications of accepted principles. This book discusses seven paradoxes surrounding probability theory. Some remain the focus of controversy; others have allegedly been solved, however the accepted solutions are demonstrably incorrect. Each paradox is shown to rest on one or more fallacies. Instead of the esoteric, idiosyncratic, and untested methods that have been brought to bear on these problems, the book invokes uncontroversial probability principles, acceptable both to frequentists and subjectivists. The philosophical disputation inspired by these paradoxes is shown to be misguided and unnecessary; for instance, startling claims concerning human destiny and the nature of reality are directly related to fallacious reasoning in a betting paradox, and a problem analyzed in philosophy journals is resolved by means of a computer program.
Probability, Nondeterminism and Concurrency
DEFF Research Database (Denmark)
Varacca, Daniele
Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particular ... there is no categorical distributive law between them. We introduce the powerdomain of indexed valuations which modifies the usual probabilistic powerdomain to take more detailed account of where probabilistic choices are made. We show the existence of a distributive law between the powerdomain of indexed valuations ... In the second part of the thesis we provide an operational reading of continuous valuations on certain domains (the distributive concrete domains of Kahn and Plotkin) through the model of probabilistic event structures. Event structures ... reveals the computational intuition lying behind the mathematics.
Waste Package Misload Probability
Energy Technology Data Exchange (ETDEWEB)
J.K. Knudsen
2001-11-20
The objective of this calculation is to calculate the probability of occurrence for fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the event. Using this information, a probability of occurrence will be calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.
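The counting step the record describes (events in a category divided by total demands) can be sketched minimally; a real probabilistic risk assessment would add uncertainty bounds, and the function name and inputs here are illustrative assumptions:

```python
def event_probability(n_events, n_movements):
    """Point estimate of the per-movement probability of an FA misload
    (or damage) event: observed events divided by total FA movements."""
    if n_movements <= 0:
        raise ValueError("n_movements must be positive")
    return n_events / n_movements
```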
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
Probability theory and applications
Hsu, Elton P
1999-01-01
This volume, with contributions by leading experts in the field, is a collection of lecture notes of the six minicourses given at the IAS/Park City Summer Mathematics Institute. It introduces advanced graduates and researchers in probability theory to several of the currently active research areas in the field. Each course is self-contained with references and contains basic materials and recent results. Topics include interacting particle systems, percolation theory, analysis on path and loop spaces, and mathematical finance. The volume gives a balanced overview of the current status of probability theory. An extensive bibliography for further study and research is included. This unique collection presents several important areas of current research and a valuable survey reflecting the diversity of the field.
Superpositions of probability distributions.
Jizba, Petr; Kleinert, Hagen
2008-09-01
Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v = sigma^2 play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much more easily than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
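A quick numerical illustration of the premise here: superposing zero-mean Gaussians with different variances v produces a heavier-tailed (positive excess kurtosis) distribution than any single Gaussian. This discrete two-variance mixture is my simplification; the paper's smearing distributions over v are continuous:

```python
import random
import statistics

def gaussian_variance_mixture(n, variances, seed=0):
    """Sample a superposition of zero-mean Gaussians: for each draw, pick a
    variance v uniformly from `variances`, then draw from N(0, v)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, rng.choice(variances) ** 0.5) for _ in range(n)]

def excess_kurtosis(xs):
    """Sample excess kurtosis; 0 for a Gaussian, > 0 for heavier tails."""
    m = statistics.fmean(xs)
    v = statistics.fmean((x - m) ** 2 for x in xs)
    m4 = statistics.fmean((x - m) ** 4 for x in xs)
    return m4 / v**2 - 3.0
```

For an equal mixture of variances 0.2 and 5.0 the theoretical excess kurtosis is about 2.6, visibly non-Gaussian, which is the memory-effect signature the abstract alludes to.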
Comparing linear probability model coefficients across groups
DEFF Research Database (Denmark)
Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt
2015-01-01
This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
Concurrency meets probability: theory and practice (abstract)
Katoen, Joost P.
Treating random phenomena in concurrency theory has a long tradition. Petri nets [18, 10] and process algebras [14] have been extended with probabilities. The same applies to behavioural semantics such as strong and weak (bi)simulation [1], and testing pre-orders [5]. Beautiful connections between
Sampling, Probability Models and Statistical Reasoning -RE ...
Indian Academy of Sciences (India)
eligible voters who support a particular political party. A random sample of size n is selected from this population and suppose k voters support this party. What is a good estimate of the required proportion? How do we obtain a probability model for the experi- ment just conducted? Let us examine the following simple ex-.
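The standard answer to the question this excerpt poses: model the count k as Binomial(n, p), use the sample proportion k/n as the estimate, and attach a normal-approximation interval. A minimal sketch (function name mine):

```python
import math

def estimate_proportion(k, n, z=1.96):
    """Estimate the population proportion from k supporters in a simple
    random sample of size n, with a normal-approximation 95% interval."""
    p_hat = k / n
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat, (max(0.0, p_hat - z * se), min(1.0, p_hat + z * se))
```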
Directory of Open Access Journals (Sweden)
Peter M Wayne
2015-06-01
Full Text Available BACKGROUND: Tai Chi (TC) exercise improves balance and reduces falls in older, health-impaired adults. TC's impact on dual task (DT) gait parameters predictive of falls, especially in healthy active older adults, however, is unknown. PURPOSE: To compare differences in usual and DT gait between long-term TC-expert practitioners and age-/gender-matched TC-naïve adults, and to determine the effects of short-term TC training on gait in healthy, non-sedentary older adults. METHODS: A cross-sectional study compared gait in healthy TC-naïve and TC-expert (24.5±12 yrs experience) older adults. TC-naïve adults then completed a 6-month, two-arm, wait-list randomized clinical trial of TC training. Gait speed and stride time variability (%) were assessed during 90 sec trials of undisturbed and cognitive DT (serial subtractions) conditions. RESULTS: During DT, gait speed decreased (p<0.003) and stride time variability increased (p<0.004) in all groups. Cross-sectional comparisons indicated that stride time variability was lower in the TC-expert vs. TC-naïve group, significantly so during DT (2.11% vs. 2.55%; p=0.027); in contrast, gait speed during both undisturbed and DT conditions did not differ between groups. Longitudinal analyses of TC-naïve adults randomized to 6 months of TC training or usual care identified improvement in DT gait speed in both groups. A small improvement in DT stride time variability (effect size = 0.2) was estimated with TC training, but no significant differences between groups were observed. Potentially important improvements after TC training could not be excluded in this small study. CONCLUSIONS: In healthy active older adults, positive effects of short- and long-term TC were observed only under cognitively challenging DT conditions and only for stride time variability. DT stride variability offers a potentially sensitive metric for monitoring TC's impact on fall risk with healthy older adults.
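The record reports stride time variability in percent; a common operationalization (my assumption here, the record does not define it) is the coefficient of variation of the stride-time series:

```python
import statistics

def stride_time_variability(stride_times):
    """Stride time variability as coefficient of variation (%):
    100 * SD / mean of the stride-time series (seconds or ms)."""
    mean = statistics.fmean(stride_times)
    sd = statistics.stdev(stride_times)
    return 100.0 * sd / mean
```

Because the CV is scale-free, values recorded in seconds or milliseconds are directly comparable, which makes it a convenient variability metric across studies.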
Structural Minimax Probability Machine.
Gu, Bin; Sun, Xingming; Sheng, Victor S
2017-07-01
Minimax probability machine (MPM) is an interesting discriminative classifier based on generative prior knowledge. It can directly estimate the probabilistic accuracy bound by minimizing the maximum probability of misclassification. The structural information of data is an effective way to represent prior knowledge, and has been found to be vital for designing classifiers in real-world problems. However, MPM only considers the prior probability distribution of each class with a given mean and covariance matrix, which does not efficiently exploit the structural information of data. In this paper, we use two finite mixture models to capture the structural information of the data from binary classification. For each subdistribution in a finite mixture model, only its mean and covariance matrix are assumed to be known. Based on the finite mixture models, we propose a structural MPM (SMPM). SMPM can be solved effectively by a sequence of the second-order cone programming problems. Moreover, we extend a linear model of SMPM to a nonlinear model by exploiting kernelization techniques. We also show that the SMPM can be interpreted as a large margin classifier and can be transformed to support vector machine and maxi-min margin machine under certain special conditions. Experimental results on both synthetic and real-world data sets demonstrate the effectiveness of SMPM.
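The bound this record builds on can be sketched for a fixed linear rule: given class means and covariances, MPM's worst-case correct-classification probability is alpha = k^2/(1+k^2) with k as below. This is only the evaluation of the bound, not the second-order cone solver the paper uses, and the function names are mine:

```python
import math

def mpm_bound(a, mu1, mu2, S1, S2):
    """Worst-case accuracy bound alpha = k^2/(1+k^2) for a linear separator
    with direction a, where
    k = a.(mu1 - mu2) / (sqrt(a'S1a) + sqrt(a'S2a)),
    given only the means and covariance matrices of the two classes."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    def quad(M, v):
        return sum(v[i] * sum(M[i][j] * v[j] for j in range(len(v)))
                   for i in range(len(v)))
    diff = [x - y for x, y in zip(mu1, mu2)]
    k = dot(a, diff) / (math.sqrt(quad(S1, a)) + math.sqrt(quad(S2, a)))
    return k * k / (1.0 + k * k)
```

MPM chooses a to maximise this alpha; SMPM replaces each class's single (mean, covariance) pair with a finite mixture of such pairs.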
Whittle, Peter
1992-01-01
This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...
Algebraic polynomials with random coefficients
Directory of Open Access Journals (Sweden)
K. Farahmand
2002-01-01
Full Text Available This paper provides an asymptotic value for the mathematical expected number of points of inflection of a random polynomial of the form a0(ω) + a1(ω)C(n,1)^(1/2)x + a2(ω)C(n,2)^(1/2)x^2 + ... + an(ω)C(n,n)^(1/2)x^n when n is large, where C(n,j) denotes the binomial coefficient. The coefficients {aj(ω)}, j = 0, ..., n, ω ∈ Ω, are assumed to be a sequence of independent normally distributed random variables with means zero and variance one, each defined on a fixed probability space (A, Ω, Pr). A special case of dependent coefficients is also studied.
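The expected number of inflection points described here can be probed by Monte Carlo: draw the binomial-weighted Gaussian coefficients, then count sign changes of the second derivative on a grid (a rough sketch under my own choices of grid, interval, and trial count, not the paper's asymptotic analysis):

```python
import math
import random

def expected_inflections(n, trials=200, seed=1):
    """Monte Carlo estimate of the mean number of inflection points of
    P(x) = sum_j a_j * C(n, j)**0.5 * x**j with a_j iid N(0, 1),
    counted as sign changes of P'' on a grid over [-3, 3]."""
    rng = random.Random(seed)
    grid = [-3.0 + 6.0 * i / 600 for i in range(601)]
    total = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) * math.comb(n, j) ** 0.5
             for j in range(n + 1)]
        def d2(x):  # second derivative of P at x
            return sum(j * (j - 1) * a[j] * x ** (j - 2)
                       for j in range(2, n + 1))
        vals = [d2(x) for x in grid]
        total += sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)
    return total / trials
```

For n = 2 the second derivative is constant, so the count is exactly zero; for larger n the estimate grows, in line with the asymptotic behaviour the paper derives.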
Frequentist probability and frequentist statistics
Energy Technology Data Exchange (ETDEWEB)
Neyman, J.
1977-01-01
A brief, nontechnical outline is given of the author's views on the "frequentist" theory of probability and the "frequentist" theory of statistics; their applications are illustrated in a few domains of study of nature. The phenomenon of apparently stable relative frequencies as the source of the frequentist theories of probability and statistics is taken up first. Three steps are set out: empirical establishment of apparently stable long-run relative frequencies of events judged interesting, as they develop in nature; guessing and then verifying the chance mechanism, the repeated operation of which produced the observed frequencies--this is a problem of frequentist probability theory; using the hypothetical chance mechanism of the phenomenon studied to deduce rules of adjusting our actions to the observations to ensure the highest "measure" of "success". Illustrations of the three steps are given. The theory of testing statistical hypotheses is sketched: basic concepts, simple and composite hypotheses, hypothesis tested, importance of the power of the test used, practical applications of the theory of testing statistical hypotheses. Basic ideas and an example of the randomization of experiments are discussed, and an "embarrassing" example is given. The problem of statistical estimation is sketched: example of an isolated problem, example of connected problems treated routinely, empirical Bayes theory, point estimation. The theory of confidence intervals is outlined: basic concepts, anticipated misunderstandings, construction of confidence intervals: regions of acceptance. Finally, the theory of estimation by confidence intervals or regions is considered briefly. 4 figures. (RWR)
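The "apparently stable long-run relative frequencies" that open Neyman's outline are easy to exhibit by simulation (an illustrative sketch of the phenomenon, not anything from the paper itself):

```python
import random

def running_relative_frequency(p, n, seed=42):
    """Relative frequency of success after each of n Bernoulli(p) trials,
    illustrating the stabilisation of long-run frequencies that the
    frequentist theories of probability and statistics start from."""
    rng = random.Random(seed)
    freqs, hits = [], 0
    for i in range(1, n + 1):
        hits += rng.random() < p
        freqs.append(hits / i)
    return freqs
```

Early entries of the sequence swing widely; after many trials the running frequency settles near p, which is the empirical regularity step one of the outline refers to.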
Papakostas, George I; Martinson, Max A; Fava, Maurizio; Iovieno, Nadia
2016-05-01
The aim of this work is to compare the efficacy of pharmacologic agents for the treatment of major depressive disorder (MDD) and bipolar depression. MEDLINE/PubMed databases were searched for studies published in English between January 1980 and September 2014 by cross-referencing the search term placebo with each of the antidepressant agents identified and with bipolar. The search was supplemented by manual bibliography review. We selected double-blind, randomized, placebo-controlled trials of antidepressant monotherapies for the treatment of MDD and of oral drug monotherapies for the treatment of bipolar depression. 196 trials in MDD and 19 trials in bipolar depression were found eligible for inclusion in our analysis. Data were extracted by one of the authors and checked for accuracy by a second one. Data extracted included year of publication, number of patients randomized, probability of receiving placebo, duration of the trial, baseline symptom severity, dosing schedule, study completion rates, and clinical response rates. Response rates for drug versus placebo in trials of MDD and bipolar depression were 52.7% versus 37.5% and 54.7% versus 40.5%, respectively. The random-effects meta-analysis indicated that drug therapy was more effective than placebo in both MDD (risk ratio for response = 1.373) and bipolar depression (risk ratio = 1.257), with a statistically significant difference in effect size between MDD and bipolar depression trials in favor of MDD (P = .008). Although a statistically significantly greater treatment effect size was noted in MDD relative to bipolar depression studies, the absolute magnitude of the difference was numerically small. Therefore, the present study suggests no clinically significant differences in the overall short-term efficacy of pharmacologic monotherapies for MDD and bipolar depression. © Copyright 2016 Physicians Postgraduate Press, Inc.
Sharma, Vivek Kumar; Subramanian, Senthil Kumar; Radhakrishnan, Krishnakumar; Rajendran, Rajathi; Ravindran, Balasubramanian Sulur; Arunachalam, Vinayathan
2017-05-01
Physical inactivity contributes to many health issues. The WHO-recommended physical activity for adolescents encompasses aerobic, resistance, and bone strengthening exercises aimed at achieving health-related physical fitness. Heart rate variability (HRV) and maximal aerobic capacity (VO2max) are considered as noninvasive measures of cardiovascular health. The objective of this study is to compare the effect of structured and unstructured physical training on maximal aerobic capacity and HRV among adolescents. We designed a single blinded, parallel, randomized active-controlled trial (Registration No. CTRI/2013/08/003897) to compare the physiological effects of 6 months of globally recommended structured physical activity (SPA), with that of unstructured physical activity (USPA) in healthy school-going adolescents. We recruited 439 healthy student volunteers (boys: 250, girls: 189) in the age group of 12-17 years. Randomization across the groups was done using age and gender stratified randomization method, and the participants were divided into two groups: SPA (n=219, boys: 117, girls: 102) and USPA (n=220, boys: 119, girls: 101). Depending on their training status and gender the participants in both SPA and USPA groups were further subdivided into the following four sub-groups: SPA athlete boys (n=22) and girls (n=17), SPA nonathlete boys (n=95) and girls (n=85), USPA athlete boys (n=23) and girls (n=17), and USPA nonathlete boys (n=96) and girls (n=84). We recorded HRV, body fat%, and VO2 max using Rockport Walk Fitness test before and after the intervention. Maximum aerobic capacity and heart rate variability increased significantly while heart rate, systolic blood pressure, diastolic blood pressure, and body fat percentage decreased significantly after both SPA and USPA intervention. However, the improvement was more in SPA as compared to USPA. SPA is more beneficial for improving cardiorespiratory fitness, HRV, and reducing body fat percentage in terms of
Indian Academy of Sciences (India)
IAS Admin
gambling problems in 18th-century Europe. ... (random) phenomena, especially those evolving over time. The study of the motion of physical objects over time by Newton led to his famous three laws of motion as well as many important developments in the theory of ordinary differential equations. Similarly, the construction ...
Flueck, Joelle Leonie; Schaufelberger, Fabienne; Lienert, Martina; Schäfer Olstad, Daniela; Wilhelm, Matthias; Perret, Claudio
2016-01-01
Caffeine increases sympathetic nerve activity in healthy individuals. Such modulation of nervous system activity can be tracked by assessing heart rate variability. This study aimed to investigate the influence of caffeine on time- and frequency-domain heart rate variability parameters, blood pressure, and tidal volume in paraplegic and tetraplegic compared to able-bodied participants. Heart rate variability was measured in the supine and sitting positions pre and post ingestion of either placebo or 6 mg caffeine in 12 able-bodied, 9 paraplegic, and 7 tetraplegic participants in a placebo-controlled, randomized, and double-blind study design. Metronomic breathing was applied (0.25 Hz) and tidal volume was recorded during heart rate variability assessment. Blood pressure and plasma caffeine and epinephrine concentrations were analyzed pre and post ingestion. Most parameters of heart rate variability did not significantly change post caffeine ingestion compared to placebo. Tidal volume significantly increased post caffeine ingestion in able-bodied (p = 0.021) and paraplegic (p = 0.036) but not in tetraplegic participants (p = 0.34). Systolic and diastolic blood pressure increased significantly post caffeine in able-bodied (systolic: p = 0.003; diastolic: p = 0.021) and tetraplegic (systolic: p = 0.043; diastolic: p = 0.042) but not in paraplegic participants (systolic: p = 0.09; diastolic: p = 0.33). Plasma caffeine concentrations were significantly increased post caffeine ingestion in all three groups of participants. The influence of caffeine on the autonomic nervous system seems to depend on the level of lesion and the extent of the impairment. Therefore, tetraplegic participants may be less influenced by caffeine ingestion. ClinicalTrials.gov NCT02083328.
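The time-domain HRV parameters assessed in studies like this one are computed directly from a series of RR intervals. A minimal sketch of two standard indices, SDNN and RMSSD (the RR values below are invented for illustration, not study data):

```python
import math

def hrv_time_domain(rr_ms):
    """Compute basic time-domain HRV indices from RR intervals (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: sample standard deviation of all RR intervals
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive differences
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_rr, sdnn, rmssd

rr = [812, 790, 805, 830, 818, 795, 801]  # illustrative RR series in ms
mean_rr, sdnn, rmssd = hrv_time_domain(rr)
print(f"mean RR {mean_rr:.1f} ms, SDNN {sdnn:.1f} ms, RMSSD {rmssd:.1f} ms")
```

Frequency-domain indices (such as the low- and high-frequency powers) would additionally require spectral estimation of the RR series, which is omitted here.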
Monthus, Cécile; Garel, Thomas
2008-02-01
We study the wetting transition and the directed polymer delocalization transition on diamond hierarchical lattices. These two phase transitions with frozen disorder correspond to the critical points of quadratic renormalizations of the partition function. (These exact renormalizations on diamond lattices can also be considered as approximate Migdal-Kadanoff renormalizations for hypercubic lattices.) In terms of the rescaled partition function z=Z/Z(typ), we find that the critical point corresponds to a fixed point distribution with a power-law tail P(c)(z) ~ Phi(ln z)/z^(1+mu) as z-->+infinity [up to some subleading logarithmic correction Phi(ln z)], so that all moments z^n with n>mu diverge. For the wetting transition, the first moment diverges (case 0 < mu < 1), whereas for the directed polymer transition the first moment remains finite but the second moment diverges (case 1 < mu < 2). The renormalization at the fixed point coincides with the transfer matrix describing a directed polymer on the Cayley tree, but the random weights determined by the fixed point distribution P(c)(z) are broadly distributed. This induces some changes in the traveling wave solutions with respect to the usual case of more narrow distributions.
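The divergence criterion quoted above, that moments z^n diverge exactly when n exceeds the tail exponent mu, can be checked numerically: for a Pareto-type density p(z) = mu z^(-(1+mu)) on z >= 1, the truncated moment with n < mu converges as the cutoff grows, while the one with n > mu increases without bound. A small sketch (parameters illustrative):

```python
import math

def truncated_moment(n, mu, zmax, steps=200000):
    # E[z^n] for the Pareto density p(z) = mu * z^(-(1+mu)), z >= 1,
    # integrated up to cutoff zmax by the midpoint rule on a log grid.
    total = 0.0
    a, b = math.log(1.0), math.log(zmax)
    h = (b - a) / steps
    for i in range(steps):
        z = math.exp(a + (i + 0.5) * h)
        total += z ** n * mu * z ** (-(1 + mu)) * z * h  # dz = z d(ln z)
    return total

mu = 1.5
for zmax in (1e2, 1e4, 1e6):
    m1 = truncated_moment(1, mu, zmax)  # n=1 < mu: converges to mu/(mu-1) = 3
    m2 = truncated_moment(2, mu, zmax)  # n=2 > mu: grows with the cutoff
    print(f"zmax={zmax:.0e}: <z>={m1:.3f}, <z^2>={m2:.1f}")
```

For mu = 1.5 the first moment settles near mu/(mu-1) = 3, while the second moment grows like the square root of the cutoff.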
Badura-Brack, Amy S; Naim, Reut; Ryan, Tara J; Levy, Ofir; Abend, Rany; Khanna, Maya M; McDermott, Timothy J; Pine, Daniel S; Bar-Haim, Yair
2015-12-01
Attention allocation to threat is perturbed in patients with posttraumatic stress disorder (PTSD), with some studies indicating excess attention to threat and others indicating fluctuations between threat vigilance and threat avoidance. The authors tested the efficacy of two alternative computerized protocols, attention bias modification and attention control training, for rectifying threat attendance patterns and reducing PTSD symptoms. Two randomized controlled trials compared the efficacy of attention bias modification and attention control training for PTSD: one in Israel Defense Forces veterans and one in U.S. military veterans. Both utilized variants of the dot-probe task, with attention bias modification designed to shift attention away from threat and attention control training balancing attention allocation between threat and neutral stimuli. PTSD symptoms, attention bias, and attention bias variability were measured before and after treatment. Both studies indicated significant symptom improvement after treatment, favoring attention control training. Additionally, both studies found that attention control training, but not attention bias modification, significantly reduced attention bias variability. Finally, a combined analysis of the two samples suggested that reductions in attention bias variability partially mediated improvement in PTSD symptoms. Attention control training may address aberrant fluctuations in attention allocation in PTSD, thereby reducing PTSD symptoms. Further study of treatment efficacy and its underlying neurocognitive mechanisms is warranted.
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to account for the interdependencies of the multivariate…
Statistics, Probability and Chaos
Berliner, L. Mark
1992-01-01
The study of chaotic behavior has received substantial attention in many disciplines. Although often based on deterministic models, chaos is associated with complex, "random" behavior and forms of unpredictability. Mathematical models and definitions associated with chaos are reviewed. The relationship between the mathematics of chaos and probabilistic notions, including ergodic theory and uncertainty modeling, is emphasized. Popular data analytic methods appearing in the literature are disc...
Random walks on reductive groups
Benoist, Yves
2016-01-01
The classical theory of Random Walks describes the asymptotic behavior of sums of independent identically distributed random real variables. This book explains the generalization of this theory to products of independent identically distributed random matrices with real coefficients. Under the assumption that the action of the matrices is semisimple (or, equivalently, that the Zariski closure of the group generated by these matrices is reductive) and under suitable moment assumptions, it is shown that the norm of the products of such random matrices satisfies a number of classical probabilistic laws. This book includes necessary background on the theory of reductive algebraic groups, probability theory, and operator theory, thereby providing a modern introduction to the topic.
Probability of pregnancy in beef heifers
Directory of Open Access Journals (Sweden)
D.P. Faria
2014-12-01
This study aimed to evaluate the influence of initial weight, initial age, average daily gain in initial weight, average daily gain in total weight, and genetic group on the probability of pregnancy in primiparous females of the Nellore, 1/2 Simmental + 1/2 Nellore, and 3/4 Nellore + 1/4 Simmental genetic groups. Data were collected from the livestock file of the Farpal Farm, located in the municipality of Jaíba, Minas Gerais State, Brazil. The pregnancy diagnosis results (success = 1, failure = 0) were used to determine the probability of pregnancy, which was modeled using logistic regression via the Proc Logistic procedure available in SAS (Statistical..., 2004) software, from the regressor variables initial weight, average daily gain in initial weight, average daily gain in total weight, and genetic group. Initial weight (IW) was the most important variable in the probability of pregnancy in heifers, and 1-kg increments in IW allowed for increases of 5.8, 9.8, and 3.4% in the probability of pregnancy in Nellore, 1/2 Simmental + 1/2 Nellore, and 3/4 Nellore + 1/4 Simmental heifers, respectively. Initial age influenced the probability of pregnancy in Nellore heifers. From the estimates of the effects of each variable it was possible to determine the minimum initial weights for each genetic group. This information can be used to monitor the development of heifers until the breeding season and increase the pregnancy rate.
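A logistic regression of this kind maps a linear predictor to a pregnancy probability, and each 1-kg increment in initial weight multiplies the odds by exp(b1). A minimal sketch with invented coefficients (b0 and b1 are placeholders, not the paper's fitted values):

```python
import math

# Hypothetical coefficients for illustration only:
# logit(p) = b0 + b1 * initial_weight
b0, b1 = -14.0, 0.055

def p_pregnancy(initial_weight_kg):
    z = b0 + b1 * initial_weight_kg
    return 1.0 / (1.0 + math.exp(-z))

for w in (260, 270, 280, 290):
    print(f"IW = {w} kg -> P(pregnancy) = {p_pregnancy(w):.3f}")

# Each 1-kg increment multiplies the odds p/(1-p) by exp(b1):
print(f"odds ratio per kg = {math.exp(b1):.3f}")
```

Note that the percentage increases reported in the abstract refer to the probability scale, which varies along the curve, whereas the odds ratio exp(b1) is constant.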
Rodriguez-Galiano, Victor; Mendes, Maria Paula; Garcia-Soldado, Maria Jose; Chica-Olmo, Mario; Ribeiro, Luis
2014-04-01
Watershed management decisions need robust methods that allow accurate predictive modeling of pollutant occurrences. Random Forest (RF) is a powerful machine learning, data-driven method that is rarely used in water resources studies and thus has not been evaluated thoroughly in this field compared to more conventional pattern recognition techniques. Key advantages of RF include its non-parametric nature, high predictive accuracy, and capability to determine variable importance. This last characteristic can be used to better understand the individual role and the combined effect of explanatory variables in both protecting and exposing groundwater from and to a pollutant. In this paper, the performance of RF regression for predictive modeling of nitrate pollution is explored, based on intrinsic and specific vulnerability assessment of the Vega de Granada aquifer. The applicability of this machine learning technique is demonstrated in an agriculture-dominated area where nitrate concentrations in groundwater can exceed the trigger value of 50 mg/L at many locations. A comprehensive GIS database of twenty-four parameters related to intrinsic hydrogeologic properties, driving forces, remotely sensed variables, and physical-chemical variables measured in situ was used as input to build different predictive models of nitrate pollution. RF measures of importance were also used to define the most significant predictors of nitrate pollution in groundwater, allowing the establishment of the pollution sources (pressures). The potential of RF for generating a vulnerability map to nitrate pollution is assessed considering multiple criteria related to variations in the algorithm parameters and the accuracy of the maps. The performance of RF is also evaluated in comparison to the logistic regression (LR) method using different efficiency measures to ensure their generalization ability. Prediction results show the ability of RF to build accurate models
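The variable-importance idea that RF popularized is often implemented by permutation: shuffle one predictor and measure the increase in prediction error. The sketch below applies that idea to a stand-in fitted model on toy data (the forest itself is omitted for brevity; all names, data, and the linear stand-in model are illustrative assumptions):

```python
import random

# Toy data: y depends strongly on x1, weakly on x2, and not at all on x3.
random.seed(1)
data = [(random.random(), random.random(), random.random()) for _ in range(500)]
y = [3 * x1 + 0.3 * x2 + random.gauss(0, 0.05) for x1, x2, x3 in data]

def mse(model, rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

# Stand-in for a fitted model (in practice, a trained random forest).
model = lambda r: 3 * r[0] + 0.3 * r[1]

base = mse(model, data, y)
importances = {}
for j, name in enumerate(("x1", "x2", "x3")):
    # Permute column j and record how much the prediction error rises.
    shuffled = [list(r) for r in data]
    col = [r[j] for r in shuffled]
    random.shuffle(col)
    for r, v in zip(shuffled, col):
        r[j] = v
    importances[name] = mse(model, shuffled, y) - base
    print(f"permutation importance of {name}: {importances[name]:.4f}")
```

As expected, the strongly used predictor dominates, the weak one contributes a little, and the irrelevant one scores essentially zero.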
Comparing coefficients of nested nonlinear probability models
DEFF Research Database (Denmark)
Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders
2011-01-01
In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements it. The KHB method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability...
Haavelmo's Probability Approach and the Cointegrated VAR
DEFF Research Database (Denmark)
Juselius, Katarina
dependent residuals, normalization, reduced rank, model selection, missing variables, simultaneity, autonomy, and identification. Specifically, the paper discusses (1) the conditions under which the VAR model represents a full probability formulation of a sample of time-series observations, (2...
Probabilities for Solar Siblings
Valtonen, Mauri; Bajkova, A. T.; Bobylev, V. V.; Mylläri, A.
2015-02-01
We have shown previously (Bobylev et al. Astron Lett 37:550-562, 2011) that some of the stars in the solar neighborhood today may have originated in the same star cluster as the Sun, and could thus be called Solar Siblings. In this work we investigate the sensitivity of this result to galactic models and to parameters of these models, and also extend the sample of orbits. There are a number of good candidates for the sibling category, but due to the long period of orbit evolution since the break-up of the birth cluster of the Sun, one can only attach probabilities of membership. We find that up to 10 % (but more likely around 1 %) of the members of the Sun's birth cluster could be still found within 100 pc from the Sun today.
Measure, integral and probability
Capiński, Marek
2004-01-01
Measure, Integral and Probability is a gentle introduction that makes measure and integration theory accessible to the average third-year undergraduate student. The ideas are developed at an easy pace in a form that is suitable for self-study, with an emphasis on clear explanations and concrete examples rather than abstract theory. For this second edition, the text has been thoroughly revised and expanded. New features include: · a substantial new chapter, featuring a constructive proof of the Radon-Nikodym theorem, an analysis of the structure of Lebesgue-Stieltjes measures, the Hahn-Jordan decomposition, and a brief introduction to martingales · key aspects of financial modelling, including the Black-Scholes formula, discussed briefly from a measure-theoretical perspective to help the reader understand the underlying mathematical framework. In addition, further exercises and examples are provided to encourage the reader to become directly involved with the material.
Directory of Open Access Journals (Sweden)
J. M. MUNUERA
1964-06-01
The material included in two former papers (SB and EF), which sums 3307 shocks over 2360 years, up to 1960, was reduced to a 50-year period by means of the weight obtained for each epoch. The weighting factor is the ratio of 50 to the number of years in each epoch.
The frequency has been referred to grade VII of the international seismic intensity scale for all cases in which the earthquakes are equal to or greater than VI and up to IX. The sum of the products of the frequencies and the parameters previously described is the probable frequency expected for the 50-year period.
For each active small square we have made the corresponding computation, and so we have drawn Map No. 1, in percentages. The epicenters with intensity from X to XI are plotted in Map No. 2 to provide complementary information.
A table shows the return periods obtained for all data (VII to XI), and after checking them against others computed from the first shock to the last, a list includes the approximate probable return periods estimated for the area.
The solution we suggest is an appropriate form for expressing the seismic contingency phenomenon, and it improves on the conventional maps showing the equal-intensity curves corresponding to the maximal values of a given site.
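The reduction described above (scaling each epoch's shock count by the ratio of 50 to its span in years, then summing the products to obtain the expected 50-year frequency) can be sketched as follows; the epoch spans and counts are invented placeholders, not the paper's data:

```python
# Epochs: (number of years covered, observed shocks of intensity >= VII)
epochs = [(300, 42), (800, 95), (1260, 160)]  # illustrative counts only

expected_50yr = 0.0
for years, shocks in epochs:
    weight = 50.0 / years          # reduce each epoch to a 50-year basis
    expected_50yr += shocks * weight

# Return period: average years between qualifying shocks.
return_period = 50.0 / expected_50yr
print(f"expected shocks per 50 years: {expected_50yr:.2f}")
print(f"approximate return period: {return_period:.2f} years")
```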
Solution Methods for Structures with Random Properties Subject to Random Excitation
DEFF Research Database (Denmark)
Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.
This paper deals with the lower-order statistical moments of the response of structures with random stiffness and random damping properties subject to random excitation. The arising stochastic differential equations (SDE) with random coefficients are solved by two methods: a second-order perturbation approach and a Markovian method. The second-order perturbation approach is grounded on the total probability theorem and can be compactly written. Moreover, the problem to be solved is independent of the dimension of the random variables involved. The Markovian approach suggests transforming the SDE with random coefficients and deterministic initial conditions into an equivalent nonlinear SDE with deterministic coefficients and random initial conditions. In both methods, the statistical moment equations are used. The hierarchy of statistical moments in the Markovian approach is closed...
Directory of Open Access Journals (Sweden)
Fabrizio Maturo
2016-06-01
In practical applications relating to business and management sciences, there are many variables that, by their own nature, are better described by a pair of ordered values (e.g., financial data). By summarizing such a measurement with a single value, there is a loss of information; thus, in these situations, data are better described by interval values rather than by single values. Interval arithmetic studies and analyzes this type of imprecision; however, if the intervals have no sharp boundaries, fuzzy set theory is the most suitable instrument. Moreover, fuzzy regression models are able to overcome some typical limitations of classical regression because they do not need the same strong assumptions. In this paper, we present a review of the main methods introduced in the literature on this topic and introduce some recent developments regarding the concept of randomness in fuzzy regression.
Yamamoto, Shuu'ichirou; Shuto, Yusuke; Sugahara, Satoshi
2010-04-01
In this paper, we present a variable-transconductance (gm) metal-oxide-semiconductor field-effect-transistor (VGm-MOSFET) architecture using a nonpolar resistive switching device (RSD) for nonvolatile bistable circuit applications. The architecture can be achieved by connecting an RSD to the source terminal of an ordinary MOSFET. The current drive capability of the VGm-MOSFET can be modified by resistance states of the connected RSD, which is a very useful function for nonvolatile bistable circuits, such as nonvolatile static random access memory (NV-SRAM) and nonvolatile flip-flop (NV-FF). NV-SRAM can be easily configured by connecting two additional VGm-MOSFETs to the storage nodes of a standard SRAM cell. Using our developed SPICE macromodel for nonpolar RSDs, successful circuit operations of the proposed NV-SRAM cell were confirmed.
Energy Technology Data Exchange (ETDEWEB)
Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston
2011-01-01
This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that the duration can be characterized by a hazard model. By generating random flow breakdowns at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
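The two ingredients, a breakdown probability that grows with flow rate and a duration drawn from a hazard model, can be sketched as follows (the logistic form for the probability and the Weibull hazard parameters are illustrative assumptions, not the paper's calibration):

```python
import math, random

random.seed(7)

def breakdown_prob(flow_vph):
    # Probability of flow breakdown as an increasing function of flow rate
    # (logistic form; coefficients are illustrative, not calibrated).
    return 1.0 / (1.0 + math.exp(-(flow_vph - 2000.0) / 150.0))

def breakdown_duration_min(scale=15.0, shape=1.3):
    # Duration drawn by inverse-transform sampling from a Weibull
    # hazard model (illustrative parameters).
    return scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)

# Simulate many intervals: breakdown occurs with flow-dependent probability,
# and each breakdown gets a random duration.
durations = []
for _ in range(10000):
    flow = random.uniform(1500, 2500)
    if random.random() < breakdown_prob(flow):
        durations.append(breakdown_duration_min())
print(f"breakdowns: {len(durations)}, "
      f"mean duration {sum(durations) / len(durations):.1f} min")
```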
Menezes, Regina; Rodriguez-Mateos, Ana; Kaltsatou, Antonia; González-Sarrías, Antonio; Greyling, Arno; Giannaki, Christoforos; Andres-Lacueva, Cristina; Milenkovic, Dragan; Gibney, Eileen R.; Dumont, Julie; Schär, Manuel; Garcia-Aloy, Mar; Palma-Duran, Susana Alejandra; Ruskovska, Tatjana; Maksimova, Viktorija; Combet, Emilie; Pinto, Paula
2017-01-01
Several epidemiological studies have linked flavonols with decreased risk of cardiovascular disease (CVD). However, some heterogeneity in the individual physiological responses to the consumption of these compounds has been identified. This meta-analysis aimed to study the effect of flavonol supplementation on biomarkers of CVD risk such as blood lipids, blood pressure, and plasma glucose, as well as factors affecting their inter-individual variability. Data from 18 human randomized controlled trials were pooled and the effect was estimated using fixed or random effects meta-analysis models and reported as difference in means (DM). Variability in the response of blood lipids to supplementation with flavonols was assessed by stratifying various population subgroups: age, sex, country, and health status. Results showed significant reductions in total cholesterol (DM = −0.10 mmol/L; 95% CI: −0.20, −0.01), LDL cholesterol (DM = −0.14 mmol/L; 95% CI: −0.21, −0.07), and triacylglycerol (DM = −0.10 mmol/L; 95% CI: −0.18, −0.03), and a significant increase in HDL cholesterol (DM = 0.05 mmol/L; 95% CI: 0.02, 0.07). A significant reduction was also observed in fasting plasma glucose (DM = −0.18 mmol/L; 95% CI: −0.29, −0.08), and in blood pressure (SBP: DM = −4.84 mmHg; 95% CI: −5.64, −4.04; DBP: DM = −3.32 mmHg; 95% CI: −4.09, −2.55). Subgroup analysis showed a more pronounced effect of flavonol intake in participants from Asian countries and in participants with diagnosed disease or dyslipidemia, compared to those with healthy and normal baseline values. In conclusion, flavonol consumption improved biomarkers of CVD risk; however, country of origin and health status may influence the effect of flavonol intake on blood lipid levels. PMID:28208791
Directory of Open Access Journals (Sweden)
Nuria eRuffini
2015-08-01
Context: Heart rate variability (HRV) indicates how heart rate changes in response to inner and external stimuli. HRV is linked to health status and is an indirect marker of autonomic nervous system (ANS) function. Objective: To investigate the influence of osteopathic manipulative treatment (OMT) on ANS activity through changes in High Frequency power, a heart rate variability index indicating parasympathetic activity, in healthy subjects, compared with sham therapy and a control group. Methods: Sixty-six healthy subjects, both male and female, were included in the present 3-armed, randomized, placebo-controlled, within-subject, cross-over, single-blinded study. Participants were asymptomatic adults, both smokers and non-smokers, not on medications. At enrollment, subjects were randomized into 3 groups: A, B, and C. A standardized structural evaluation followed by a patient need-based osteopathic treatment was performed in the first session of group A and in the second session of group B. A standardized evaluation followed by a protocoled sham treatment was provided in the second session of group A and in the first session of group B. No intervention was performed in the two sessions of group C, which acted as a time control. The trial was registered on clinicaltrials.gov, identifier NCT01908920. Main Outcome Measures: HRV was calculated from electrocardiography before, during, and after the intervention, for a total time of 25 minutes. Results: OMT engendered a statistically significant increase of parasympathetic activity, as shown by the High Frequency rate (p<0.001), and a decrease of sympathetic activity, as revealed by the Low Frequency rate (p<0.01); results also showed a reduction of the Low Frequency/High Frequency ratio (p<0.001) and of the detrended fluctuation scaling exponent (p<0.05). Conclusions: Findings suggested that OMT can influence ANS activity, increasing parasympathetic function and decreasing sympathetic activity, compared to sham therapy and a control group.
Impact of controlling the sum of error probability in the sequential probability ratio test
Directory of Open Access Journals (Sweden)
Bijoy Kumarr Pradhan
2013-05-01
A generalized modified method is proposed to control the sum of error probabilities in the sequential probability ratio test, minimizing the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of error probabilities is a pre-assigned constant, in order to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed-sample-size procedure. The results are applied to the cases in which the random variate follows a normal law as well as a Bernoulli law.
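For reference, Wald's classical SPRT for a Bernoulli parameter, with stopping thresholds set from the target error probabilities, can be sketched as follows (this is the plain SPRT, not the modified procedure proposed in the paper):

```python
import math, random

def sprt_bernoulli(p0, p1, alpha, beta, sample, rng):
    # Wald's sequential probability ratio test for H0: p=p0 vs H1: p=p1.
    # Thresholds use Wald's approximations from the target error probabilities.
    lo, hi = math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)
    llr, n = 0.0, 0
    while lo < llr < hi:
        x = sample(rng)
        n += 1
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
    return ("accept H1" if llr >= hi else "accept H0"), n

rng = random.Random(3)
# Data generated under H0 (p = 0.3); the test should usually accept H0.
decisions = [sprt_bernoulli(0.3, 0.5, 0.05, 0.05,
                            lambda r: r.random() < 0.3, rng)
             for _ in range(200)]
accept_h0 = sum(d == "accept H0" for d, n in decisions)
avg_n = sum(n for d, n in decisions) / len(decisions)
print(f"H0 accepted in {accept_h0}/200 runs, average sample number {avg_n:.1f}")
```

The average sample number printed here is the quantity whose weighted average the paper's modified procedure seeks to minimize under the error-sum constraint.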
Fifty challenging problems in probability with solutions
Mosteller, Frederick
1987-01-01
Can you solve the problem of "The Unfair Subway"? Marvin gets off work at random times between 3 and 5 p.m. His mother lives uptown, his girlfriend downtown. He takes the first subway that comes in either direction and eats dinner with the one he is delivered to. His mother complains that he never comes to see her, but he says she has a 50-50 chance. He has had dinner with her twice in the last 20 working days. Explain. Marvin's adventures in probability are one of the fifty intriguing puzzles that illustrate both elementary and advanced aspects of probability, each problem designed to chall
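The resolution of "The Unfair Subway" is that equally frequent trains with a fixed offset between them are not equally likely to arrive first. A simulation sketch, with a headway and offset chosen to reproduce the roughly 1-in-10 visit rate in the story (the specific schedule is an assumption; the puzzle only requires some fixed offset):

```python
import random

random.seed(42)
HEADWAY = 10.0   # both lines run every 10 minutes
OFFSET = 1.0     # the uptown train arrives 1 minute after the downtown one

mother = 0
TRIALS = 100000
for _ in range(TRIALS):
    t = random.uniform(0, HEADWAY)    # Marvin arrives at a random time
    wait_down = (0.0 - t) % HEADWAY   # downtown trains at 0, 10, 20, ...
    wait_up = (OFFSET - t) % HEADWAY  # uptown trains at 1, 11, 21, ...
    if wait_up < wait_down:
        mother += 1                   # uptown = mother
print(f"P(dinner with mother) ~ {mother / TRIALS:.3f}")
```

The uptown train arrives first only when Marvin shows up inside the 1-minute window after a downtown departure, so the mother's probability is 1/10, matching her two dinners in 20 days.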
Return probability: Exponential versus Gaussian decay
Energy Technology Data Exchange (ETDEWEB)
Izrailev, F.M. [Instituto de Fisica, BUAP, Apdo. Postal J-48, 72570 Puebla (Mexico)]. E-mail: izrailev@sirio.ifuap.buap.mx; Castaneda-Mendoza, A. [Instituto de Fisica, BUAP, Apdo. Postal J-48, 72570 Puebla (Mexico)
2006-02-13
We analyze, both analytically and numerically, the time dependence of the return probability in closed systems of interacting particles. Main attention is paid to the interplay between two regimes: one characterized by the Gaussian decay of the return probability, and the other the well-known regime of exponential decay. Our analytical estimates are confirmed by the numerical data obtained for two models with random interaction. In view of these results, we also briefly discuss the dynamical model which was recently proposed for the implementation of a quantum computation.
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
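A zero-inflated Beta variable takes the value zero with some probability and otherwise follows a Beta distribution. A minimal sampling sketch (the mixing weight and Beta parameters are illustrative, not fitted to conjunction data):

```python
import random

random.seed(0)

def sample_zib(pi0, a, b):
    # Zero-inflated Beta: value 0 with probability pi0, else Beta(a, b).
    return 0.0 if random.random() < pi0 else random.betavariate(a, b)

# Illustrative parameters: a point mass at zero plus a Beta(2, 5) body.
draws = [sample_zib(0.3, 2.0, 5.0) for _ in range(50000)]
zero_frac = sum(d == 0.0 for d in draws) / len(draws)
nonzero = [d for d in draws if d > 0.0]
mean_nonzero = sum(nonzero) / len(nonzero)
print(f"zero fraction ~ {zero_frac:.3f} (target 0.3)")
print(f"mean of nonzero part ~ {mean_nonzero:.3f} "
      f"(Beta mean a/(a+b) = {2 / 7:.3f})")
```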
Kojo, Takao; Ae, Ryusuke; Tsuboi, Satoshi; Nakamura, Yosikazu; Kitamura, Kunio
2017-03-01
This study analyzes differentials in the variables associated with the experience of artificial abortion (abortion) and use of contraception by age among women in Japan. The 2010 National Lifestyle and Attitudes Towards Sexual Behavior Survey was distributed to 2693 men and women aged 16-49 selected from the Japanese population using a two-stage random sampling procedure. From the 1540 respondents, we selected 700 women who reported having had sexual intercourse at least once. We used logistic regression to analyze how social and demographic factors were associated with the experience of abortion and contraceptive use. The abortion rate according to the survey was 19.3%. Of the 700 women in the sample, 6.9% had experienced two or more abortions. Logistic regression revealed that, although significant variables depended on age, a high level of education and discussions about contraceptive use with partners were negatively associated with the experience of abortion. Self-injury, approval of abortion and first sexual intercourse between the age of 10 and 19 were positively associated with the experience of abortion. Marriage, smoking and first sexual intercourse between the age of 10 and 19 were negatively associated with contraceptive use. Higher education and discussion of contraception with partners were positively associated with contraceptive use. To prevent unwanted pregnancy and abortion, social support and sexual education should be age-appropriate. It is vital to educate young people of the importance of discussing contraceptive use with their partners. © 2016 Japan Society of Obstetrics and Gynecology.
On the satisfiability of random regular signed SAT formulas
Laus, Christian
2011-01-01
Regular signed SAT is a variant of the well-known satisfiability problem in which the variables can take values in a fixed set V \subset [0,1], and the 'literals' have the form "x \le a" or "x \ge a". We answer some open questions regarding random regular signed k-SAT formulas: the probability that a random formula is satisfiable increases with |V|; there is a constant upper bound on the ratio m/n of clauses m to variables n beyond which a random formula is asymptotically almost never satisfiable; for k=2 and V=[0,1], there is a phase transition at m/n=2.
Renard, Eric; Dubois-Laforgue, Danièle; Guerci, Bruno
2011-12-01
This study compared the effects of insulin glargine and insulin detemir on blood glucose variability under clinical practice conditions in patients with type 1 diabetes (T1D) using glulisine as the mealtime insulin. This was a multicenter, crossover trial in 88 randomized T1D patients: 54 men and 34 women, 46.8±13.7 years old, with a duration of diabetes of 18±9 years and hemoglobin A1c level of 7.1±0.7%. The per-protocol population included 78 patients: 44 received glargine/detemir and 34 detemir/glargine in the first/second 16-week period, respectively. The primary end point was the coefficient of variation (CV) of fasting blood glucose (FBG). Secondary end points included variability of pre-dinner blood glucose, mean amplitude of glycemic excursions, mean of daily differences, and doses and number of daily insulin injections. The non-inferiority criterion was an insulin glargine/insulin detemir FBG CV ratio with a 95% confidence interval (CI) upper limit ≤1.25. The non-inferiority criterion was satisfied with a mean value of 1.016 (95% CI=0.970-1.065). Intention-to-treat analysis confirmed the non-inferiority with a 95% CI upper limit=1.062. No significant differences were found on secondary objectives, but there was a trend to higher doses and number of daily injections with insulin detemir. A total of eight (four glargine and four detemir) patients reported nine serious adverse events (including one severe episode of hypoglycemia). None of them was considered as related to basal insulins. Serious adverse events led to treatment discontinuation in two patients of the detemir group and none in the glargine group. In T1D patients under clinical practice conditions, insulin glargine was non-inferior to insulin detemir regarding blood glucose variability, as assessed by CV of FBG.
Al-Qahtani, Fawaz S.
2011-09-01
In this paper, we investigate the outage performance of dual-hop relaying systems with partial relay selection and feedback delay. The analysis considers the case of Rayleigh fading channels when the relaying station as well as the destination undergo mutually independent interfering signals. In particular, we derive the cumulative distribution function (c.d.f.) of a new type of random variable involving the sum of multiple independent exponential random variables, based on which we present closed-form expressions for the exact outage probability of fixed amplify-and-forward (AF) and decode-and-forward (DF) relaying protocols. Numerical results are provided to illustrate the joint effect of the delayed feedback and co-channel interference on the outage probability. © 2011 IEEE.
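In the special case of i.i.d. exponential summands with a common rate, the c.d.f. of the sum reduces to the Erlang (gamma) c.d.f., which offers a quick Monte Carlo sanity check (the paper derives the more general case; this equal-rate reduction is shown only for illustration):

```python
import math, random

def erlang_cdf(x, n, lam):
    # CDF of the sum of n i.i.d. exponential(lam) random variables:
    # F(x) = 1 - exp(-lam*x) * sum_{k=0}^{n-1} (lam*x)^k / k!
    return 1.0 - math.exp(-lam * x) * sum((lam * x) ** k / math.factorial(k)
                                          for k in range(n))

random.seed(5)
n, lam, x = 3, 0.5, 8.0
emp = sum(sum(random.expovariate(lam) for _ in range(n)) <= x
          for _ in range(100000)) / 100000
print(f"Erlang CDF {erlang_cdf(x, n, lam):.4f} vs Monte Carlo {emp:.4f}")
```

With distinct rates, as in the interference model above, the sum instead follows a hypoexponential distribution whose c.d.f. is a weighted mixture of exponential terms.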
Quantization of Prior Probabilities for Hypothesis Testing
Varshney, Kush R.; Varshney, Lav R.
2008-01-01
Bayesian hypothesis testing is investigated when the prior probabilities of the hypotheses, taken as a random vector, are quantized. Nearest neighbor and centroid conditions are derived using mean Bayes risk error as a distortion measure for quantization. A high-resolution approximation to the distortion-rate function is also obtained. Human decision making in segregated populations is studied assuming Bayesian hypothesis testing with quantized priors.
Calculating the Probability of Returning a Loan with Binary Probability Models
Directory of Open Access Journals (Sweden)
Julian Vasilev
2014-12-01
The purpose of this article is to give a new approach to calculating the probability of returning a loan. Many factors affect the value of this probability, and several influencing factors are identified here using statistical and econometric models. The main approach applies probit and logit models in loan management institutions, giving a new aspect to credit risk analysis. Calculating the probability of returning a loan is a difficult task. We assume that specific data fields concerning the contract (month of signing, year of signing, given sum) and data fields concerning the borrower (month of birth, year of birth (age), gender, region where he/she lives) may be independent variables in a binary logistic model with the dependent variable "the probability of returning a loan". It is shown that the month of signing a contract, the year of signing a contract, and the gender and age of the loan owner do not affect the probability of returning a loan. It is shown that the probability of returning a loan depends on the sum of the contract, the remoteness of the loan owner, and the month of birth. The probability of returning a loan increases with the given sum, decreases with the proximity of the customer, increases for people born at the beginning of the year, and decreases for people born at the end of the year.
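Once a binary logit model is fitted, the predicted repayment probability follows from the logistic link; a minimal sketch with purely illustrative coefficients (not the article's fitted values):

```python
import math

def logit_probability(coeffs, x):
    """Probability of returning a loan from a fitted binary logit model:
    p = 1 / (1 + exp(-(b0 + b1*x1 + ... + bk*xk)))."""
    z = coeffs[0] + sum(b * xi for b, xi in zip(coeffs[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients: intercept, loan sum (thousands),
# remoteness (km), birth month. Signs loosely follow the article's
# findings; the magnitudes are made up.
coeffs = [-1.0, 0.02, 0.01, -0.05]
p = logit_probability(coeffs, [50.0, 120.0, 3.0])
print(0.0 < p < 1.0)  # True
```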
Falk, Ruma; Kendig, Keith
2013-01-01
Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.
Directory of Open Access Journals (Sweden)
Bogdan Ozga-Zielinski
2016-06-01
New hydrological insights for the region: The results indicated that the 2D normal probability distribution model gives a better probabilistic description of snowmelt floods characterized by the 2-dimensional random variable (Qmax,f, Vf) compared to the elliptical Gaussian copula and the Archimedean 1-parameter Gumbel–Hougaard copula models, in particular from the viewpoint of probability of exceedance as well as complexity and time of computation. Nevertheless, the copula approach offers a new perspective for estimating the 2D probability distribution of multidimensional random variables. Results showed that the 2D model for snowmelt floods built using the Gumbel–Hougaard copula is much better than the model built using the Gaussian copula.
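The Gumbel–Hougaard copula compared above has a simple closed form; a minimal sketch with illustrative parameter values (not the fitted flood model), including the joint exceedance probability the study focuses on:

```python
import math

def gumbel_hougaard(u, v, theta):
    """Gumbel-Hougaard copula, theta >= 1:
    C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)).
    theta = 1 reduces to independence, C(u, v) = u * v."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-(s ** (1.0 / theta)))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) = 1 - u - v + C(u, v): probability that both
    flood peak and volume exceed their marginal quantiles u and v."""
    return 1.0 - u - v + gumbel_hougaard(u, v, theta)

# Independence check: theta = 1 gives C(0.3, 0.7) = 0.3 * 0.7 = 0.21.
print(round(gumbel_hougaard(0.3, 0.7, 1.0), 10))  # 0.21
```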
Ruffini, Nuria; D'Alessandro, Giandomenico; Mariani, Nicolò; Pollastrelli, Alberto; Cardinali, Lucia; Cerritelli, Francesco
2015-01-01
Heart Rate Variability (HRV) indicates how heart rate changes in response to inner and external stimuli. HRV is linked to health status and it is an indirect marker of the autonomic nervous system (ANS) function. To investigate the influence of osteopathic manipulative treatment (OMT) on cardiac autonomic modulation in healthy subjects, compared with sham therapy and control group. Sixty-six healthy subjects, both male and female, were included in the present 3-armed randomized placebo controlled within subject cross-over single blinded study. Participants were asymptomatic adults (26.7 ± 8.4 y, 51% male, BMI 18.5 ± 4.8), both smokers and non-smokers and not on medications. At enrollment subjects were randomized in three groups: A, B, C. Standardized structural evaluation followed by a patient need-based osteopathic treatment was performed in the first session of group A and in the second session of group B. Standardized evaluation followed by a protocoled sham treatment was provided in the second session of group A and in the first session of group B. No intervention was performed in the two sessions of group C, acting as a time-control. The trial was registered on clinicaltrials.gov identifier: NCT01908920. HRV was calculated from electrocardiography before, during and after the intervention, for a total time of 25 min, considering frequency domain as well as linear and non-linear methods as outcome measures. OMT engendered a statistically significant increase of parasympathetic activity, as shown by High Frequency power (p < 0.001), expressed in normalized and absolute units, and a possible decrease of sympathetic activity, as revealed by Low Frequency power (p < 0.01); results also showed a reduction of the Low Frequency/High Frequency ratio (p < 0.001) and of the detrended fluctuation scaling exponent (p < 0.05). Findings suggested that OMT can influence ANS activity, increasing parasympathetic function and decreasing sympathetic activity, compared with sham therapy and control group.
Diaz, Keith M; Muntner, Paul; Levitan, Emily B; Brown, Michael D; Babbitt, Dianne M; Shimbo, Daichi
2014-04-01
As evidence suggests visit-to-visit variability (VVV) of blood pressure (BP) is associated with cardiovascular events and mortality, there is increasing interest in identifying interventions that reduce VVV of BP. We investigated the effects of weight loss and sodium reduction, alone or in combination, on VVV of BP in participants enrolled in phase II of the Trials of Hypertension Prevention. BP readings were taken at 6-month intervals for 36 months in 1820 participants with high-normal DBP who were randomized to weight loss, sodium reduction, combination (weight loss and sodium reduction), or usual care groups. VVV of BP was defined as the SD of BP across six follow-up visits. VVV of SBP was not significantly different between participants randomized to the weight loss (7.2 ± 3.1 mmHg), sodium reduction (7.1 ± 3.0 mmHg), or combined (6.9 ± 2.9 mmHg) intervention groups vs. the usual care group (6.9 ± 2.9 mmHg). In a fully adjusted model, no difference (0.0 ± 0.2 mmHg) in VVV of SBP was present between individuals who successfully maintained their weight loss vs. individuals who did not lose weight during follow-up (P = 0.93). Also, those who maintained a reduced sodium intake throughout follow-up did not have lower VVV of SBP compared to those who did not reduce their sodium intake (0.1 ± 0.3 mmHg; P = 0.77). Results were similar for VVV of DBP. These findings suggest that weight loss and sodium reduction may not be effective interventions for lowering VVV of BP in individuals with high-normal DBP.
Smith, S Abigail; Burton, Samantha L; Kilembe, William; Lakhi, Shabir; Karita, Etienne; Price, Matt; Allen, Susan; Hunter, Eric; Derdeyn, Cynthia A
2016-11-01
A recent study of plasma neutralization breadth in HIV-1 infected individuals at nine International AIDS Vaccine Initiative (IAVI) sites reported that viral load, HLA-A*03 genotype, and subtype C infection were strongly associated with the development of neutralization breadth. Here, we refine the findings of that study by analyzing the impact of the transmitted/founder (T/F) envelope (Env), early Env diversification, and autologous neutralization on the development of plasma neutralization breadth in 21 participants identified during recent infection at two of those sites: Kigali, Rwanda (n = 9) and Lusaka, Zambia (n = 12). Single-genome analysis of full-length T/F Env sequences revealed that all 21 individuals were infected with a highly homogeneous population of viral variants, which were categorized as subtype C (n = 12), A1 (n = 7), or recombinant AC (n = 2). An extensive amino acid sequence-based analysis of variable loop lengths and glycosylation patterns in the T/F Envs revealed that a lower ratio of NXS to NXT-encoded glycan motifs correlated with neutralization breadth. Further analysis comparing amino acid sequence changes, insertions/deletions, and glycan motif alterations between the T/F Env and autologous early Env variants revealed that extensive diversification focused in the V2, V4, and V5 regions of gp120, accompanied by contemporaneous viral escape, significantly favored the development of breadth. These results suggest that more efficient glycosylation of subtype A and C T/F Envs through fewer NXS-encoded glycan sites is more likely to elicit antibodies that can transition from autologous to heterologous neutralizing activity following exposure to gp120 diversification. This initiates an Env-antibody co-evolution cycle that increases neutralization breadth, and is further augmented over time by additional viral and host factors. These findings suggest that understanding how variation in the efficiency of site-specific glycosylation influences
Visualizing and Understanding Probability and Statistics: Graphical Simulations Using Excel
Gordon, Sheldon P.; Gordon, Florence S.
2009-01-01
The authors describe a collection of dynamic interactive simulations for teaching and learning most of the important ideas and techniques of introductory statistics and probability. The modules cover such topics as randomness, simulations of probability experiments such as coin flipping, dice rolling and general binomial experiments, a simulation…
Feldthusen, Caroline; Dean, Elizabeth; Forsblad-d'Elia, Helena; Mannerkorpi, Kaisa
2016-01-01
To examine effects of person-centered physical therapy on fatigue and related variables in persons with rheumatoid arthritis (RA). Randomized controlled trial. Hospital outpatient rheumatology clinic. Persons with RA aged 20 to 65 years (N=70): intervention group (n=36) and reference group (n=34). The 12-week intervention, with 6-month follow-up, focused on partnership between participant and physical therapist and tailored health-enhancing physical activity and balancing life activities. The reference group continued with regular activities; both groups received usual health care. Primary outcome was general fatigue (visual analog scale). Secondary outcomes included multidimensional fatigue (Bristol Rheumatoid Arthritis Fatigue Multi-Dimensional Questionnaire) and fatigue-related variables (ie, disease, health, function). At posttest, general fatigue improved more in the intervention group than the reference group (P=.042). Improvement in median general fatigue reached minimal clinically important differences between and within groups at posttest and follow-up. Improvement was also observed for anxiety (P=.0099), and trends toward improvements were observed for most multidimensional aspects of fatigue (P=.023-.048), leg strength/endurance (P=.024), and physical activity (P=.023). Compared with the reference group at follow-up, the intervention group improvement was observed for leg strength/endurance (P=.001), and the trends toward improvements persisted for physical (P=.041) and living-related (P=.031) aspects of fatigue, physical activity (P=.019), anxiety (P=.015), self-rated health (P=.010), and self-efficacy (P=.046). Person-centered physical therapy focused on health-enhancing physical activity and balancing life activities showed significant benefits on fatigue in persons with RA. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Aras Dicle
2016-03-01
ABSTRACT: Regular physical activity can have long-term effects on the human body. The purpose of this research was to examine the effect of sport rock climbing (SRC) training at 70% HRmax, for one hour a day and three days a week over an eight-week period, on echocardiography (ECHO) and heart rate variability (HRV). A total of 19 adults participated in this study voluntarily. The subjects were randomly divided into two groups: experimental (EG) and control (CG). The EG performed top-rope climbing training for 60 minutes a day, three days a week for 8 weeks and did not join any other physical activity program, while the CG did not train or take part in any physical activity during the course of the study. The same measurements were repeated at the end of the eight weeks. According to the findings, no significant change was observed in any of the ECHO or HRV parameters. However, an improvement was seen in some HRV parameters in the EG [average heart rate (HRave), standard deviation of all NN intervals (SDNN), standard deviation of the averages of NN intervals in all five-minute segments of the entire recording (SDANN), percentage of adjacent NN intervals differing by more than 50 ms (pNN50), square root of the mean of the squared differences between adjacent NN intervals (RMSSD)]. An exercise program based on SRC should last more than eight weeks in order to produce statistically significant improvements in heart structure and function. Keywords: echocardiography, heart rate variability, sport rock climbing
Probability workshop to be better in probability topic
Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed
2015-02-01
The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students in higher education have an effect on their performance. Sixty-two fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance in the probability topic is not related to anxiety level; that is, higher statistics anxiety does not lead to lower scores in probability performance. The study also revealed that students who were motivated by the probability workshop improved their performance in the probability topic compared with before the workshop. In addition, there was a significant difference in performance between genders, with better achievement among female students than male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.
Parr, Evelyn B; Coffey, Vernon G; Cato, Louise E; Phillips, Stuart M; Burke, Louise M; Hawley, John A
2016-05-01
This study determined the effects of 16-week high-dairy-protein, variable-carbohydrate (CHO) diets and exercise training (EXT) on body composition in men and women with overweight/obesity. One hundred and eleven participants (age 47 ± 6 years, body mass 90.9 ± 11.7 kg, BMI 33 ± 4 kg/m(2), values mean ± SD) were randomly stratified to diets with either: high dairy protein, moderate CHO (40% CHO: 30% protein: 30% fat; ∼4 dairy servings); high dairy protein, high CHO (55%: 30%: 15%; ∼4 dairy servings); or control (55%: 15%: 30%; ∼1 dairy serving). Energy restriction (500 kcal/day) was achieved through diet (∼250 kcal/day) and EXT (∼250 kcal/day). Body composition was measured using dual-energy X-ray absorptiometry before, midway, and upon completion of the intervention. Eighty-nine (25 M/64 F) of the participants completed the 16-week intervention, losing 7.7 ± 3.2 kg of fat mass, with no differences in the change in body composition (fat mass or lean mass) between groups. Compared to a healthy control diet, energy-restricted high-protein diets containing different proportions of fat and CHO confer no advantage to weight loss or change in body composition in the presence of an appropriate exercise stimulus. © 2016 The Obesity Society.
Nam, Sung Sik
2017-06-19
Complex wireless transmission systems require multi-dimensional joint statistical techniques for performance evaluation. Here, we first present exact closed-form results on order statistics of arbitrary partial sums of Gamma random variables, with closed-form results for the core functions specialized to independent and identically distributed Nakagami-m fading channels, based on a moment generating function-based unified analytical framework. Neither of these exact closed-form results has previously been published in the literature. In addition, a feasible application example in which our newly derived closed-form results can be applied is presented. In particular, we analyze the outage performance of finger replacement schemes over Nakagami fading channels as an application of our method. Note that these analysis results are directly applicable to several applications, such as millimeter-wave communication systems in which an antenna diversity scheme operates using a finger replacement-like combining scheme, and other fading scenarios. Note also that the statistical results can provide potential solutions for ordered statistics in any other research topic based on Gamma distributions, or in other advanced wireless communications research topics in the presence of Nakagami fading.
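Lacking the paper's closed-form expressions, the quantity involved (partial sums of ordered Gamma variates under Nakagami-m fading) can at least be cross-checked by simulation; a hedged Monte Carlo sketch with illustrative parameters, not the paper's analytical result:

```python
import random

random.seed(42)

def outage_prob(m, n_branches, n_select, threshold, trials=20000):
    """Monte Carlo estimate of outage probability when the n_select
    largest of n_branches i.i.d. Gamma(m, 1/m) branch SNRs (unit mean,
    corresponding to Nakagami-m fading powers) are combined; an outage
    occurs if their sum falls below the threshold."""
    count = 0
    for _ in range(trials):
        snrs = sorted((random.gammavariate(m, 1.0 / m) for _ in range(n_branches)),
                      reverse=True)
        if sum(snrs[:n_select]) < threshold:
            count += 1
    return count / trials

# Illustrative: best 2 of 4 branches, m = 2 fading, threshold 0.5.
p = outage_prob(m=2, n_branches=4, n_select=2, threshold=0.5)
print(0.0 <= p <= 1.0)  # True
```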
Wayne, Peter M; Hausdorff, Jeffrey M; Lough, Matthew; Gow, Brian J; Lipsitz, Lewis; Novak, Vera; Macklin, Eric A; Peng, Chung-Kang; Manor, Brad
2015-01-01
Tai Chi (TC) exercise improves balance and reduces falls in older, health-impaired adults. TC's impact on dual task (DT) gait parameters predictive of falls, especially in healthy active older adults, however, is unknown. To compare differences in usual and DT gait between long-term TC-expert practitioners and age-/gender-matched TC-naïve adults, and to determine the effects of short-term TC training on gait in healthy, non-sedentary older adults. A cross-sectional study compared gait in healthy TC-naïve and TC-expert (24.5 ± 12 years experience) older adults. TC-naïve adults then completed a 6-month, two-arm, wait-list randomized clinical trial of TC training. Gait speed and stride time variability (coefficient of variation, %) were assessed during 90 s trials under undisturbed and cognitive DT (serial subtractions) conditions. During DT, gait speed decreased, and DT stride time variability emerged as a sensitive metric for monitoring TC's impact on fall risk in healthy older adults.
Statistical methods for solar flare probability forecasting
Vecchia, D. F.; Tryon, P. V.; Caldwell, G. A.; Jones, R. W.
1980-09-01
The Space Environment Services Center (SESC) of the National Oceanic and Atmospheric Administration provides probability forecasts of regional solar flare disturbances. This report describes statistical methods useful for obtaining 24-hour solar flare forecasts which, historically, have been subjectively formulated. In Section 1 of this report the flare classifications of the SESC and the particular probability forecasts to be considered are defined. In Section 2 we describe the solar flare data base and outline general principles for effective data management. Three statistical techniques for solar flare probability forecasting are discussed in Section 3, viz., discriminant analysis, logistic regression, and multiple linear regression. We also review two scoring measures and suggest the logistic regression approach for obtaining 24-hour forecasts. In Section 4 a heuristic procedure is used to select nine basic predictors from the many available explanatory variables. Using these nine variables, logistic regression is demonstrated by example in Section 5. We conclude in Section 6 with broad suggestions regarding continued development of objective methods for solar flare probability forecasting.
Probability and Statistics The Science of Uncertainty (Revised Edition)
Tabak, John
2011-01-01
Probability and Statistics, Revised Edition deals with the history of probability, describing the modern concept of randomness and examining "pre-probabilistic" ideas of what most people today would characterize as randomness. This revised book documents some historically important early uses of probability to illustrate some very important probabilistic questions. It goes on to explore statistics and the generations of mathematicians and non-mathematicians who began to address problems in statistical analysis, including the statistical structure of data sets as well as the theory of
Entanglement probabilities of polymers: a white noise functional approach
Bernido, C C
2003-01-01
The entanglement probabilities for a highly flexible polymer to wind n times around a straight polymer are evaluated using white noise analysis. To introduce the white noise functional approach, the one-dimensional random walk problem is taken as an example. The polymer entanglement scenario, viewed as a random walk on a plane, is then treated and the entanglement probabilities are obtained for a magnetic flux confined along the straight polymer, and for a case where an entangled polymer is subjected to the potential V = ḟ(s)θ. In the absence of the magnetic flux and the potential V, the entanglement probabilities reduce to a result obtained by Wiegel.
Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas
2014-07-01
Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next, we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
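The k-NN probability estimator the abstract describes can be sketched in a few lines; the toy 1-D data below is purely illustrative:

```python
def knn_probability(train, query, k):
    """Estimate P(Y = 1 | X = query) as the fraction of positive labels
    among the k nearest training points (1-D Euclidean distance) -- the
    nonparametric class-probability estimate discussed in the paper."""
    neighbors = sorted(train, key=lambda xy: abs(xy[0] - query))[:k]
    return sum(y for _, y in neighbors) / k

# Toy 1-D sample: class 1 concentrated at larger x values.
train = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
print(knn_probability(train, 0.85, k=3))  # neighbors 0.8, 0.9, 0.7 -> 1.0
```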
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
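The single-integral form with trapezoidal integration can be sketched as follows. Normal distributions are used here purely for illustration (with made-up friction parameters), noting the authors' own caution that the friction distributions cannot automatically be assumed normal:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def slip_probability(mu_req, sd_req, mu_avail, sd_avail, lo=-1.0, hi=2.0, n=2000):
    """P(slip) = P(required friction > available friction)
               = integral of f_avail(x) * (1 - F_req(x)) dx,
    evaluated with the trapezoidal rule (the single-integral form)."""
    h = (hi - lo) / n
    xs = [lo + i * h for i in range(n + 1)]
    ys = [normal_pdf(x, mu_avail, sd_avail) * (1 - normal_cdf(x, mu_req, sd_req))
          for x in xs]
    return h * (ys[0] / 2 + sum(ys[1:-1]) + ys[-1] / 2)

# With identical required/available distributions, P(slip) = 1/2 by symmetry.
print(round(slip_probability(0.5, 0.1, 0.5, 0.1), 3))  # 0.5
```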
Probability and statistics: selected problems
Machado, J.A. Tenreiro; Pinto, Carla M. A.
2014-01-01
Probability and Statistics—Selected Problems is a unique book for senior undergraduate and graduate students to fast review basic materials in Probability and Statistics. Descriptive statistics are presented first, and probability is reviewed secondly. Discrete and continuous distributions are presented. Sample and estimation with hypothesis testing are presented in the last two chapters. The solutions for proposed excises are listed for readers to references.
Updating the Jesness Inventory randomness validity scales for the Jesness Inventory-Revised.
Pinsoneault, Terry B
2006-04-01
I updated 2 previously developed randomness scales for the Jesness Inventory (Jesness, 1983)-the Jesness Variable Response Inconsistency scale (J-VRIN) and the Variable Response scale (J-VR)-for the Jesness Inventory-Revised (Jesness-R; Jesness, 2003). I investigated efficacies for those 2 scales and a 3rd randomness scale described in the Jesness-R manual, the Randomness scale (J-RR), by comparing 76 protocols of delinquents, ages 13 to 17 years, screened for probable randomness with a matched-pair MMPI-Adolescent (Butcher et al., 1992) or a Millon Adolescent Clinical Inventory (Millon, Millon, & Davis, 1993), with 100 all-random protocols, and 40 partially random protocols. J-VRIN and J-VR in conjunction successfully detected 98% of the all-random protocols and 83% of the partially random protocols. J-RR successfully detected 19% and 10%, respectively. I report predictive power and overall effectiveness for base rates of .10 and .20.
Amstadter, Ananda B; Richardson, Lisa; Meyer, Alicia; Sawyer, Genelle; Kilpatrick, Dean G; Tran, Trinh Luong; Trung, Lam Tu; Tam, Nguyen Thanh; Tuan, Tran; Buoi, La Thi; Ha, Tran Thu; Thach, Tran Duc; Gaboury, Mario; Acierno, Ron
2011-02-01
The purpose of the present study was to estimate the prevalence of probable mental health problems in an epidemiologic study of Vietnamese adolescents. A secondary aim was to examine the correlates of probable mental health caseness. Interviewers visited 1,914 households that were randomly selected to participate in a multi-agency study of mental health in select provinces of Vietnam. Semi-structured interviews assessed adolescent mental health problems using the Strengths and Difficulties Questionnaire (SDQ) parent informant version, and additionally the interviewers collected information on demographic variables (age, gender, ethnic group, religious affiliation, social capital). The final sample included data on 1,368 adolescents (aged 11-18 years). The average score on the total problem composite of the SDQ scale was 6.66 (SD=4.89), and 9.1% of the sample was considered a case (n=124). Bivariate analyses were conducted to determine which demographic variables were related to the SDQ case/non-case score. All variables except gender were significant in bivariate analyses, and therefore were entered into a logistic regression. Results indicated that age, religion, and wealth remained significant predictors of probable caseness. Overall, prevalence estimates of mental health problems generated by the SDQ were consistent with those reported in the US and other Western and non-Western samples. Results of the current study suggest some concordance of risk and protective factors between Western and Vietnamese youth (i.e., age and SES).
Fatigue Reliability under Random Loads
DEFF Research Database (Denmark)
Talreja, R.
1979-01-01
We consider the problem of estimating the probability of survival (non-failure) and the probability of safe operation (strength greater than a limiting value) of structures subjected to random loads. These probabilities are formulated in terms of the probability distributions of the loads and the… propagation stage. The consequences of this behaviour on the fatigue reliability are discussed…
Concepts of microdosimetry II. Probability distributions of the microdosimetric variables.
Kellerer, A M; Chmelevsky, D
1975-10-02
This is the second part of an investigation of microdosimetric concepts relevant to numerical calculations. Two different types of distributions of the microdosimetric quantities are discussed. The sampling procedures are considered that lead from the initial pattern of energy transfers, the so-called inchoate distribution, to the distributions of specific energy and their mean values. The dependence of the distributions of specific energy on absorbed dose is related to the sampling procedures.
Directory of Open Access Journals (Sweden)
Coll-de-Tuero Gabriel
2012-08-01
Background: Kidney disease is associated with increased total mortality and cardiovascular morbimortality in the general population and in patients with type 2 diabetes. The aim of this study is to determine the prevalence of kidney disease and of different types of renal disease in patients with type 2 diabetes (T2DM). Methods: Cross-sectional study in a random sample of 2,642 T2DM patients cared for in primary care during 2007. Studied variables: demographic and clinical characteristics, pharmacological treatments and T2DM complications (diabetic foot, retinopathy, coronary heart disease and stroke). Variables of renal function were defined as follows: 1) Microalbuminuria: albumin excretion rate ≥ 30 mg/g or 3.5 mg/mmol; 2) Macroalbuminuria: albumin excretion rate ≥ 300 mg/g or 35 mg/mmol; 3) Kidney disease (KD): glomerular filtration rate according to Modification of Diet in Renal Disease 2 and/or the presence of albuminuria; 4) Renal impairment (RI): glomerular filtration rate 2; 5) Nonalbuminuric RI: glomerular filtration rate 2 without albuminuria; and 6) Diabetic nephropathy (DN): macroalbuminuria, or microalbuminuria plus diabetic retinopathy. Results: The prevalence of the different types of renal disease was: 34.1% KD, 22.9% RI, 19.5% albuminuria and 16.4% diabetic nephropathy (DN). The prevalence of albuminuria without RI (13.5%) and of nonalbuminuric RI (14.7%) was similar. After adjusting for age, BMI, cholesterol, blood pressure and macrovascular disease, RI was significantly associated with female gender (OR 2.20; 95% CI 1.86–2.59), microvascular disease (OR 2.14; 95% CI 1.8–2.54) and insulin treatment (OR 1.82; 95% CI 1.39–2.38), and inversely associated with HbA1c (OR 0.85 for every 1% increase; 95% CI 0.80–0.91). Albuminuria without RI was inversely associated with female gender (OR 0.27; 95% CI 0.21–0.35) and duration of diabetes (OR 0.94 per year; 95% CI 0.91–0.97), and directly associated with HbA1c (OR 1.19 for every
Directory of Open Access Journals (Sweden)
Jonna C Sandberg
Whole grain has shown potential to prevent obesity, cardiovascular disease and type 2 diabetes. A possible mechanism could be related to colonic fermentation of specific indigestible carbohydrates, i.e. dietary fiber (DF). The aim of this study was to investigate effects on cardiometabolic risk factors and appetite regulation the next day when ingesting rye kernel bread rich in DF as an evening meal. Whole grain rye kernel test bread (RKB) or a white wheat flour based bread (reference product, WWB) was provided as a late evening meal to healthy young adults in a randomized cross-over design. The test products RKB and WWB were provided in two priming settings: as a single evening meal or as three consecutive evening meals prior to the experimental days. Test variables were measured in the morning, 10.5-13.5 hours after ingestion of RKB or WWB. The postprandial phase was analyzed for measures of glucose metabolism, inflammatory markers, appetite regulating hormones and short chain fatty acids (SCFA) in blood, hydrogen excretion in breath and subjective appetite ratings. With the exception of serum CRP, no significant differences in test variables were observed depending on length of priming (P>0.05). The RKB evening meal increased plasma concentrations of PYY (0-120 min, P<0.001), GLP-1 (0-90 min, P<0.05) and fasting SCFA (acetate and butyrate, P<0.05; propionate, P = 0.05), compared to WWB. Moreover, RKB decreased the blood glucose (0-120 min, P = 0.001) and serum insulin responses (0-120 min, P<0.05) and fasting FFA concentrations (P<0.05). Additionally, RKB improved subjective appetite ratings during the whole experimental period (P<0.05) and increased breath hydrogen excretion (P<0.001), indicating increased colonic fermentation activity. The results indicate that the RKB evening meal has an anti-diabetic potential and that the increased release of satiety hormones and improvements in appetite sensation could be beneficial in preventing obesity. These effects could
Soares-Caldeira, Lúcio F; de Souza, Eberton A; de Freitas, Victor H; de Moraes, Solange M F; Leicht, Anthony S; Nakamura, Fábio Y
2014-10-01
The aim of this study was to investigate whether supplementing regular preseason futsal training with weekly sessions of repeated sprint (RS) training would have positive effects on repeated sprint ability (RSA) and field test performance. Thirteen players from a professional futsal team (22.6 ± 6.7 years, 72.8 ± 8.7 kg, 173.2 ± 6.2 cm) were divided randomly into 2 groups (AddT: n = 6 and normal training group: n = 7). Both groups performed a RSA test, Yo-Yo intermittent recovery test level 1 (YoYo IR1), squat (SJ) and countermovement jumps (CMJ), body composition, and heart rate variability (HRV) measures at rest before and after 4 weeks of preseason training. Athletes' weekly stress symptoms and well-being were recorded by psychometric responses using the Daily Analysis of Life Demands for Athletes questionnaire and a subjective well-being rating scale, respectively. The daily training load (arbitrary units) was assessed using the session rating of perceived exertion method. After the preseason training, there were no significant changes in body composition, SJ, CMJ, or RSAbest. The YoYo IR1, RSAmean, RSAworst, and RSAdecrement were significantly improved in both groups (p ≤ 0.05). The HRV parameters improved significantly within both groups (p ≤ 0.05) except for high frequency (HF, absolute and normalized units [n.u.]), low frequency (LF) (n.u.), and the LF/HF ratio. A moderate effect size for the AddT group was observed for resting heart rate and several HRV measures. Training load and psychometric responses were similar between both groups. Additional RS training resulted in slightly greater positive changes in vagal-related HRV, with similar improvements in performance and training stress during the preseason training in futsal players.
Redwine, Laura S; Henry, Brook L; Pung, Meredith A; Wilson, Kathleen; Chinh, Kelly; Knight, Brian; Jain, Shamini; Rutledge, Thomas; Greenberg, Barry; Maisel, Alan; Mills, Paul J
2016-01-01
Stage B, asymptomatic heart failure (HF) presents a therapeutic window for attenuating disease progression and development of HF symptoms, and improving quality of life. Gratitude, the practice of appreciating positive life features, is highly related to quality of life, leading to the development of promising clinical interventions. However, few gratitude studies have investigated objective measures of physical health; most relied on self-report measures. We conducted a pilot study in Stage B HF patients to examine whether gratitude journaling improved biomarkers related to HF prognosis. Patients (n = 70; mean [standard deviation] age = 66.2 [7.6] years) were randomized to an 8-week gratitude journaling intervention or treatment as usual. Baseline (T1) assessments included the six-item Gratitude Questionnaire, resting heart rate variability (HRV), and an inflammatory biomarker index. At T2 (mid-intervention), the six-item Gratitude Questionnaire was measured. At T3 (post-intervention), T1 measures were repeated but also included a gratitude journaling task. The gratitude intervention was associated with improved trait gratitude scores (F = 6.0, p = .017, η² = 0.10), reduced inflammatory biomarker index score over time (F = 9.7, p = .004, η² = 0.21), and increased parasympathetic HRV responses during the gratitude journaling task (F = 4.2, p = .036, η² = 0.15), compared with treatment as usual. However, there were no resting pre-intervention to post-intervention group differences in HRV (p values > .10). Gratitude journaling may improve biomarkers related to HF morbidity, such as reduced inflammation; large-scale studies with active control conditions are needed to confirm these findings. Clinicaltrials.gov identifier: NCT01615094.
Probability shapes perceptual precision: A study in orientation estimation.
Jabar, Syaheed B; Anderson, Britt
2015-12-01
Probability is known to affect perceptual estimations, but an understanding of the mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings for which we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varied as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention." (c) 2015 APA, all rights reserved.
Training Teachers to Teach Probability
Batanero, Carmen; Godino, Juan D.; Roa, Rafael
2004-01-01
In this paper we analyze the reasons why the teaching of probability is difficult for mathematics teachers, describe the contents needed in the didactical preparation of teachers to teach probability and analyze some examples of activities to carry out this training. These activities take into account the experience at the University of Granada,…
Expected utility with lower probabilities
DEFF Research Database (Denmark)
Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte
1994-01-01
An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...
Probability in biology: overview of a comprehensive theory of probability in living systems.
Nakajima, Toshiyuki
2013-09-01
Probability is closely related to biological organization and adaptation to the environment. Living systems need to maintain their organizational order by producing specific internal events non-randomly, and must cope with uncertain environments. These processes involve increases in the probability of favorable events for these systems by reducing the degree of uncertainty of events. Systems with this ability will survive and reproduce more than those that have less of it. Probabilistic phenomena have been deeply explored using the mathematical theory of probability since Kolmogorov's axiomatization provided mathematical consistency for the theory. However, the interpretation of the concept of probability remains both unresolved and controversial, which creates problems when the mathematical theory is applied to problems in real systems. In this article, recent advances in the study of the foundations of probability from a biological viewpoint are reviewed, and a new perspective is discussed toward a comprehensive theory of probability for understanding the organization and adaptation of living systems. Copyright © 2013 Elsevier Ltd. All rights reserved.
Probability and statistics for particle physics
Mana, Carlos
2017-01-01
This book comprehensively presents the basic concepts of probability and Bayesian inference with sufficient generality to make them applicable to current problems in scientific research. The first chapter provides the fundamentals of probability theory that are essential for the analysis of random phenomena. The second chapter includes a full and pragmatic review of the Bayesian methods that constitute a natural and coherent framework, with enough freedom to analyze all the information available from experimental data in a conceptually simple manner. The third chapter presents the basic Monte Carlo techniques used in scientific research, which allow a large variety of problems that are difficult to tackle by other procedures to be handled. The author also introduces a basic algorithm, which enables readers to simulate samples from simple distributions, and describes useful cases for researchers in particle physics. The final chapter is devoted to the basic ideas of Information Theory, which are important in the Bayesian me…
Rethinking the learning of belief network probabilities
Energy Technology Data Exchange (ETDEWEB)
Musick, R.
1996-03-01
Belief networks are a powerful tool for knowledge discovery that provide concise, understandable probabilistic models of data. There are methods grounded in probability theory to incrementally update the relationships described by the belief network when new information is seen, to perform complex inferences over any set of variables in the data, to incorporate domain expertise and prior knowledge into the model, and to automatically learn the model from data. This paper concentrates on part of the belief network induction problem, that of learning the quantitative structure (the conditional probabilities), given the qualitative structure. In particular, the current practice of rote learning the probabilities in belief networks can be significantly improved upon. We advance the idea of applying any learning algorithm to the task of conditional probability learning in belief networks, discuss potential benefits, and show results of applying neural networks and other algorithms to a medium sized car insurance belief network. The results demonstrate from 10 to 100% improvements in model error rates over the current approaches.
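As a hedged illustration of the baseline the abstract calls "rote learning" — estimating a belief network's conditional probabilities by simple frequency counting — the following sketch builds a one-parent conditional probability table from data, with an optional Laplace pseudo-count standing in for the kind of smoothing alternative the paper argues can improve on pure counting. The toy car-insurance-style data and the `rote_cpt` helper are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

def rote_cpt(data, alpha=0.0):
    """Estimate P(child | parent) by frequency counting ("rote learning").

    data:  list of (parent_state, child_state) pairs.
    alpha: Laplace pseudo-count; alpha=0 is pure rote counting.
    """
    child_states = sorted({c for _, c in data})
    pair_counts = Counter(data)
    parent_counts = Counter(p for p, _ in data)
    return {
        p: {c: (pair_counts[(p, c)] + alpha) / (n_p + alpha * len(child_states))
            for c in child_states}
        for p, n_p in parent_counts.items()
    }

# Hypothetical car-insurance-style data: parent = age band, child = claim filed.
data = [("young", "Y"), ("young", "Y"), ("young", "N"),
        ("old", "N"), ("old", "N"), ("old", "Y"), ("old", "N")]
cpt = rote_cpt(data)
# P(claim = Y | young) = 2/3; P(claim = Y | old) = 1/4
```

With sparse data the pure counts are noisy or undefined for unseen parent states, which is one reason learned or smoothed estimators can beat rote counting.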
A philosophical essay on probabilities
Laplace, Marquis de
1996-01-01
A classic of science, this famous essay by “the Newton of France” introduces lay readers to the concepts and uses of probability theory. It is of especial interest today as an application of mathematical techniques to problems in social and biological sciences. Generally recognized as the founder of the modern phase of probability theory, Laplace here applies the principles and general results of his theory “to the most important questions of life, which are, in effect, for the most part, problems in probability.” Thus, without the use of higher mathematics, he demonstrates the application
Introduction to probability and measure
Parthasarathy, K R
2005-01-01
According to a remark attributed to Mark Kac 'Probability Theory is a measure theory with a soul'. This book with its choice of proofs, remarks, examples and exercises has been prepared taking both these aesthetic and practical aspects into account.
Properties and simulation of α-permanental random fields
DEFF Research Database (Denmark)
Møller, Jesper; Rubak, Ege Holger
An α-permanental random field is, briefly speaking, a model for a collection of random variables with positive associations, where α is a positive number and the probability generating function is given in terms of a covariance or more general function so that density and moment expressions are giv…, and second to study stochastic constructions and simulation techniques, which should provide a useful basis for discussing the statistical aspects in future work. The paper also discusses some examples of α-permanental random fields.
Considerations on a posteriori probability
Directory of Open Access Journals (Sweden)
Corrado Gini
2015-06-01
In this first paper of 1911 on the sex ratio at birth, Gini recast Laplace's succession rule in a Bayesian version. Gini's intuition consisted in assuming a Beta-type distribution for the prior probability and in introducing the "method of results (direct and indirect)" for the determination of prior probabilities from the statistical frequencies obtained from statistical data.
DECOFF Probabilities of Failed Operations
DEFF Research Database (Denmark)
Gintautas, Tomas
A statistical procedure for the estimation of Probabilities of Failed Operations is described and exemplified using ECMWF weather forecasts and SIMO output from Rotor Lift test case models. The influence of the safety factor is also investigated. The DECOFF statistical method is benchmarked against the standard Alpha-factor method defined by (DNV, 2011) and model performance is evaluated. Also, the effects that weather forecast uncertainty has on the output Probabilities of Failure are analysed and reported.
Hewett, Zoe L; Pumpa, Kate L; Smith, Caroline A; Fahey, Paul P; Cheema, Birinder S
2017-04-21
Chronic activation of the stress-response can contribute to cardiovascular disease risk, particularly in sedentary individuals. This study investigated the effect of a Bikram yoga intervention on the high frequency power component of heart rate variability (HRV) and associated cardiovascular disease (CVD) risk factors (i.e. additional domains of HRV, hemodynamic, hematologic, anthropometric and body composition outcome measures) in stressed and sedentary adults. Eligible adults were randomized to an experimental group (n = 29) or a no treatment control group (n = 34). Experimental group participants were instructed to attend three to five supervised Bikram yoga classes per week for 16 weeks at local studios. Outcome measures were assessed at baseline (week 0) and completion (week 17). Sixty-three adults (37.2 ± 10.8 years, 79% women) were included in the intention-to-treat analysis. The experimental group attended 27 ± 18 classes. Analyses of covariance revealed no significant change in the high-frequency component of HRV (p = 0.912, partial η² = 0.000) or in any secondary outcome measure between groups over time. However, regression analyses revealed that higher attendance in the experimental group was associated with significant reductions in diastolic blood pressure (p = 0.039; partial η² = 0.154), body fat percentage (p = 0.001, partial η² = 0.379), fat mass (p = 0.003, partial η² = 0.294) and body mass index (p = 0.05, partial η² = 0.139). A 16-week Bikram yoga program did not improve the high frequency power component of HRV or any other CVD risk factor investigated. As revealed by post hoc analyses, low adherence likely contributed to the null effects. Future studies are required to address barriers to adherence to better elucidate the dose-response effects of Bikram yoga practice as a medium to lower stress-related CVD risk. Retrospectively registered with Australia New Zealand Clinical Trials Registry ACTRN
Transition Probabilities of Gd I
Bilty, Katherine; Lawler, J. E.; Den Hartog, E. A.
2011-01-01
Rare earth transition probabilities are needed within the astrophysics community to determine rare earth abundances in stellar photospheres. The current work is part of an on-going study of rare earth element neutrals. Transition probabilities are determined by combining radiative lifetimes, measured using time-resolved laser-induced fluorescence on a slow atom beam, with branching fractions measured from high resolution Fourier transform spectra. Neutral rare earth transition probabilities will be helpful in improving abundances in cool stars in which a significant fraction of rare earths are neutral. Transition probabilities are also needed for research and development in the lighting industry. Rare earths have rich spectra containing hundreds to thousands of transitions throughout the visible and near UV. This makes rare earths valuable additives in Metal Halide - High Intensity Discharge (MH-HID) lamps, giving them a pleasing white light with good color rendering. This poster presents the work done on neutral gadolinium. We will report radiative lifetimes for 135 levels and transition probabilities for upwards of 1500 lines of Gd I. The lifetimes are reported to ±5%, and the transition probabilities range in accuracy from 5% for strong lines to 25% for weak lines. This work is supported by the National Science Foundation under grant CTS 0613277 and the National Science Foundation's REU program through NSF Award AST-1004881.
Energy Technology Data Exchange (ETDEWEB)
Dotsenko, Viktor S [Landau Institute for Theoretical Physics, Russian Academy of Sciences, Moscow (Russian Federation)
2011-03-31
In the last two decades, it has been established that a single universal probability distribution function, known as the Tracy-Widom (TW) distribution, in many cases provides a macroscopic-level description of the statistical properties of microscopically different systems, including both purely mathematical ones, such as increasing subsequences in random permutations, and quite physical ones, such as directed polymers in random media or polynuclear crystal growth. In the first part of this review, we use a number of models to examine this phenomenon at a simple qualitative level and then consider the exact solution for one-dimensional directed polymers in a random environment, showing that free energy fluctuations in such a system are described by the universal TW distribution. The second part provides detailed appendix material containing the necessary mathematical background for the first part. (reviews of topical problems)
Establishment probability in newly founded populations
Directory of Open Access Journals (Sweden)
Gusset Markus
2012-06-01
Background: Establishment success in newly founded populations relies on reaching the established phase, which is defined by characteristic fluctuations of the population's state variables. Stochastic population models can be used to quantify the establishment probability of newly founded populations; however, so far no simple but robust method for doing so existed. To determine a critical initial number of individuals that need to be released to reach the established phase, we used a novel application of the "Wissel plot", where –ln(1 – P0(t)) is plotted against time t. This plot is based on the equation P0(t) = 1 – c1·e^(–ω1·t), which relates the probability of extinction by time t, P0(t), to two constants: c1 describes the probability of a newly founded population reaching the established phase, whereas ω1 describes the population's probability of extinction per short time interval once established. Results: For illustration, we applied the method to a previously developed stochastic population model of the endangered African wild dog (Lycaon pictus). A newly founded population reaches the established phase if the intercept of the (extrapolated) linear part of the "Wissel plot" with the y-axis, which is –ln(c1), is negative. For wild dogs in our model, this is the case if a critical initial number of four packs, consisting of eight individuals each, is released. Conclusions: The method we present to quantify the establishment probability of newly founded populations is generic, and inferences thus are transferable to other systems across the field of conservation biology. In contrast to other methods, our approach disaggregates the components of a population's viability by distinguishing establishment from persistence.
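The "Wissel plot" construction can be sketched numerically: since P0(t) = 1 − c1·e^(−ω1·t) implies −ln(1 − P0(t)) = −ln(c1) + ω1·t, a straight-line fit to the transformed extinction probabilities recovers both constants, and the sign of the intercept −ln(c1) gives the establishment criterion. The sketch below uses hypothetical constants (c1 = 0.8, ω1 = 0.05, not taken from the wild dog model) and exact model-generated data:

```python
import math

def wissel_fit(times, p0):
    """Least-squares fit of -ln(1 - P0(t)) = -ln(c1) + omega1*t; returns (c1, omega1)."""
    ys = [-math.log(1.0 - p) for p in p0]
    n = len(times)
    tbar, ybar = sum(times) / n, sum(ys) / n
    omega1 = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
              / sum((t - tbar) ** 2 for t in times))
    intercept = ybar - omega1 * tbar          # equals -ln(c1)
    return math.exp(-intercept), omega1

# Hypothetical constants (not from the wild dog model): c1 = 0.8, omega1 = 0.05.
c1_true, w1_true = 0.8, 0.05
ts = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
p0s = [1.0 - c1_true * math.exp(-w1_true * t) for t in ts]
c1, w1 = wissel_fit(ts, p0s)
# Recovers c1 and omega1; here the intercept -ln(c1) is positive, so this
# hypothetical release would not reach the established phase by the criterion above.
```

In practice P0(t) would come from replicate runs of the stochastic population model, and only the linear tail of the plot would be used for the extrapolated fit.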
Probability based calibration of pressure coefficients
DEFF Research Database (Denmark)
Hansen, Svend Ole; Pedersen, Marie Louise; Sørensen, John Dalsgaard
2015-01-01
…not depend on the type of variable action. A probability based calibration of pressure coefficients has been carried out using pressure measurements on the standard CAARC building modelled at a scale of 1:383. The extreme pressures measured on the CAARC building model in the wind tunnel have been fitted to Gumbel distributions, and these fits are found to represent the measured data with good accuracy. The pressure distributions found have been used in a calibration of partial factors, which should achieve a certain theoretical target reliability index. For a target annual reliability index of 4…
Sampling Random Bioinformatics Puzzles using Adaptive Probability Distributions
DEFF Research Database (Denmark)
Have, Christian Theil; Appel, Emil Vincent; Bork-Jensen, Jette
2016-01-01
We present a probabilistic logic program to generate an educational puzzle that introduces the basic principles of next generation sequencing, gene finding and the translation of genes to proteins following the central dogma in biology. In the puzzle, a secret "protein word" must be found by asse...
Covariate-adjusted Spearman's rank correlation with probability-scale residuals.
Liu, Qi; Li, Chun; Wanga, Valentine; Shepherd, Bryan E
2017-11-13
It is desirable to adjust Spearman's rank correlation for covariates, yet existing approaches have limitations. For example, the traditionally defined partial Spearman's correlation does not have a sensible population parameter, and the conditional Spearman's correlation defined with copulas cannot be easily generalized to discrete variables. We define population parameters for both partial and conditional Spearman's correlation through concordance-discordance probabilities. The definitions are natural extensions of Spearman's rank correlation in the presence of covariates and are general for any orderable random variables. We show that they can be neatly expressed using probability-scale residuals (PSRs). This connection allows us to derive simple estimators. Our partial estimator for Spearman's correlation between X and Y adjusted for Z is the correlation of PSRs from models of X on Z and of Y on Z, which is analogous to the partial Pearson's correlation derived as the correlation of observed-minus-expected residuals. Our conditional estimator is the conditional correlation of PSRs. We describe estimation and inference, and highlight the use of semiparametric cumulative probability models, which allow preservation of the rank-based nature of Spearman's correlation. We conduct simulations to evaluate the performance of our estimators and compare them with other popular measures of association, demonstrating their robustness and efficiency. We illustrate our method in two applications, a biomarker study and a large survey. © 2017, The International Biometric Society.
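A minimal sketch of the partial estimator described above — the correlation of probability-scale residuals (PSRs) from models of X on Z and of Y on Z — assuming, for illustration only, normal linear working models, so that each PSR reduces to 2Φ(r/σ̂) − 1 = erf(r/(σ̂√2)). The paper's semiparametric cumulative probability models are not reproduced here:

```python
import math

def ols_residuals(y, z):
    # One-covariate least-squares fit of y on z; returns residuals.
    n = len(y)
    zbar, ybar = sum(z) / n, sum(y) / n
    b = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
         / sum((zi - zbar) ** 2 for zi in z))
    a = ybar - b * zbar
    return [yi - (a + b * zi) for zi, yi in zip(z, y)]

def psr(resid):
    # Probability-scale residuals under a normal working model:
    # 2*Phi(r/sigma) - 1 = erf(r / (sigma*sqrt(2))), each in (-1, 1).
    sigma = math.sqrt(sum(r * r for r in resid) / len(resid))
    return [math.erf(r / (sigma * math.sqrt(2.0))) for r in resid]

def pearson(u, v):
    n = len(u)
    ubar, vbar = sum(u) / n, sum(v) / n
    num = sum((a - ubar) * (b - vbar) for a, b in zip(u, v))
    den = math.sqrt(sum((a - ubar) ** 2 for a in u)
                    * sum((b - vbar) ** 2 for b in v))
    return num / den

def partial_spearman(x, y, z):
    # Correlation of the PSRs from the model of X on Z and the model of Y on Z.
    return pearson(psr(ols_residuals(x, z)), psr(ols_residuals(y, z)))

# Deterministic toy data: X and Y share variation beyond Z, so the
# covariate-adjusted association should be (near) perfect.
z = [float(i) for i in range(20)]
e = [math.sin(i) for i in range(20)]
x = [zi + ei for zi, ei in zip(z, e)]
y = [2.0 * zi + ei for zi, ei in zip(z, e)]
rho = partial_spearman(x, y, z)
```

Because the shared component e is identical in X and Y, both residual vectors coincide and the adjusted correlation is essentially 1, while the raw association between X and Y is dominated by Z.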
A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data
Babuška, Ivo
2010-01-01
This work proposes and analyzes a stochastic collocation method for solving elliptic partial differential equations with random coefficients and forcing terms. These input data are assumed to depend on a finite number of random variables. The method consists of a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space, and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It treats easily a wide range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the “probability error” with respect to the number of Gauss points in each direction of the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Finally, we include a section with developments posterior to the original publication of this work. There we review sparse grid stochastic collocation methods, which are effective collocation strategies for problems that depend on a moderately large number of random variables.
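The collocation idea — solve an uncoupled deterministic problem at each Gauss point of the probability space, then combine the solutions with quadrature weights — can be sketched on a toy one-dimensional problem with a single uniform random variable. The closed-form "solver" and the parameter values below are illustrative assumptions, not the paper's test cases:

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1] (standard tabulated values).
NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
         0.5384693101056831, 0.9061798459386640]
WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
           0.4786286704993665, 0.2369268850561891]

def solve_pde(y):
    # Toy deterministic problem for a given realization y of the random input:
    # -(a u')' = 1 on (0,1), u(0) = u(1) = 0, with constant a(y) = 1 + 0.5*y.
    # Closed form: u(x) = x(1-x)/(2a); quantity of interest is u(1/2) = 1/(8a).
    a = 1.0 + 0.5 * y
    return 1.0 / (8.0 * a)

# Collocation in probability space: one uncoupled solve per Gauss point,
# weighted by the quadrature rule (density of Y ~ Uniform(-1, 1) is 1/2).
expected_qoi = 0.5 * sum(w * solve_pde(y) for y, w in zip(NODES, WEIGHTS))
exact = math.log(3.0) / 8.0   # analytic value of E[1/(8(1 + Y/2))]
```

Because the quantity of interest is analytic in y, the five-point rule already matches the exact expectation to roughly single-precision accuracy, illustrating the exponential convergence in the number of Gauss points.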
Theory of overdispersion in counting statistics caused by fluctuating probabilities
Energy Technology Data Exchange (ETDEWEB)
Semkow, Thomas M. E-mail: semkow@wadsworth.org
1999-11-01
It is shown that the random Lexis fluctuations of probabilities such as probability of decay or detection cause the counting statistics to be overdispersed with respect to the classical binomial, Poisson, or Gaussian distributions. The generating and the distribution functions for the overdispersed counting statistics are derived. Applications to radioactive decay with detection and more complex experiments are given, as well as distinguishing between the source and background, in the presence of overdispersion. Monte-Carlo verifications are provided.
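A Monte Carlo check in the spirit described above can be sketched as follows: letting the success probability fluctuate as p ~ Beta(a, b) between experiments makes the counts beta-binomial, whose variance n·p̄·q̄ + n(n−1)·Var(p) exceeds the plain binomial value n·p̄·q̄. The specific parameters are illustrative assumptions, not those of the paper:

```python
import random
import statistics

random.seed(1)

def overdispersed_counts(n_trials, reps, a, b):
    # Each experiment first draws a fluctuating success probability
    # p ~ Beta(a, b) (the Lexis fluctuation), then counts n_trials successes.
    counts = []
    for _ in range(reps):
        p = random.betavariate(a, b)
        counts.append(sum(random.random() < p for _ in range(n_trials)))
    return counts

n, reps = 20, 20000
counts = overdispersed_counts(n, reps, a=2, b=2)   # mean p = 0.5, Var(p) = 0.05
binomial_var = n * 0.5 * 0.5                        # = 5.0 for a fixed p
sample_var = statistics.variance(counts)
# Theory: Var = n*p*q + n*(n-1)*Var(p) = 5 + 380*0.05 = 24 here, so the
# sample variance greatly exceeds the plain binomial value.
```

The mean of the counts stays at n·E[p] = 10; only the spread is inflated, which is exactly the overdispersion signature relative to the classical binomial model.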
Biagini, Francesca
2016-01-01
This book provides an introduction to elementary probability and to Bayesian statistics using de Finetti's subjectivist approach. One of the features of this approach is that it does not require the introduction of sample space – a non-intrinsic concept that makes the treatment of elementary probability unnecessarily complicated – but introduces as fundamental the concept of random numbers directly related to their interpretation in applications. Events become a particular case of random numbers and probability a particular case of expectation when it is applied to events. The subjective evaluation of expectation and of conditional expectation is based on an economic choice of an acceptable bet or penalty. The properties of expectation and conditional expectation are derived by applying a coherence criterion that the evaluation has to follow. The book is suitable for all introductory courses in probability and statistics for students in Mathematics, Informatics, Engineering, and Physics.
On estimating probability of presence from use-availability or presence-background data.
Phillips, Steven J; Elith, Jane
2013-06-01
A fundamental ecological modeling task is to estimate the probability that a species is present in (or uses) a site, conditional on environmental variables. For many species, available data consist of "presence" data (locations where the species [or evidence of it] has been observed), together with "background" data, a random sample of available environmental conditions. Recently published papers disagree on whether probability of presence is identifiable from such presence-background data alone. This paper aims to resolve the disagreement, demonstrating that additional information is required. We defined seven simulated species representing various simple shapes of response to environmental variables (constant, linear, convex, unimodal, S-shaped) and ran five logistic model-fitting methods using 1000 presence samples and 10 000 background samples; the simulations were repeated 100 times. The experiment revealed a stark contrast between two groups of methods: those based on a strong assumption that species' true probability of presence exactly matches a given parametric form had highly variable predictions and much larger RMS error than methods that take population prevalence (the fraction of sites in which the species is present) as an additional parameter. For six species, the former group grossly under- or overestimated probability of presence. The cause was not model structure or choice of link function, because all methods were logistic with linear and, where necessary, quadratic terms. Rather, the experiment demonstrates that an estimate of prevalence is not just helpful, but is necessary (except in special cases) for identifying probability of presence. We therefore advise against use of methods that rely on the strong assumption, due to Lele and Keim (recently advocated by Royle et al.) and Lancaster and Imbens. The methods are fragile, and their strong assumption is unlikely to be true in practice. We emphasize, however, that we are not arguing against
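The identifiability point can be illustrated with a deliberately trivial sketch (an assumption-laden toy, not the paper's simulation design): for a species whose true probability of presence is constant across sites, an intercept-only presence-versus-background classifier can only return the sampling fraction, so an external estimate of prevalence is needed to recover the true value:

```python
# Toy illustration: a species with a constant true probability of presence.
# The maximum-likelihood intercept-only classifier of "presence" labels
# against "background" labels returns the sample proportion, which depends
# only on the sampling design, not on the species' true prevalence.

def naive_classifier_output(n_presence, n_background):
    # MLE of an intercept-only logistic model on the pooled labeled sample.
    return n_presence / (n_presence + n_background)

# The same design (1000 presences, 10000 background points) could have come
# from a species present at 20% of sites or at 80% of sites; the classifier
# output is 1/11 in either scenario, so probability of presence is not
# identifiable from presence-background data without the prevalence.
out = naive_classifier_output(1000, 10000)
```

Doubling both sample sizes leaves the output unchanged, underscoring that the number reflects the sampling ratio rather than any property of the species.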
Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.
Directory of Open Access Journals (Sweden)
Richard R Stein
2015-07-01
Full Text Available Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.
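As an illustration of the continuous-variable case, the following is a minimal Gaussian sketch (synthetic data and a hypothetical three-variable chain, not the protein examples from the review): for Gaussian maximum-entropy models the coupling matrix is the precision (inverse covariance) matrix, which separates direct couplings from indirect correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chain x1 - x2 - x3: x1 and x3 interact only through x2,
# encoded in the precision (coupling) matrix J_true.
J_true = np.array([[1.0, 0.4, 0.0],
                   [0.4, 1.0, 0.4],
                   [0.0, 0.4, 1.0]])
cov = np.linalg.inv(J_true)
data = rng.multivariate_normal(np.zeros(3), cov, size=50_000)

# Gaussian maximum-entropy inference: invert the empirical covariance.
J_est = np.linalg.inv(np.cov(data, rowvar=False))

# Direct couplings (1-2, 2-3) are recovered; the absent 1-3 coupling is
# near zero even though x1 and x3 are marginally correlated.
print(np.round(J_est, 2))
```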
Energy Technology Data Exchange (ETDEWEB)
Conover, W.J. [Texas Tech Univ., Lubbock, TX (United States); Cox, D.D. [Rice Univ., Houston, TX (United States); Martz, H.F. [Los Alamos National Lab., NM (United States)
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
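A much-simplified version of the beta-prior diagnostic can be sketched as follows (method-of-moments fit on the raw proportions, equiprobable bins, synthetic data; the paper's tests additionally account for the varying binomial sample sizes, which this sketch ignores):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical demand-failure data: 40 components, varying numbers of demands.
n = rng.integers(20, 200, size=40)      # demands per component
p = rng.beta(2.0, 50.0, size=40)        # latent true failure probabilities
x = rng.binomial(n, p)                  # observed failures

# Method-of-moments fit of the beta(a, b) conjugate prior from the p-hats
# (crude: ignores the binomial sampling variance, for illustration only).
phat = x / n
m, v = phat.mean(), phat.var(ddof=1)
a = m * (m * (1 - m) / v - 1)
b = (1 - m) * (m * (1 - m) / v - 1)

# Chi-square goodness of fit: bin the p-hats into 5 equiprobable bins
# under the fitted beta and compare observed to expected counts.
edges = stats.beta.ppf(np.linspace(0, 1, 6), a, b)
observed, _ = np.histogram(phat, bins=edges)
expected = np.full(5, len(phat) / 5)
chi2 = ((observed - expected) ** 2 / expected).sum()
pval = stats.chi2.sf(chi2, df=5 - 1 - 2)   # 2 estimated parameters
print(round(chi2, 2), round(pval, 3))
```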
Short Timescale Variability In The Faint Sky Variability Survey
Morales-Rueda, L.; Groot, P.J.; Augusteijn, T.; Nelemans, G.A.; Vreeswijk, P.M.; Besselaar, E.J.M. van den
2006-01-01
We present the V band variability analysis of the point sources in the Faint Sky Variability Survey on time scales from 24 minutes to tens of days. We find that about one percent of the point sources down to V = 24 are variables. We discuss the variability detection probabilities for each field
Approximation methods in probability theory
Čekanavičius, Vydas
2016-01-01
This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including Esseen-type inversion formulas, the Stein method, and the methods of convolutions and triangle functions. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.
VIBRATION ISOLATION SYSTEM PROBABILITY ANALYSIS
Directory of Open Access Journals (Sweden)
Smirnov Vladimir Alexandrovich
2012-10-01
Full Text Available The article deals with the probability analysis of a vibration isolation system for high-precision equipment, which is extremely sensitive to low-frequency oscillations even of submicron amplitude. External sources of low-frequency vibration include the natural city background and internal low-frequency sources inside buildings (pedestrian activity, HVAC). Assuming a Gaussian distribution, the author estimates the probability that the relative displacement of the isolated mass remains below the vibration criterion. The problem is solved in the three-dimensional space spanned by the system parameters, including damping and natural frequency. From this probability distribution, the chance that a vibration isolation system exceeds the vibration criterion is evaluated. Optimal system parameters - damping and natural frequency - are derived so that the probability of exceeding vibration criteria VC-E and VC-D is less than 0.04.
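For a zero-mean Gaussian response, the exceedance computation described above reduces to a two-sided normal tail probability; a minimal sketch with illustrative numbers (the sigma and displacement limit are hypothetical, not taken from the article):

```python
import math

def prob_exceed(sigma: float, d_max: float) -> float:
    """P(|displacement| > d_max) for a zero-mean Gaussian response
    with RMS value sigma (two-sided normal tail)."""
    phi = 0.5 * (1.0 + math.erf(d_max / (sigma * math.sqrt(2.0))))
    return 2.0 * (1.0 - phi)

# Illustrative check: a limit of 3 um with sigma = 1 um is the 3-sigma tail.
print(round(prob_exceed(1e-6, 3e-6), 4))  # 0.0027
```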
Incompatible Stochastic Processes and Complex Probabilities
Zak, Michail
1997-01-01
The definition of conditional probabilities is based upon the existence of a joint probability. However, a reconstruction of the joint probability from given conditional probabilities imposes certain constraints upon the latter, so that if several conditional probabilities are chosen arbitrarily, the corresponding joint probability may not exist.
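A minimal binary example makes the constraint concrete (the tables are invented for illustration): any joint p(x, y) satisfies p(x|y)p(y) = p(y|x)p(x), which forces the cross-product (odds) ratios of the two conditional tables to agree; conditionals violating that equality admit no joint probability.

```python
import numpy as np

# Candidate conditionals for binary X and Y.
# A[x, y] = p(X=x | Y=y); B[y, x] = p(Y=y | X=x). Columns sum to 1.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
B = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# For any joint J, the cross-product ratio J00*J11/(J01*J10) equals the
# same ratio of either conditional table, so the two must coincide.
odds = lambda M: (M[0, 0] * M[1, 1]) / (M[0, 1] * M[1, 0])
print(round(odds(A), 6), round(odds(B), 6))  # 36.0 vs 1.0: no joint exists
```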
VARIABLE SELECTION FOR CENSORED QUANTILE REGRESSION.
Wang, Huixia Judy; Zhou, Jianhui; Li, Yi
2013-01-01
Quantile regression has emerged as a powerful tool in survival analysis as it directly links the quantiles of patients' survival times to their demographic and genomic profiles, facilitating the identification of important prognostic factors. In view of limited work on variable selection in this context, we develop a new adaptive-lasso-based variable selection procedure for quantile regression with censored outcomes. To account for random censoring for data with multivariate covariates, we employ the ideas of redistribution-of-mass and effective dimension reduction. Asymptotically our procedure enjoys model selection consistency, that is, identifying the true model with probability tending to one. Moreover, as opposed to the existing methods, our new proposal requires fewer assumptions, leading to more accurate variable selection. The analysis of a real cancer clinical trial demonstrates that our procedure can identify and distinguish important factors associated with patient sub-populations characterized by short or long survivals, which is of particular interest to oncologists.
Probability measures on metric spaces
Parthasarathy, K R
2005-01-01
In this book, the author gives a cohesive account of the theory of probability measures on complete metric spaces (which is viewed as an alternative approach to the general theory of stochastic processes). After a general description of the basics of topology on the set of measures, the author discusses regularity, tightness, and perfectness of measures, properties of sampling distributions, and metrizability and compactness theorems. Next, he describes arithmetic properties of probability measures on metric groups and locally compact abelian groups. Covered in detail are notions such as decom
Probability and Statistics: 5 Questions
DEFF Research Database (Denmark)
Probability and Statistics: 5 Questions is a collection of short interviews based on 5 questions presented to some of the most influential and prominent scholars in probability and statistics. We hear their views on the fields, aims, scopes, the future direction of research, and how their work fits in these respects. Interviews with Nick Bingham, Luc Bovens, Terrence L. Fine, Haim Gaifman, Donald Gillies, James Hawthorne, Carl Hoefer, James M. Joyce, Joseph B. Kadane, Isaac Levi, D.H. Mellor, Patrick Suppes, Jan von Plato, Carl Wagner, Sandy Zabell...
Knowledge typology for imprecise probabilities.
Energy Technology Data Exchange (ETDEWEB)
Wilson, G. D. (Gregory D.); Zucker, L. J. (Lauren J.)
2002-01-01
When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.
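As a sketch of one such alternative formalism, here is Dempster's rule of combination for two hypothetical expert assessments over a two-element frame of discernment (the mass assignments are invented for illustration; they allot some belief to the full frame, representing ignorance, which ordinary probabilities cannot express):

```python
from itertools import product

# Two experts' basic mass assignments over the frame {working, failed};
# the pair ('working', 'failed') is the whole frame (i.e., "don't know").
m1 = {('working',): 0.6, ('failed',): 0.1, ('working', 'failed'): 0.3}
m2 = {('working',): 0.4, ('failed',): 0.2, ('working', 'failed'): 0.4}

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalize by conflict."""
    joint, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = tuple(sorted(set(a) & set(b)))
        if inter:
            joint[inter] = joint.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb   # mass on contradictory combinations
    return {k: v / (1.0 - conflict) for k, v in joint.items()}

m = combine(m1, m2)
print({k: round(v, 3) for k, v in m.items()})
```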
Probability, Statistics, and Stochastic Processes
Olofsson, Peter
2011-01-01
A mathematical and intuitive approach to probability, statistics, and stochastic processes This textbook provides a unique, balanced approach to probability, statistics, and stochastic processes. Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area. This text combines a rigorous, calculus-based development of theory with a more intuitive approach that appeals to readers' sense of reason and logic, an approach developed through the author's many years of classroom experience. The text begins with three chapters that d
Probability, statistics, and queueing theory
Allen, Arnold O
1990-01-01
This is a textbook on applied probability and statistics with computer science applications for students at the upper undergraduate level. It may also be used as a self study book for the practicing computer science professional. The successful first edition of this book proved extremely useful to students who need to use probability, statistics and queueing theory to solve problems in other fields, such as engineering, physics, operations research, and management science. The book has also been successfully used for courses in queueing theory for operations research students. This second edit